neuralnetworksanddeeplearning.com
sriku.org
karpathy.github.io
Musings of a Computer Scientist.
windowsontheory.org
(Updated and expanded 12/17/2021) I am teaching deep learning this week in Harvard's CS 182 (Artificial Intelligence) course. As I'm preparing the back-propagation lecture, Preetum Nakkiran told me about Andrej Karpathy's awesome micrograd package, which implements automatic differentiation for scalar variables in very few lines of code. I couldn't resist using this to show how...
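The micrograd-style scalar automatic differentiation mentioned above can be sketched in a few dozen lines. This is a hypothetical simplification for illustration, not micrograd's actual code: a `Value` class records the computation graph as operations are applied, then `backward()` walks the graph in reverse topological order applying the chain rule.

```python
class Value:
    """A scalar that records the computation graph for backpropagation."""

    def __init__(self, data, _children=()):
        self.data = data
        self.grad = 0.0
        self._backward = lambda: None  # closure that propagates grad to children
        self._prev = set(_children)

    def __add__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        out = Value(self.data + other.data, (self, other))

        def _backward():
            # d(a+b)/da = d(a+b)/db = 1, so the gradient passes through unchanged
            self.grad += out.grad
            other.grad += out.grad

        out._backward = _backward
        return out

    def __mul__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        out = Value(self.data * other.data, (self, other))

        def _backward():
            # d(a*b)/da = b and d(a*b)/db = a (product rule for scalars)
            self.grad += other.data * out.grad
            other.grad += self.data * out.grad

        out._backward = _backward
        return out

    def backward(self):
        # Topologically sort the graph, then apply the chain rule in reverse.
        topo, visited = [], set()

        def build(v):
            if v not in visited:
                visited.add(v)
                for child in v._prev:
                    build(child)
                topo.append(v)

        build(self)
        self.grad = 1.0
        for v in reversed(topo):
            v._backward()


# Example: z = x*y + x, so dz/dx = y + 1 and dz/dy = x.
x, y = Value(3.0), Value(2.0)
z = x * y + x
z.backward()
```

With `x = 3` and `y = 2`, `z.data` is 9, and the backward pass yields `x.grad == 3.0` (that is, y + 1) and `y.grad == 3.0` (that is, x). Accumulating with `+=` in each `_backward` is what makes gradients correct when a variable feeds into several operations, as `x` does here.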
pyimagesearch.com
In this tutorial, you will learn what gradient descent is, how gradient descent enables us to train neural networks, variations of gradient descent including Stochastic Gradient Descent (SGD), and how SGD can be improved using momentum and Nesterov acceleration.
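The momentum and Nesterov variants that the tutorial above covers differ only in where the gradient is evaluated: classical momentum uses the current point, while Nesterov momentum looks ahead along the velocity first. A minimal sketch on the toy objective f(w) = w² (the function, step sizes, and step count here are illustrative choices, not the tutorial's):

```python
def grad(w):
    # Gradient of the toy objective f(w) = w**2.
    return 2.0 * w


def sgd(w, steps=200, lr=0.1, mu=0.9, nesterov=False):
    """Minimize f via gradient descent with a momentum (velocity) buffer."""
    v = 0.0
    for _ in range(steps):
        if nesterov:
            # Nesterov: evaluate the gradient at the look-ahead point w + mu*v.
            g = grad(w + mu * v)
        else:
            # Classical momentum: evaluate the gradient at the current point.
            g = grad(w)
        v = mu * v - lr * g   # velocity accumulates an exponentially decaying
        w = w + v             # average of past gradients, then updates w
    return w


w_momentum = sgd(5.0)
w_nesterov = sgd(5.0, nesterov=True)
```

Both runs drive w from 5.0 toward the minimum at 0; setting `mu=0` recovers plain gradient descent. The look-ahead gradient is the only line that changes between the two variants, which is why libraries typically expose Nesterov as a boolean flag on their SGD optimizer.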