windowsontheory.org
(Updated and expanded 12/17/2021) I am teaching deep learning this week in Harvard's CS 182 (Artificial Intelligence) course. As I was preparing the back-propagation lecture, Preetum Nakkiran told me about Andrej Karpathy's awesome micrograd package, which implements automatic differentiation for scalar variables in very few lines of code. I couldn't resist using this to show how...
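For a taste of what a micrograd-style package does, here is a minimal sketch of scalar reverse-mode automatic differentiation in the same spirit (my own illustration, not code from the post or from micrograd itself):

```python
class Value:
    """A scalar that remembers how it was computed."""
    def __init__(self, data, children=()):
        self.data = data
        self.grad = 0.0
        self._backward = lambda: None
        self._children = set(children)

    def __add__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        out = Value(self.data + other.data, (self, other))
        def _backward():
            self.grad += out.grad        # d(a+b)/da = 1
            other.grad += out.grad       # d(a+b)/db = 1
        out._backward = _backward
        return out

    def __mul__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        out = Value(self.data * other.data, (self, other))
        def _backward():
            self.grad += other.data * out.grad   # d(a*b)/da = b
            other.grad += self.data * out.grad   # d(a*b)/db = a
        out._backward = _backward
        return out

    def backward(self):
        # Order the graph topologically, then apply the chain rule in reverse.
        order, seen = [], set()
        def visit(v):
            if v not in seen:
                seen.add(v)
                for child in v._children:
                    visit(child)
                order.append(v)
        visit(self)
        self.grad = 1.0
        for v in reversed(order):
            v._backward()

x = Value(3.0)
y = x * x + x            # y = x^2 + x
y.backward()
print(x.grad)            # dy/dx = 2x + 1 = 7.0
```

Each operation records its local chain-rule step as a closure, and backward() replays those steps in reverse topological order; that is the whole trick behind back-propagation for scalars.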
comsci.blog
In this tutorial, we will learn two different methods to implement neural networks from scratch in Python: an extremely simple method, finite differences, and a still very simple method, backpropagation.
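To convey the first method, here is a tiny sketch of finite-difference training (my own toy example with assumed data, step size, and learning rate; not the tutorial's code):

```python
# Fit y = w*x + b to toy data with finite-difference gradient descent.
data = [(1.0, 3.0), (2.0, 5.0), (3.0, 7.0)]    # points on y = 2x + 1

def loss(w, b):
    return sum((w * x + b - y) ** 2 for x, y in data) / len(data)

w, b = 0.0, 0.0
eps, lr = 1e-4, 0.05
for _ in range(2000):
    # Approximate each partial derivative with a symmetric difference quotient.
    dw = (loss(w + eps, b) - loss(w - eps, b)) / (2 * eps)
    db = (loss(w, b + eps) - loss(w, b - eps)) / (2 * eps)
    w -= lr * dw
    b -= lr * db

print(round(w, 3), round(b, 3))    # converges toward w = 2, b = 1
```

The catch is cost: two extra loss evaluations per parameter per step, which is exactly the overhead that backpropagation avoids by computing all partial derivatives in one backward pass.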
dennybritz.com
This is the third part of the Recurrent Neural Network Tutorial.
www.ericekholm.com
Learning maximum likelihood estimation by fitting logistic regression 'by hand' (sort of)
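A minimal Python sketch of the same idea, assuming the 'by hand' part means gradient ascent on the Bernoulli log-likelihood (the simulated data and settings below are mine, not the post's):

```python
import math
import random

# Simulate labelled data from a known logistic model, then recover the
# parameters by maximizing the log-likelihood with gradient ascent.
random.seed(0)
true_w, true_b = 1.5, -0.5
data = []
for _ in range(200):
    x = random.uniform(-3, 3)
    p = 1 / (1 + math.exp(-(true_w * x + true_b)))
    data.append((x, 1 if random.random() < p else 0))

w, b, lr = 0.0, 0.0, 0.1
for _ in range(500):
    # d/dw of sum_i [y_i log p_i + (1 - y_i) log(1 - p_i)] is sum_i (y_i - p_i) x_i.
    gw = gb = 0.0
    for x, y in data:
        p = 1 / (1 + math.exp(-(w * x + b)))
        gw += (y - p) * x
        gb += (y - p)
    w += lr * gw / len(data)
    b += lr * gb / len(data)

print(round(w, 2), round(b, 2))    # estimates should land near 1.5 and -0.5
```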