sebastianraschka.com
neuralnetworksanddeeplearning.com
[AI summary] The text provides an in-depth explanation of the backpropagation algorithm in neural networks. It starts by discussing how small changes in weights propagate through the network to affect the final cost, leading to the derivation of the partial derivatives required for gradient descent. The explanation includes a heuristic argument based on tracking the perturbation of a weight through the network, resulting in a chain of partial derivatives. The text also touches on the historical context of how backpropagation was discovered, emphasizing the process of simplifying complex proofs and the role of using weighted inputs (z-values) as intermediate variables to streamline the derivation. Finally, it concludes with a citation and licensing note.
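The chain of partial derivatives the summary mentions can be written out explicitly. A minimal sketch in the book's notation (assuming $w^l_{jk}$ is the weight into neuron $j$ of layer $l$ and $a^l_j$ that neuron's activation): a small change $\Delta w^l_{jk}$ perturbs $a^l_j$, and the perturbation reaches the cost $C$ along every path through the later layers, so

$$\frac{\partial C}{\partial w^l_{jk}} = \sum_{mnp\ldots q} \frac{\partial C}{\partial a^L_m}\, \frac{\partial a^L_m}{\partial a^{L-1}_n}\, \frac{\partial a^{L-1}_n}{\partial a^{L-2}_p} \cdots \frac{\partial a^{l+1}_q}{\partial a^l_j}\, \frac{\partial a^l_j}{\partial w^l_{jk}},$$

where the sum runs over all choices of intermediate neurons $m, n, p, \ldots, q$. Backpropagation is then a bookkeeping scheme that evaluates this sum over paths efficiently, layer by layer.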
jxmo.io
A primer on variational autoencoders (VAEs) culminating in a PyTorch implementation of a VAE with discrete latents.
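For a sense of what "discrete latents" involves, here is a minimal sketch, not the post's actual code: it assumes a Gumbel-softmax relaxation (one common way to backpropagate through categorical latents), and the layer sizes and names are made up for illustration.

```python
# Minimal discrete-latent VAE sketch (Gumbel-softmax relaxation).
# Illustrative only; not the linked post's implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DiscreteVAE(nn.Module):
    def __init__(self, x_dim=784, n_latents=20, n_cats=10):
        super().__init__()
        self.n_latents, self.n_cats = n_latents, n_cats
        self.encoder = nn.Sequential(
            nn.Linear(x_dim, 400), nn.ReLU(),
            nn.Linear(400, n_latents * n_cats),  # logits for each categorical latent
        )
        self.decoder = nn.Sequential(
            nn.Linear(n_latents * n_cats, 400), nn.ReLU(),
            nn.Linear(400, x_dim),
        )

    def forward(self, x, tau=1.0):
        logits = self.encoder(x).view(-1, self.n_latents, self.n_cats)
        # Differentiable approximate sample from each categorical latent.
        z = F.gumbel_softmax(logits, tau=tau, hard=False)
        x_logits = self.decoder(z.view(x.size(0), -1))
        return x_logits, logits

def elbo_loss(x, x_logits, logits):
    # Reconstruction term: Bernoulli log-likelihood of the input pixels.
    recon = F.binary_cross_entropy_with_logits(x_logits, x, reduction="sum")
    # KL from q(z|x) to a uniform categorical prior, in closed form:
    # KL(q || uniform) = sum q * (log q + log K).
    q = F.softmax(logits, dim=-1)
    log_q = F.log_softmax(logits, dim=-1)
    kl = (q * (log_q + torch.log(torch.tensor(float(logits.size(-1)))))).sum()
    return recon + kl
```

The temperature `tau` is typically annealed toward zero during training so the relaxed samples approach one-hot vectors.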
dennybritz.com
All the code is also available as a Jupyter notebook on GitHub.
www.aiweirdness.com
I've recently been experimenting with one of my favorite old-school neural networks, a tiny program that runs on my laptop and knows only about the data I give it. Without internet training, char-rnn doesn't have outside references to draw on (for better or for worse) but it still manages to