Explore >> Select a destination


You are here

swethatanamala.github.io
| | futurism.com
2.5 parsecs away

Travel
| | This post was originally written by Manan Shah as a response to a question on Quora.
| | www.aiweirdness.com
5.4 parsecs away

Travel
| | I've recently been experimenting with one of my favorite old-school neural networks, a tiny program that runs on my laptop and knows only about the data I give it. Without internet training, char-rnn doesn't have outside references to draw on (for better or for worse) but it still manages to
| | cpury.github.io
2.4 parsecs away

Travel
| | Learning how to produce Bible-like texts with Recurrent Neural Networks
| | neuralnetworksanddeeplearning.com
12.3 parsecs away

Travel
| [AI summary] The text provides an in-depth explanation of the backpropagation algorithm in neural networks. It starts by discussing the concept of how small changes in weights propagate through the network to affect the final cost, leading to the derivation of the partial derivatives required for gradient descent. The explanation includes a heuristic argument based on tracking the perturbation of weights through the network, resulting in a chain of partial derivatives. The text also touches on the historical context of how backpropagation was discovered, emphasizing the process of simplifying complex proofs and the role of using weighted inputs (z-values) as intermediate variables to streamline the derivation. Finally, it concludes with a citation and licens...