- robotchinwag.com (you are here)
- jingnanshi.com: Tutorial on automatic differentiation
- blog.demofox.org: This article explains how these four things fit together and shows some examples of what they are used for. Derivatives are the most fundamental concept in calculus. If you have a function, a derivative tells you how much that function changes at each point. If we start with the function $y = x^2 - 6x + 13$, we can...
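For reference, a worked step for that example (added here, not part of the linked excerpt):

$$y = x^2 - 6x + 13 \;\Rightarrow\; \frac{dy}{dx} = 2x - 6,$$

so the slope vanishes at $x = 3$, the parabola's minimum, where $y = 4$.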
- marcospereira.me: In this post we summarize the math behind deep learning and implement a simple network that achieves 85% accuracy classifying digits from the MNIST dataset.
- iclr-blogposts.github.io: The product between the Hessian of a function and a vector, the Hessian-vector product (HVP), is a fundamental quantity to study the variation of a function. It is ubiquitous in traditional optimization and machine learning. However, the computation of HVPs is often considered prohibitive in the context of deep learning, driving practitioners to use proxy quantities to evaluate the loss geometry. Standard automatic differentiation theory predicts that the computational complexity of an HVP is of the same order of magnitude as the complexity of computing a gradient. The goal of this blog post is to provide a practical counterpart to this theoretical result, showing that modern automatic differentiation frameworks, JAX and PyTorch, allow for efficient computation...
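As a practical illustration of the point in that excerpt, here is a minimal JAX sketch of a Hessian-vector product via forward-over-reverse differentiation; the test function and array shapes are arbitrary assumptions, not taken from the linked post:

```python
import jax
import jax.numpy as jnp

def f(x):
    # Arbitrary smooth scalar-valued test function standing in for a loss.
    return jnp.sum(jnp.sin(x) ** 2)

def hvp(f, x, v):
    # Forward-over-reverse: push the direction v through the gradient of f.
    # jax.jvp returns (grad_f(x), H(x) @ v); the cost is a small constant
    # multiple of a single gradient evaluation.
    return jax.jvp(jax.grad(f), (x,), (v,))[1]

x = jnp.arange(3.0)   # point at which to probe the curvature
v = jnp.ones(3)       # direction of the Hessian-vector product
print(hvp(f, x, v))
```

The same idea works in PyTorch with its functional autodiff utilities; the key point, as the excerpt notes, is that the HVP never materializes the full Hessian.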