bdtechtalks.com
Gradient descent is the main technique for training machine learning and deep learning models. Read all about it.
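For a sense of what the article covers, here is a minimal gradient descent sketch in Python (an illustration only, not code from bdtechtalks.com), minimizing the quadratic $y = x^2 - 6x + 13$ that appears in the next excerpt; the step size and iteration count are arbitrary choices.

```python
# Minimal gradient descent sketch (illustrative only, not from the linked article).
# Minimizes y = x^2 - 6x + 13, whose derivative is dy/dx = 2x - 6,
# so the minimum is at x = 3, where y = 4.

def f(x):
    return x**2 - 6*x + 13

def grad_f(x):
    return 2*x - 6

x = 0.0              # arbitrary starting point
learning_rate = 0.1  # arbitrary step size
for step in range(100):
    x -= learning_rate * grad_f(x)

print(x, f(x))       # converges toward x = 3, y = 4
```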
blog.demofox.org
This article explains how these four things fit together and shows some examples of what they are used for. Derivatives are the most fundamental concept in calculus. If you have a function, a derivative tells you how much that function changes at each point. If we start with the function $y = x^2 - 6x + 13$, we can...
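The excerpt cuts off, but the derivative it is about to compute follows from the power rule (a standard calculus fact, not quoted from the article):

```latex
y = x^2 - 6x + 13
\quad\Longrightarrow\quad
\frac{\mathrm{d}y}{\mathrm{d}x} = 2x - 6,
```

which is zero at $x = 3$, the bottom of the parabola, where $y = 4$.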
robotchinwag.com
Deriving the gradients for the backward pass for matrix multiplication using tensor calculus.
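The standard result such a derivation arrives at can be sketched in NumPy (a hedged illustration; the article's notation and its route via tensor calculus may differ): for $C = AB$ with upstream gradient $\partial L/\partial C$, one gets $\partial L/\partial A = (\partial L/\partial C)\,B^{\mathsf T}$ and $\partial L/\partial B = A^{\mathsf T}\,(\partial L/\partial C)$.

```python
import numpy as np

# Forward pass: C = A @ B, followed by some scalar loss L(C).
# Standard backward-pass result (notation may differ from the article):
#   dL/dA = dL/dC @ B.T
#   dL/dB = A.T @ dL/dC

rng = np.random.default_rng(0)
A = rng.standard_normal((2, 3))
B = rng.standard_normal((3, 4))
C = A @ B

dL_dC = rng.standard_normal(C.shape)  # stand-in for the upstream gradient

dL_dA = dL_dC @ B.T
dL_dB = A.T @ dL_dC

# Sanity check one entry of dL_dA with a finite difference on the
# surrogate loss L = sum(dL_dC * C), which is linear in A.
eps = 1e-6
A_bumped = A.copy()
A_bumped[0, 0] += eps
numeric = ((A_bumped @ B - C) * dL_dC).sum() / eps
print(numeric, dL_dA[0, 0])  # the two numbers should agree closely
```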
theorydish.blog
The chain rule is a fundamental result in calculus. Roughly speaking, it states that if a variable $c$ is a differentiable function of intermediate variables $b_1, \ldots, b_n$, and each intermediate variable $b_i$ is itself a differentiable function of $a$, then we can compute the derivative $\frac{\mathrm{d} c}{\mathrm{d} a}$ as...
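The excerpt is truncated, but the identity it is leading up to is the standard multivariable chain rule (a textbook fact, not a quote from the post):

```latex
\frac{\mathrm{d} c}{\mathrm{d} a}
  = \sum_{i=1}^{n} \frac{\partial c}{\partial b_i}\,\frac{\mathrm{d} b_i}{\mathrm{d} a}.
```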