www.eigentales.com

jakevdp.github.io

www.jeremykun.com
Machine learning is broadly split into two camps, statistical learning and non-statistical learning. The latter we've started to get a good picture of on this blog; we approached Perceptrons, decision trees, and neural networks from a non-statistical perspective. And generally "statistical" learning is just that, a perspective. Data is phrased in terms of independent and dependent variables, and statistical techniques are leveraged against the data. In this post we'll focus on the simplest example of this…
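The excerpt frames data as independent and dependent variables with statistical techniques applied to them. The excerpt is truncated, so what "the simplest example" refers to is an assumption here; a minimal sketch of the simplest such technique, ordinary least squares, looks like this:

```python
import numpy as np

# Hedged sketch: fit y = w*x + b by ordinary least squares.
# x is the independent variable, y the dependent one.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=100)
y = 3.0 * x + 0.5 + rng.normal(scale=0.01, size=100)  # small noise

# Design matrix with a column of ones for the intercept term.
X = np.column_stack([x, np.ones_like(x)])
w, b = np.linalg.lstsq(X, y, rcond=None)[0]
```

With the tiny noise level above, the recovered `w` and `b` land very close to the true 3.0 and 0.5.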
robotchinwag.com
Deriving the gradients for the backward pass of matrix multiplication using tensor calculus
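The result that derivation arrives at is the standard one (stated here from textbook matrix calculus, not from the post's own notation): for C = A @ B with upstream gradient dC, the backward pass is dA = dC @ B.T and dB = A.T @ dC. A short sketch with a finite-difference check:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))
B = rng.standard_normal((4, 5))

# Forward: C = A @ B; take loss = C.sum() so the upstream gradient dC is all ones.
dC = np.ones((3, 5))

# Backward pass for matrix multiplication:
dA = dC @ B.T   # shape (3, 4), matches A
dB = A.T @ dC   # shape (4, 5), matches B

# Sanity-check one entry of dA against a central finite difference.
eps = 1e-6
A_plus, A_minus = A.copy(), A.copy()
A_plus[0, 0] += eps
A_minus[0, 0] -= eps
numeric = ((A_plus @ B).sum() - (A_minus @ B).sum()) / (2 * eps)
```

The shapes line up by construction: dA must match A and dB must match B, which is a quick way to remember which side gets transposed.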
adl1995.github.io
[AI summary] The article explains various activation functions used in neural networks, their properties, and applications, including binary step, tanh, ReLU, and softmax functions.
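The four activation functions the summary names can be sketched in a few lines of NumPy (these are the standard definitions, not code taken from the article itself):

```python
import numpy as np

def binary_step(x):
    # 1 where the input is non-negative, 0 elsewhere.
    return np.where(x >= 0, 1.0, 0.0)

def relu(x):
    # Rectified linear unit: max(0, x) elementwise.
    return np.maximum(0.0, x)

def tanh(x):
    # Hyperbolic tangent: squashes input into (-1, 1).
    return np.tanh(x)

def softmax(x):
    # Subtract the max before exponentiating for numerical stability;
    # the output is a probability distribution summing to 1.
    z = np.exp(x - np.max(x))
    return z / z.sum()
```

Binary step and ReLU are applied elementwise to hidden units, while softmax acts on a whole vector of logits, which is why it is typically reserved for the output layer of a classifier.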