www.3blue1brown.com (you are here)

www.danieldjohnson.com
Writeup for my first major machine learning project.

michael-lewis.com
This is a short summary of some of the terminology used in machine learning, with an emphasis on neural networks. I've put it together primarily to help my own understanding, phrasing it largely in non-mathematical terms. As such, it may be of use to others who come from more of a programming than a mathematical background.

www.paepper.com
[AI summary] This article explains how to train a simple neural network using NumPy in Python without relying on frameworks like TensorFlow or PyTorch, focusing on the implementation of ReLU activation, weight initialization, and gradient descent for optimization.
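
To make the ingredients that summary lists concrete, here is a minimal sketch in the same spirit: a one-hidden-layer NumPy network with He-style weight initialization, a ReLU hidden layer, and plain gradient-descent updates. It is not code from the article; the toy dataset, variable names, and hyperparameters are all assumptions of mine.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary-classification data: label is 1 when x1 and x2 share a sign.
X = rng.normal(size=(200, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(float).reshape(-1, 1)

# Weight initialization: He-style scaling (sqrt(2 / fan_in)) suits ReLU layers.
W1 = rng.normal(size=(2, 16)) * np.sqrt(2.0 / 2)
b1 = np.zeros((1, 16))
W2 = rng.normal(size=(16, 1)) * np.sqrt(2.0 / 16)
b2 = np.zeros((1, 1))

def relu(z):
    return np.maximum(0.0, z)

lr = 0.1
for step in range(2000):
    # Forward pass: linear -> ReLU -> linear -> sigmoid.
    z1 = X @ W1 + b1
    a1 = relu(z1)
    z2 = a1 @ W2 + b2
    p = 1.0 / (1.0 + np.exp(-z2))

    # Binary cross-entropy loss, averaged over the batch.
    loss = -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))

    # Backward pass: gradients via the chain rule.
    dz2 = (p - y) / len(X)            # dL/dz2 for sigmoid + cross-entropy
    dW2 = a1.T @ dz2
    db2 = dz2.sum(axis=0, keepdims=True)
    da1 = dz2 @ W2.T
    dz1 = da1 * (z1 > 0)              # ReLU derivative: 1 where input > 0
    dW1 = X.T @ dz1
    db1 = dz1.sum(axis=0, keepdims=True)

    # Gradient descent update.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

    if step % 500 == 0:
        print(f"step {step}: loss {loss:.3f}")
```
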
programmathically.com
In this post, we develop an understanding of why gradients can vanish or explode when training deep neural networks. Furthermore, we look at some strategies for avoiding exploding and vanishing gradients. The vanishing gradient problem describes a situation encountered in the training of neural networks where the gradients used to update the weights become vanishingly small as they are propagated backward through the layers, so that the early layers learn very slowly or not at all.
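
As a rough illustration of the effect this post describes (not code from it), the sketch below pushes a gradient backward through a stack of randomly initialized ReLU layers and shows how its norm at the first layer collapses or blows up depending on the weight scale; the function name, depth, and width are my own choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def grad_norm_per_layer(depth, weight_scale, width=64):
    """Run a random input through `depth` ReLU layers, then push a
    gradient back through them, recording its norm at each layer."""
    x = rng.normal(size=(width,))
    layers = []
    for _ in range(depth):
        W = rng.normal(size=(width, width)) * weight_scale
        z = W @ x
        layers.append((W, z))
        x = np.maximum(0.0, z)           # ReLU forward

    grad = rng.normal(size=(width,))     # stand-in for dLoss/d(output)
    norms = []
    for W, z in reversed(layers):
        grad = W.T @ (grad * (z > 0))    # chain rule; ReLU' is 0 or 1
        norms.append(np.linalg.norm(grad))
    return norms                         # ordered from last layer back to first

for scale, label in [(0.05, "vanishing"),
                     (np.sqrt(2 / 64), "stable (He init)"),
                     (0.5, "exploding")]:
    norms = grad_norm_per_layer(depth=20, weight_scale=scale)
    print(f"{label:>16}: gradient norm at first layer = {norms[-1]:.2e}")
```

With He-scaled weights the per-layer multiplier is roughly 1, which is why careful initialization is one of the standard strategies for keeping gradients from vanishing or exploding.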