blog.evjang.com
kevinlynagh.com
teddykoker.com
Gradient-descent-based optimizers have long been the optimization algorithm of choice for deep learning models. Over the years, various modifications to basic mini-batch gradient descent have been proposed, such as adding momentum or Nesterov's Accelerated Gradient (Sutskever et al., 2013), as well as the popular Adam optimizer (Kingma & Ba, 2014). The paper Learning to Learn by Gradient Descent by Gradient Descent (Andrychowicz et al., 2016) demonstrates how the optimizer itself can be replaced...
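The update rules this excerpt names can be written out directly. Below is a minimal NumPy sketch of one step of momentum SGD and one step of Adam; the function names, hyperparameter defaults, and the toy quadratic objective are illustrative assumptions, not taken from the linked post.

```python
import numpy as np

def sgd_momentum_step(w, grad, velocity, lr=0.01, beta=0.9):
    """One step of gradient descent with (heavy-ball) momentum."""
    velocity = beta * velocity - lr * grad  # decaying accumulation of past gradients
    return w + velocity, velocity

def adam_step(w, grad, m, v, t, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    """One step of Adam (Kingma & Ba, 2014)."""
    m = beta1 * m + (1 - beta1) * grad       # first-moment (mean) estimate
    v = beta2 * v + (1 - beta2) * grad ** 2  # second-moment (variance) estimate
    m_hat = m / (1 - beta1 ** t)             # bias correction for zero init
    v_hat = v / (1 - beta2 ** t)
    return w - lr * m_hat / (np.sqrt(v_hat) + eps), m, v

# Toy example: minimize f(w) = ||w||^2, whose gradient is 2w.
w_sgd = np.array([5.0, -3.0])
w_adam = np.array([5.0, -3.0])
velocity = np.zeros(2)
m, v = np.zeros(2), np.zeros(2)
for t in range(1, 201):
    w_sgd, velocity = sgd_momentum_step(w_sgd, 2 * w_sgd, velocity, lr=0.05)
    w_adam, m, v = adam_step(w_adam, 2 * w_adam, m, v, t, lr=0.1)
print(w_sgd, w_adam)  # both approach [0, 0]
```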
algobeans.com
While an artificial neural network could learn to recognize a cat on the left, it would not recognize the same cat if it appeared on the right. To solve this problem, we introduce convolutional neural networks.
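The idea behind this excerpt is that a fully connected layer ties each weight to one pixel position, while a convolutional layer slides one shared kernel over every position, so a shifted input just produces a shifted feature map. A minimal NumPy sketch of that translation equivariance; the toy "blob" image and the edge-detecting kernel are illustrative assumptions:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D cross-correlation: the same kernel is reused at every position."""
    h, w = kernel.shape
    out = np.zeros((image.shape[0] - h + 1, image.shape[1] - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + h, j:j + w] * kernel)
    return out

# A small bright blob on the left, and the same blob shifted to the right.
left = np.zeros((6, 6))
left[2:4, 0:2] = 1.0
right = np.roll(left, 3, axis=1)

edge_kernel = np.array([[1.0, -1.0],
                        [1.0, -1.0]])  # crude vertical-edge detector
print(conv2d(left, edge_kernel))
print(conv2d(right, edge_kernel))  # identical responses, just shifted over
```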
e-catworld.com