Explore >> Select a destination


You are here: justindomke.wordpress.com

Nearby destinations:

blog.research.google (10.2 parsecs away)

teddykoker.com (7.2 parsecs away)
Gradient-descent-based optimizers have long been the optimization algorithm of choice for deep learning models. Over the years, various modifications to basic mini-batch gradient descent have been proposed, such as adding momentum or Nesterov's Accelerated Gradient (Sutskever et al., 2013), as well as the popular Adam optimizer (Kingma & Ba, 2014). The paper Learning to Learn by Gradient Descent by Gradient Descent (Andrychowicz et al., 2016) demonstrates how the optimizer itself can be replaced...
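As a refresher on the two classical updates this excerpt names, here is a minimal sketch in plain NumPy contrasting vanilla gradient descent with the momentum variant. The quadratic objective and the names sgd_step and momentum_step are illustrative assumptions, not code from the linked post.

import numpy as np

def sgd_step(params, grads, lr=0.01):
    # Vanilla gradient descent: step against the gradient.
    return params - lr * grads

def momentum_step(params, grads, velocity, lr=0.01, beta=0.9):
    # Momentum: accumulate a decaying sum of past gradients
    # and step along that accumulated direction instead.
    velocity = beta * velocity + grads
    return params - lr * velocity, velocity

# Toy run on f(x) = x^2, whose gradient is 2x (hypothetical example).
params = np.array([5.0])
velocity = np.zeros_like(params)
for _ in range(100):
    grads = 2 * params
    params, velocity = momentum_step(params, grads, velocity)
print(params)  # approaches the minimum at x = 0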
francisbach.com (9.3 parsecs away)

finnstats.com (76.0 parsecs away)
Best Books For Deep Learning. We've compiled a list of the top deep learning books for you. Check it out now.