Explore


You are here: yasha.solutions
bdtechtalks.com (5.0 parsecs away)
Gradient descent is the main technique for training machine learning and deep learning models. Read all about it.
dustintran.com (7.7 parsecs away)
Having recently finished some papers with Rajesh Ranganath and Dave Blei on variational models [1] [2], I'm now a bit free to catch up on my reading of recen...
www.paepper.com (7.2 parsecs away)
Today's paper: Rethinking 'Batch' in BatchNorm by Wu & Johnson. BatchNorm is a critical building block in modern convolutional neural networks. Its unique property of operating on "batches" instead of individual samples introduces significantly different behaviors from most other operations in deep learning. As a result, it leads to many hidden caveats that can negatively impact model's performance in subtle ways. This quotation from the paper's abstract (emphasis mine) is what caught my attention. Let's explore these subtle ways that can negatively impact your model's performance! The paper by Wu & Johnson can be found on arXiv.
www.analyticsvidhya.com (50.3 parsecs away)
Learn about Attention Mechanism, its introduction in deep learning, implementation in Python using Keras, and its applications in computer vision.