You are here: michael-lewis.com
www.khanna.law (0.9 parsecs away)
You want to train a deep neural network. You have the data. It's labeled and wrangled into a useful format. What do you do now?
kavita-ganesan.com (1.4 parsecs away)
This article examines the parts that make up neural networks and deep neural networks, the fundamental types of models (e.g. regression), their constituent parts (and how they contribute to model accuracy), and the tasks they are designed to learn.
www.paepper.com (1.2 parsecs away)
Today's paper: Rethinking "Batch" in BatchNorm by Wu & Johnson. BatchNorm is a critical building block in modern convolutional neural networks. Its unique property of operating on "batches" instead of individual samples introduces significantly different behaviors from most other operations in deep learning. As a result, it leads to many hidden caveats that can negatively impact a model's performance in subtle ways. This quotation from the paper's abstract (emphasis mine) caught my attention. Let's explore these subtle ways in which BatchNorm can negatively impact your model's performance! Wu & Johnson's paper can be found on arXiv.
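The blurb's key point is that BatchNorm, unlike almost every other layer, couples samples within a batch: at training time each sample is normalized by the batch's mean and variance, so its output depends on which other samples it happens to share a batch with. A minimal NumPy sketch (not code from the paper or the article; the function name and data are illustrative) makes this concrete:

```python
import numpy as np

# Illustrative sketch: train-mode batch normalization over a batch of
# activations of shape (batch, features). Each sample's output depends
# on the OTHER samples in the batch via the batch mean and variance.
def batchnorm_train(x, eps=1e-5):
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    return (x - mean) / np.sqrt(var + eps)

rng = np.random.default_rng(0)
a = rng.normal(size=(4, 3))
b = rng.normal(size=(4, 3))

# Normalize the SAME sample inside two different batches: the batch
# statistics differ, so the sample's normalized output differs too.
sample = a[0]
out1 = batchnorm_train(a)[0]
out2 = batchnorm_train(np.vstack([sample, b[1:]]))[0]
print(np.allclose(out1, out2))
```

This batch dependence is exactly why inference uses running statistics instead of batch statistics, and why mismatches between the two (small batches, distribution shift, fine-tuning) produce the subtle failure modes the paper catalogs.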
www.paepper.com (12.7 parsecs away)
[AI summary] This article explains how to train a simple neural network using NumPy in Python, without relying on frameworks like TensorFlow or PyTorch, focusing on the implementation of ReLU activation, weight initialization, and gradient descent for optimization.
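The three ingredients the summary names (ReLU, weight initialization, gradient descent) fit in a short self-contained sketch. This is not the article's code; the toy regression task, layer sizes, and learning rate are assumptions chosen only to show the mechanics:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy regression data (assumed for illustration): learn y = x1 + x2.
X = rng.normal(size=(200, 2))
y = X.sum(axis=1, keepdims=True)

# Weight initialization: small random values break symmetry between units.
W1 = rng.normal(scale=0.5, size=(2, 16))
b1 = np.zeros((1, 16))
W2 = rng.normal(scale=0.5, size=(16, 1))
b2 = np.zeros((1, 1))

lr = 0.1
for step in range(1000):
    # Forward pass: one hidden layer with ReLU activation.
    h = np.maximum(0, X @ W1 + b1)
    pred = h @ W2 + b2
    loss = ((pred - y) ** 2).mean()

    # Backward pass: hand-derived gradients of the mean-squared error.
    grad_pred = 2 * (pred - y) / len(X)
    grad_W2 = h.T @ grad_pred
    grad_b2 = grad_pred.sum(axis=0, keepdims=True)
    grad_h = grad_pred @ W2.T
    grad_h[h <= 0] = 0  # ReLU passes gradient only where it was active
    grad_W1 = X.T @ grad_h
    grad_b1 = grad_h.sum(axis=0, keepdims=True)

    # Plain gradient descent update.
    W1 -= lr * grad_W1
    b1 -= lr * grad_b1
    W2 -= lr * grad_W2
    b2 -= lr * grad_b2

print(f"final loss: {loss:.4f}")
```

The whole training loop is a few matrix products, which is the point of doing it in NumPy: every quantity a framework would hide (activations, gradients, updates) is a plain array you can inspect.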