Explore >> Select a destination

You are here: research.google

quomodocumque.wordpress.com
13.2 parsecs away
Word2vec is a way of representing words and phrases as vectors in medium-dimensional space, developed by Tomas Mikolov and his team at Google. You can train it on any corpus you like (see Ben Schmidt's blog for some great examples), but the downloadable version of the embedding was trained on about 100 billion...
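
As a quick illustration of the idea in that snippet, here is a minimal sketch of training word2vec on a toy corpus. The use of the gensim library and every parameter choice here are my own assumptions; the snippet doesn't prescribe a tool.

```python
# Minimal word2vec sketch using gensim (an assumed choice of library).
from gensim.models import Word2Vec

# Toy corpus: each document is a list of tokens. In practice you would
# tokenize real text; the downloadable pretrained embedding mentioned
# above was trained on a corpus of roughly 100 billion words.
corpus = [
    ["king", "rules", "the", "kingdom"],
    ["queen", "rules", "the", "kingdom"],
    ["the", "cat", "sat", "on", "the", "mat"],
]

# vector_size is the embedding dimension (the "medium-dimensional
# space"); 300 matches the dimension of the downloadable embedding.
model = Word2Vec(corpus, vector_size=300, window=5, min_count=1, epochs=50)

# Each word is now a dense vector, and related words land nearby.
print(model.wv["king"].shape)               # (300,)
print(model.wv.similarity("king", "queen"))  # cosine similarity
```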

bdtechtalks.com
3.6 parsecs away
The transformer model has become one of the main highlights of advances in deep learning and deep neural networks.

ai.googleblog.com
2.8 parsecs away
[AI summary] This blog post discusses Google Research's exploration of transfer learning through the T5 model, highlighting its application in natural language processing tasks and the development of the C4 dataset.

www.paepper.com
10.2 parsecs away
Today's paper: Rethinking 'Batch' in BatchNorm by Wu & Johnson. "BatchNorm is a critical building block in modern convolutional neural networks. Its unique property of operating on 'batches' instead of individual samples introduces significantly different behaviors from most other operations in deep learning. As a result, it leads to many hidden caveats that can negatively impact model's performance in subtle ways." This quotation from the paper's abstract (the emphasis is mine) is what caught my attention. Let's explore these subtle ways in which BatchNorm can negatively impact your model's performance! The paper by Wu & Johnson can be found on arXiv.
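
To make the batch dependence concrete, here is a minimal PyTorch sketch (my own illustration, not code from the paper or the blog post): in training mode, BatchNorm normalizes each sample with the statistics of the current batch, so the same input produces different outputs depending on its batch-mates, while eval mode switches to running statistics and behaves differently again.

```python
# Sketch of BatchNorm's batch dependence (illustrative, not from the paper).
import torch
import torch.nn as nn

torch.manual_seed(0)
bn = nn.BatchNorm1d(num_features=4)

x = torch.randn(1, 4)            # the sample we track
mates_a = torch.randn(7, 4)      # two different sets of batch-mates
mates_b = torch.randn(7, 4) * 5

bn.train()
out_a = bn(torch.cat([x, mates_a]))[0]   # x normalized with batch A stats
out_b = bn(torch.cat([x, mates_b]))[0]   # same x, different batch-mates
print(torch.allclose(out_a, out_b))      # False: output depends on the batch

bn.eval()
out_eval = bn(x)[0]                      # running stats: batch-independent
print(torch.allclose(out_a, out_eval))   # False: train/eval discrepancy
```

That train/eval discrepancy is one example of the kind of hidden caveat the abstract warns about: the statistics used at inference time are not the ones any training batch actually saw.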