comsci.blog
In this blog post, we will learn about vision transformers (ViT) and implement an MNIST classifier with them. We will go step by step, understanding every part of the vision transformer clearly, and you will see the original paper's authors' motivations behind several parts of the architecture.
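As a hedged illustration of the first step such a ViT walkthrough typically covers (not code from the linked post), here is a minimal NumPy sketch of splitting a 28x28 MNIST image into non-overlapping 7x7 patches, each flattened into a token vector; the function name `patchify` is hypothetical:

```python
import numpy as np

def patchify(img, patch=7):
    # img: (H, W) grayscale image; split into non-overlapping patch x patch tiles
    H, W = img.shape
    assert H % patch == 0 and W % patch == 0
    p = img.reshape(H // patch, patch, W // patch, patch)
    p = p.transpose(0, 2, 1, 3)            # (rows, cols, patch, patch)
    return p.reshape(-1, patch * patch)    # (num_patches, patch_dim)

img = np.arange(28 * 28, dtype=np.float32).reshape(28, 28)
tokens = patchify(img)                     # 16 patches, each of length 49
```

Each patch row is then linearly projected and fed to the transformer encoder as a token, analogous to a word embedding in NLP.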
blog.paperspace.com
Follow this tutorial to learn what attention in deep learning is, and why attention is so important in image classification tasks. We then follow up with a demo on implementing attention from scratch with VGG.
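To make "attention" concrete, here is a minimal NumPy sketch of scaled dot-product attention, the core operation the term usually refers to; it is an illustrative sketch, not code from the linked tutorial:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))  # subtract max for stability
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    # Q: (n_q, d), K: (n_k, d), V: (n_k, d)
    scores = Q @ K.T / np.sqrt(K.shape[-1])   # similarity of each query to each key
    weights = softmax(scores, axis=-1)        # each row sums to 1
    return weights @ V, weights               # weighted sum of values

rng = np.random.default_rng(0)
Q = rng.standard_normal((4, 8))
K = rng.standard_normal((6, 8))
V = rng.standard_normal((6, 8))
out, w = scaled_dot_product_attention(Q, K, V)   # out: (4, 8), w: (4, 6)
```

Each output row is a convex combination of the value vectors, with weights determined by query-key similarity.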
www.jeremymorgan.com
Want to learn about PyTorch? Of course you do. This tutorial covers PyTorch basics, creating a simple neural network, and applying it to classify handwritten digits.
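A hedged sketch of the kind of model such a PyTorch tutorial builds (this is not the tutorial's own code): a two-layer MLP that maps a flattened 28x28 digit image to 10 class scores, assuming `torch` is installed:

```python
import torch
import torch.nn as nn

# Minimal MLP classifier for MNIST-style digits (hypothetical architecture)
model = nn.Sequential(
    nn.Flatten(),            # (N, 1, 28, 28) -> (N, 784)
    nn.Linear(28 * 28, 128),
    nn.ReLU(),
    nn.Linear(128, 10),      # one logit per digit class 0-9
)

x = torch.randn(32, 1, 28, 28)   # a fake batch of 32 grayscale digits
logits = model(x)                # (32, 10)
```

Training would pair this with `nn.CrossEntropyLoss` and an optimizer such as `torch.optim.Adam`, as is standard in PyTorch classification examples.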
adl1995.github.io
[AI summary] The article explains various activation functions used in neural networks, their properties, and applications, including binary step, tanh, ReLU, and softmax functions.
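The four activation functions named in that summary are simple enough to define directly; here is a plain-Python sketch (illustrative, not taken from the article):

```python
import math

def binary_step(x):
    # fires 1 for non-negative input, 0 otherwise
    return 1.0 if x >= 0 else 0.0

def tanh(x):
    # squashes input to (-1, 1)
    return math.tanh(x)

def relu(x):
    # passes positive input through, zeroes out the rest
    return max(0.0, x)

def softmax(xs):
    # maps a list of scores to a probability distribution;
    # subtract the max before exponentiating for numerical stability
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]
```

Softmax differs from the others in acting on a whole vector rather than a single scalar, which is why it usually appears only in the output layer of a classifier.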