- colah.github.io
- www.singlelunch.com: This is the blog version of a talk of mine on embedding methods. It's the main slides and what I would say in the talk. Intended audience: Anyone interested in embedding methods. I don'...
- www.asimovinstitute.org: With new neural network architectures popping up every now and then, it's hard to keep track of them all. Knowing all the abbreviations being thrown around (DCIGN, BiLSTM, DCGAN, anyone?) can be a bit overwhelming at first. So I decided to compose a cheat sheet containing many of those architectures. Most of these are neural networks, some are completely [...]
- distill.pub: By using feature inversion to visualize millions of activations from an image classification network, we create an explorable activation atlas of features the network has learned and what concepts it typically represents.
- blog.paperspace.com: Follow this tutorial to learn what attention in deep learning is, and why attention is so important in image classification tasks. We then follow up with a demo on implementing attention from scratch with VGG.
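To give a rough feel for the core operation such attention tutorials build on, here is a minimal sketch of scaled dot-product attention for a single query in plain Python. The vectors and shapes are illustrative assumptions, not taken from the linked tutorial:

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of scores.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def attention(query, keys, values):
    # Scaled dot-product attention for one query vector:
    # score each key against the query, softmax the scores,
    # then return the attention-weighted sum of the values.
    d = len(query)
    scores = [dot(query, k) / math.sqrt(d) for k in keys]
    weights = softmax(scores)
    return [sum(w * v[i] for w, v in zip(weights, values))
            for i in range(len(values[0]))]

# Tiny illustrative example: two key/value pairs in 2-d.
q = [1.0, 0.0]
K = [[1.0, 0.0], [0.0, 1.0]]
V = [[10.0, 0.0], [0.0, 10.0]]
out = attention(q, K, V)
```

Because the query aligns with the first key, the output leans toward the first value vector; the softmax weights always sum to one, so the output stays a convex combination of the values.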