github.com
blog.fastforwardlabs.com
The common approach in machine learning is to train and optimize one task at a time. In contrast, multitask learning (MTL) trains related tasks in parallel, using a shared representation. One advantage of MTL is improved generalization: the signal from related tasks acts as a regularizer, keeping the model from becoming overly specialized to a single task while it learns a representation that serves all of them. MTL is an approach rather than a particular algorithm, so it is not tied to any one model family.
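A minimal PyTorch sketch of one common way to realize this, hard parameter sharing: two task-specific heads sit on a shared trunk, and both losses backpropagate through it. The layer sizes, the particular pair of tasks, and the 0.5 loss weight are illustrative assumptions, not details from the linked post.

```python
# Minimal sketch of hard parameter sharing for multitask learning.
# Sizes, tasks, and loss weighting are assumptions for illustration.
import torch
import torch.nn as nn

class MultiTaskNet(nn.Module):
    def __init__(self, in_dim=64, hidden=128, n_classes=10):
        super().__init__()
        # Shared representation: both tasks backpropagate through it,
        # which is what regularizes each task against the other.
        self.shared = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # Task-specific heads.
        self.classify = nn.Linear(hidden, n_classes)  # task A: classification
        self.regress = nn.Linear(hidden, 1)           # task B: regression

    def forward(self, x):
        h = self.shared(x)
        return self.classify(h), self.regress(h)

model = MultiTaskNet()
x = torch.randn(32, 64)
y_cls = torch.randint(0, 10, (32,))
y_reg = torch.randn(32, 1)

logits, preds = model(x)
# Train both tasks in parallel by summing their losses; the 0.5 weight
# is an arbitrary choice for this sketch.
loss = nn.functional.cross_entropy(logits, y_cls) \
     + 0.5 * nn.functional.mse_loss(preds, y_reg)
loss.backward()
```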
www.jeremymorgan.com
Want to learn about PyTorch? Of course you do. This tutorial covers PyTorch basics, creating a simple neural network, and applying it to classify handwritten digits.
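For a sense of what such a tutorial builds, here is a minimal sketch of a fully connected digit classifier trained on MNIST via torchvision. The layer widths, learning rate, and epoch count are assumptions for illustration, not values from the tutorial itself.

```python
# Minimal MNIST digit classifier sketch in PyTorch.
# Hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

train_data = datasets.MNIST("data", train=True, download=True,
                            transform=transforms.ToTensor())
loader = DataLoader(train_data, batch_size=64, shuffle=True)

model = nn.Sequential(
    nn.Flatten(),            # 28x28 image -> 784-dim vector
    nn.Linear(784, 128), nn.ReLU(),
    nn.Linear(128, 10),      # one logit per digit class
)
opt = torch.optim.SGD(model.parameters(), lr=0.1)

for epoch in range(3):
    for images, labels in loader:
        opt.zero_grad()
        loss = nn.functional.cross_entropy(model(images), labels)
        loss.backward()
        opt.step()
```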
www.ntentional.com
Highlights from my favorite deep learning efficiency-related papers at ICLR 2020.
vxlabs.com
I have recently become fascinated with (variational) autoencoders and with PyTorch. Kevin Frans has a beautiful blog post online explaining variational autoencoders, with examples in TensorFlow and, importantly, with cat pictures. Jaan Altosaar's blog post takes an even deeper look at VAEs from both the deep learning perspective and the perspective of graphical models. Both of these posts, as well as Diederik Kingma's original 2014 paper Auto-Encoding Variational Bayes, are more than worth your time.
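As a companion to those posts, here is a minimal PyTorch sketch of a VAE using the reparameterization trick from the Kingma paper. The layer sizes and the Bernoulli (sigmoid) decoder are assumptions, sized for flattened 28x28 images rather than taken from any of the linked write-ups.

```python
# Minimal variational autoencoder sketch in PyTorch.
# Sizes and the Bernoulli decoder are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    def __init__(self, x_dim=784, h_dim=400, z_dim=20):
        super().__init__()
        self.enc = nn.Linear(x_dim, h_dim)
        self.mu = nn.Linear(h_dim, z_dim)      # mean of q(z|x)
        self.logvar = nn.Linear(h_dim, z_dim)  # log-variance of q(z|x)
        self.dec1 = nn.Linear(z_dim, h_dim)
        self.dec2 = nn.Linear(h_dim, x_dim)

    def forward(self, x):
        h = F.relu(self.enc(x))
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick: z = mu + sigma * eps, eps ~ N(0, I),
        # so gradients flow through mu and logvar.
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        x_hat = torch.sigmoid(self.dec2(F.relu(self.dec1(z))))
        return x_hat, mu, logvar

def vae_loss(x, x_hat, mu, logvar):
    # Reconstruction term plus the KL divergence of q(z|x) from N(0, I).
    recon = F.binary_cross_entropy(x_hat, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl

model = VAE()
x = torch.rand(16, 784)  # stand-in for a batch of flattened images
x_hat, mu, logvar = model(x)
loss = vae_loss(x, x_hat, mu, logvar)
loss.backward()
```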