jxmo.io
vxlabs.com
I have recently become fascinated with (Variational) Autoencoders and with PyTorch. Kevin Frans has a beautiful blog post online explaining variational autoencoders, with examples in TensorFlow and, importantly, with cat pictures. Jaan Altosaar's blog post takes an even deeper look at VAEs from both the deep learning perspective and the perspective of graphical models. Both of these posts, as well as Diederik Kingma's original 2014 paper Auto-Encoding Variational Bayes, are more than worth your time.
www.nicktasios.nl
In the Latent Diffusion Series of blog posts, I'm going through all the components needed to train a latent diffusion model to generate random digits from the MNIST dataset. In the second post, we will build…
tiao.io
An in-depth practical guide to variational autoencoders from a probabilistic perspective.
sander.ai
This is an addendum to my post about typicality, where I try to quantify flawed intuitions about high-dimensional distributions.