kvfrans.com

vxlabs.com
I have recently become fascinated with (Variational) Autoencoders and with PyTorch. Kevin Frans has a beautiful blog post online explaining variational autoencoders, with examples in TensorFlow and, importantly, with cat pictures. Jaan Altosaar's blog post takes an even deeper look at VAEs from both the deep learning perspective and the perspective of graphical models. Both of these posts, as well as Diederik Kingma's original 2014 paper Auto-Encoding Variational Bayes, are more than worth your time.
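To make the linked material concrete, here is a minimal PyTorch sketch of the kind of VAE those posts build, assuming MNIST-style 784-dimensional inputs in [0, 1] and a 20-dimensional Gaussian latent; the layer sizes and the VAE/vae_loss names are illustrative choices, not taken from any of the linked posts.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class VAE(nn.Module):
        # Minimal VAE: 784-dim inputs, 20-dim Gaussian latent (illustrative sizes).
        def __init__(self, x_dim=784, h_dim=400, z_dim=20):
            super().__init__()
            self.fc1 = nn.Linear(x_dim, h_dim)        # encoder hidden layer
            self.fc_mu = nn.Linear(h_dim, z_dim)      # latent mean
            self.fc_logvar = nn.Linear(h_dim, z_dim)  # latent log-variance
            self.fc2 = nn.Linear(z_dim, h_dim)        # decoder hidden layer
            self.fc3 = nn.Linear(h_dim, x_dim)        # reconstruction

        def encode(self, x):
            h = F.relu(self.fc1(x))
            return self.fc_mu(h), self.fc_logvar(h)

        def reparameterize(self, mu, logvar):
            # z = mu + sigma * eps: the reparameterization trick from
            # Kingma & Welling, which keeps sampling differentiable.
            std = torch.exp(0.5 * logvar)
            return mu + std * torch.randn_like(std)

        def decode(self, z):
            return torch.sigmoid(self.fc3(F.relu(self.fc2(z))))

        def forward(self, x):
            mu, logvar = self.encode(x)
            z = self.reparameterize(mu, logvar)
            return self.decode(z), mu, logvar

    def vae_loss(recon_x, x, mu, logvar):
        # Reconstruction term plus KL divergence to the unit-Gaussian prior;
        # x is expected to lie in [0, 1] for the binary cross-entropy.
        bce = F.binary_cross_entropy(recon_x, x, reduction='sum')
        kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
        return bce + kld
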
blog.otoro.net
[AI summary] This text discusses the development of a system for generating large images from latent vectors, combining Generative Adversarial Networks (GANs) and Variational Autoencoders (VAEs). It explores the use of Compositional Pattern-Producing Networks (CPPNs) to create images with specific characteristics, such as style and orientation, by manipulating latent vectors. The text also covers the ability to perform arithmetic on latent vectors to generate new images and the potential for creating animations by transitioning between different latent states. The author suggests future research directions, including training on more complex datasets and exploring alternative training objectives beyond maximum likelihood.
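The latent-vector arithmetic and animation ideas in that summary come down to simple tensor operations on latent codes. A sketch in PyTorch: slerp is the standard spherical interpolation, and decoder, z_a, and z_b in the usage comments are hypothetical stand-ins for a trained model's components, not the author's actual code.

    import torch

    def slerp(z0, z1, t):
        # Spherical interpolation between two 1-D latent vectors; often
        # preferred over linear interpolation for Gaussian latent spaces.
        # (Assumes z0 and z1 are not parallel, so sin(omega) != 0.)
        omega = torch.acos(torch.clamp(
            torch.dot(z0 / z0.norm(), z1 / z1.norm()), -1.0, 1.0))
        so = torch.sin(omega)
        return ((torch.sin((1.0 - t) * omega) / so) * z0
                + (torch.sin(t * omega) / so) * z1)

    # Hypothetical usage, given some trained `decoder` network:
    #   smile = z_smiling.mean(0) - z_neutral.mean(0)   # attribute vector
    #   image = decoder(z_face + smile)                 # latent arithmetic
    #   frames = [decoder(slerp(z_a, z_b, t))           # animation by
    #             for t in torch.linspace(0, 1, 30)]    # interpolation
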
jxmo.io
A primer on variational autoencoders (VAEs) culminating in a PyTorch implementation of a VAE with discrete latents.
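The snippet does not say which discretization technique the primer uses, but one standard way to backpropagate through discrete latents in PyTorch is the Gumbel-Softmax relaxation; a generic sketch, with batch and category sizes chosen purely for illustration:

    import torch
    import torch.nn.functional as F

    # Logits for a categorical latent: batch of 32, 10 categories (illustrative).
    logits = torch.randn(32, 10, requires_grad=True)

    # Differentiable "soft" sample for training; hard=True returns one-hot
    # samples on the forward pass while gradients flow through the soft
    # sample (the straight-through estimator).
    z_soft = F.gumbel_softmax(logits, tau=1.0, hard=False)
    z_hard = F.gumbel_softmax(logits, tau=1.0, hard=True)
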
neptune.ai
Generative modelling is a form of unsupervised learning. In supervised learning, a deep learning model learns to map inputs to outputs; in each iteration the loss is calculated and the model is optimized using backpropagation. In unsupervised learning, we don't feed target variables to the model like...
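The iteration that snippet describes is the standard PyTorch training loop; a minimal sketch, with a placeholder linear model and random tensors standing in for a real dataset:

    import torch
    import torch.nn as nn

    model = nn.Linear(10, 1)                            # placeholder model
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()
    x, y = torch.randn(64, 10), torch.randn(64, 1)      # stand-in data

    for step in range(100):
        optimizer.zero_grad()            # clear gradients from the last step
        loss = loss_fn(model(x), y)      # map input to output, score against target
        loss.backward()                  # backpropagate the loss
        optimizer.step()                 # update the parameters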