kyunghyuncho.me

jxmo.io
A primer on variational autoencoders (VAEs) culminating in a PyTorch implementation of a VAE with discrete latents.
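The primer's actual implementation is not reproduced here, but one standard way to backpropagate through a discrete (categorical) latent, as such VAEs require, is the Gumbel-softmax relaxation. A minimal sketch in PyTorch; the batch and latent sizes are illustrative assumptions:

```python
import torch
import torch.nn.functional as F

def sample_discrete_latent(logits, tau=1.0, hard=True):
    """Gumbel-softmax relaxation of a categorical latent.

    logits: (batch, num_latents, num_categories) unnormalized scores
    from an encoder. With hard=True the forward pass is one-hot while
    gradients flow through the soft sample (straight-through estimator).
    """
    return F.gumbel_softmax(logits, tau=tau, hard=hard, dim=-1)

# Illustrative usage: 32 latent variables, each with 10 categories.
logits = torch.randn(8, 32, 10, requires_grad=True)
z = sample_discrete_latent(logits)   # (8, 32, 10) one-hot codes
z_flat = z.view(z.size(0), -1)       # flatten before feeding a decoder
```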
lilianweng.github.io
[Updated on 2021-09-19: Highly recommend this blog post on score-based generative modeling by Yang Song (author of several key papers in the references).] [Updated on 2022-08-27: Added classifier-free guidance, GLIDE, unCLIP and Imagen.] [Updated on 2022-08-31: Added latent diffusion model.] [Updated on 2024-04-13: Added progressive distillation, consistency models, and the Model Architecture section.]
blog.fastforwardlabs.com
The Variational Autoencoder (VAE) neatly synthesizes unsupervised deep learning and variational Bayesian methods into one sleek package. In Part I of this series, we introduced the theory and intuition behind the VAE, an exciting development in machine learning for combined generative modeling and inference: "machines that imagine and reason." To recap: VAEs put a probabilistic spin on the basic autoencoder paradigm, treating their inputs, hidden representations, and reconstructed outputs as probabilistic ...
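That "probabilistic spin" is easiest to see in code. Below is a minimal sketch (not the post's implementation; the 784/400/20 layer sizes are assumptions) of a Gaussian VAE in PyTorch: the encoder outputs a mean and log-variance instead of a point, a latent is drawn via the reparameterization trick, and the loss combines reconstruction with a KL term against the standard-normal prior:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GaussianVAE(nn.Module):
    """Minimal Gaussian VAE with illustrative layer sizes."""
    def __init__(self, x_dim=784, h_dim=400, z_dim=20):
        super().__init__()
        self.enc = nn.Linear(x_dim, h_dim)
        self.mu = nn.Linear(h_dim, z_dim)      # mean of q(z|x)
        self.logvar = nn.Linear(h_dim, z_dim)  # log-variance of q(z|x)
        self.dec = nn.Sequential(nn.Linear(z_dim, h_dim), nn.ReLU(),
                                 nn.Linear(h_dim, x_dim))

    def forward(self, x):
        h = F.relu(self.enc(x))
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick: z = mu + sigma * eps keeps the
        # sampling step differentiable with respect to the encoder.
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        return self.dec(z), mu, logvar

def neg_elbo(x, x_logits, mu, logvar):
    # Reconstruction term plus KL(q(z|x) || N(0, I)).
    rec = F.binary_cross_entropy_with_logits(x_logits, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + kl
```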
blog.otoro.net
[AI summary] This text discusses the development of a system for generating large images from latent vectors, combining Generative Adversarial Networks (GANs) and Variational Autoencoders (VAEs). It explores the use of Compositional Pattern Producing Networks (CPPNs) to create images with specific characteristics, such as style and orientation, by manipulating latent vectors. The text also covers the ability to perform arithmetic on latent vectors to generate new images, and the potential for creating animations by transitioning between different latent states. The author suggests future research directions, including training on more complex datasets and exploring alternative training objectives beyond maximum likelihood.
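Latent arithmetic and animation by interpolation both reduce to simple vector operations on z before decoding. A hedged sketch, with an assumed 64-dimensional latent standing in for any trained generator's input (not the post's model):

```python
import torch

def slerp(z0, z1, t):
    """Spherical interpolation between two latent vectors.

    Often preferred over linear interpolation for Gaussian latents,
    since intermediate points keep a typical norm.
    """
    z0n, z1n = z0 / z0.norm(), z1 / z1.norm()
    omega = torch.acos((z0n * z1n).sum().clamp(-1.0, 1.0))
    return (torch.sin((1 - t) * omega) * z0 +
            torch.sin(t * omega) * z1) / torch.sin(omega)

# Walk between two latent states to produce animation frames.
z_a, z_b = torch.randn(64), torch.randn(64)
frames = [slerp(z_a, z_b, t) for t in torch.linspace(0, 1, 30)]

# Latent arithmetic: add an attribute direction before decoding, e.g.
# z_new = z_a + (z_with_attribute - z_without_attribute)
```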