www.depthfirstlearning.com

blog.fastforwardlabs.com
The Variational Autoencoder (VAE) neatly synthesizes unsupervised deep learning and variational Bayesian methods into one sleek package. In Part I of this series, we introduced the theory and intuition behind the VAE, an exciting development in machine learning for combined generative modeling and inference: "machines that imagine and reason." To recap: VAEs put a probabilistic spin on the basic autoencoder paradigm, treating their inputs, hidden representations, and reconstructed outputs as probabilistic ...
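
As a quick illustration of that probabilistic spin, here is a minimal VAE sketch in PyTorch. The layer sizes, the MNIST-style 784-dimensional binarized input, and the loss layout are illustrative assumptions, not details taken from the linked post.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class VAE(nn.Module):
        def __init__(self, x_dim=784, h_dim=400, z_dim=20):  # illustrative sizes
            super().__init__()
            self.enc = nn.Linear(x_dim, h_dim)
            self.mu = nn.Linear(h_dim, z_dim)       # mean of q(z|x)
            self.logvar = nn.Linear(h_dim, z_dim)   # log-variance of q(z|x)
            self.dec = nn.Sequential(nn.Linear(z_dim, h_dim), nn.ReLU(),
                                     nn.Linear(h_dim, x_dim))

        def forward(self, x):
            h = F.relu(self.enc(x))
            mu, logvar = self.mu(h), self.logvar(h)
            # Reparameterization trick: z = mu + sigma * eps with eps ~ N(0, I),
            # so the sampling step stays differentiable.
            z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
            return self.dec(z), mu, logvar

    def neg_elbo(x, x_logits, mu, logvar):
        # Negative ELBO: reconstruction term plus KL(q(z|x) || N(0, I)).
        recon = F.binary_cross_entropy_with_logits(x_logits, x, reduction="sum")
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
        return recon + kl

The encoder outputs a distribution over latents rather than a point, which is exactly the probabilistic treatment of the hidden representation the excerpt describes.
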
jxmo.io
A primer on variational autoencoders (VAEs) culminating in a PyTorch implementation of a VAE with discrete latents.
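
Discrete latents cannot be reparameterized the way Gaussian ones can; one common workaround is the Gumbel-softmax (concrete) relaxation, sketched below. The linked post may use a different technique, so treat this as an assumption-laden illustration rather than its method.

    import torch
    import torch.nn.functional as F

    def sample_discrete_latent(logits, tau=1.0):
        # F.gumbel_softmax with hard=True returns one-hot samples in the
        # forward pass while letting gradients flow through the soft
        # relaxation (straight-through estimator).
        return F.gumbel_softmax(logits, tau=tau, hard=True)

    # Usage: an encoder would emit logits over K discrete codes.
    logits = torch.randn(32, 10)        # batch of 32, K = 10 categories
    z = sample_discrete_latent(logits)  # one-hot latents, shape (32, 10)
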
yang-song.net
This blog post focuses on a promising new direction for generative modeling. We can learn score functions (gradients of log probability density functions) on a large number of noise-perturbed data distributions, then generate samples with Langevin-type sampling. The resulting generative models, often called score-based generative models, have several important advantages over existing model families: GAN-level sample quality without adversarial training, flexible model architectures, exact log-likelihood ...
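
To make the Langevin-type sampling step concrete, here is a minimal sketch; score_fn stands in for a trained network approximating the score grad_x log p(x), and the step size and iteration count are illustrative assumptions.

    import math
    import torch

    @torch.no_grad()
    def langevin_sample(score_fn, shape, n_steps=1000, step_size=1e-4):
        # Unadjusted Langevin dynamics:
        # x_{t+1} = x_t + eps * score(x_t) + sqrt(2 * eps) * z,  z ~ N(0, I)
        x = torch.randn(shape)  # initialize from pure noise
        for _ in range(n_steps):
            x = (x + step_size * score_fn(x)
                 + math.sqrt(2 * step_size) * torch.randn_like(x))
        return x

The post itself anneals this procedure over many noise-perturbed distributions; this single-scale loop only shows the basic update.
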
www.v7labs.com
Learn about the different types of neural network architectures.