colinraffel.com

yang-song.net
This blog post focuses on a promising new direction for generative modeling. We can learn score functions (gradients of log probability density functions) on a large number of noise-perturbed data distributions, then generate samples with Langevin-type sampling. The resulting generative models, often called score-based generative models, have several important advantages over existing model families: GAN-level sample quality without adversarial training, flexible model architectures, exact log-likelihood ...
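
To make the sampling step concrete, here is a minimal sketch of unadjusted Langevin dynamics driven by a score function. The `score_fn` stand-in, step size, and step count are illustrative assumptions, not the annealed multi-scale procedure the post develops:

```python
import torch

def langevin_sample(score_fn, x_init, n_steps=1000, step_size=1e-2):
    """Plain (unadjusted) Langevin dynamics using a score function.

    score_fn(x) should approximate grad_x log p(x); here it stands in for
    a trained score network, which the post learns on noise-perturbed data.
    """
    x = x_init.clone()
    for _ in range(n_steps):
        noise = torch.randn_like(x)
        # Langevin update: drift along the score plus Gaussian exploration noise.
        x = x + 0.5 * step_size * score_fn(x) + (step_size ** 0.5) * noise
    return x

# Toy check: the score of a standard Gaussian is -x, so samples should
# converge toward N(0, I) from any initialization.
samples = langevin_sample(lambda x: -x, torch.randn(1024, 2) * 5.0)
```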

www.depthfirstlearning.com
[AI summary] The provided text is a comprehensive set of notes and exercises covering various topics in Generative Adversarial Networks (GANs) and their improvements, including standard GANs, Wasserstein GANs (WGANs), and WGAN with Gradient Penalty (WGAN-GP). The content includes theoretical explanations, practical implementation tasks, and discussion of challenges and solutions in training GANs. Key topics include the mathematical foundations of GANs, the limitations of standard GANs (such as mode collapse and sensitivity to hyperparameters), the introduction of WGANs to address these issues through the Wasserstein distance, and further improvements with WGAN-GP to mitigate problems like weight clipping instability. The text also includes exercises for calc...
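
As a concrete illustration of the WGAN-GP idea mentioned in the summary, here is a hedged sketch of the gradient penalty in PyTorch. The `critic` argument and the `gp_weight` default of 10 are assumptions for illustration, not code from the notes:

```python
import torch

def gradient_penalty(critic, real, fake, gp_weight=10.0):
    """WGAN-GP penalty: push the critic's gradient norm toward 1 on random
    interpolates between real and fake samples, replacing weight clipping.

    `critic` is any network mapping a batch of samples to scalar scores.
    """
    batch = real.size(0)
    # Random interpolation coefficients, broadcast over sample dimensions.
    alpha = torch.rand(batch, *([1] * (real.dim() - 1)), device=real.device)
    interp = (alpha * real + (1 - alpha) * fake).requires_grad_(True)
    scores = critic(interp)
    grads = torch.autograd.grad(outputs=scores, inputs=interp,
                                grad_outputs=torch.ones_like(scores),
                                create_graph=True)[0]
    grad_norm = grads.flatten(1).norm(2, dim=1)
    # Two-sided penalty: (||grad|| - 1)^2, averaged over the batch.
    return gp_weight * ((grad_norm - 1) ** 2).mean()
```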

blog.evjang.com
This is a tutorial on common practices in training generative models that optimize likelihood directly, such as autoregressive models and ...
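
For context on what optimizing likelihood directly looks like, here is a minimal sketch for a discrete autoregressive model: under the chain-rule factorization p(x) = ∏ p(x_i | x_<i), minimizing next-token cross-entropy is exactly maximizing log-likelihood. The `model` here is a hypothetical network, not one from the tutorial:

```python
import torch
import torch.nn.functional as F

def nll_loss(model, tokens):
    """Negative log-likelihood for an autoregressive model over tokens.

    `model` is assumed to map a (batch, seq) prefix of token ids to
    (batch, seq, vocab) logits over the next token at each position.
    """
    logits = model(tokens[:, :-1])   # predict token t from tokens < t
    targets = tokens[:, 1:]          # targets are the inputs shifted by one
    return F.cross_entropy(logits.reshape(-1, logits.size(-1)),
                           targets.reshape(-1))
```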

jxmo.io
A primer on variational autoencoders (VAEs) culminating in a PyTorch implementation of a VAE with discrete latents.
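
As background for the primer, here is a minimal sketch of the standard continuous-latent VAE objective (the negative ELBO) in PyTorch. The discrete-latent variant the post builds up to needs a different treatment of the latent code (e.g. Gumbel-softmax), which this sketch does not cover:

```python
import torch
import torch.nn.functional as F

def vae_loss(x, x_recon, mu, logvar):
    """Negative ELBO: reconstruction term plus the closed-form KL divergence
    KL(N(mu, sigma^2) || N(0, I)) for a Gaussian latent."""
    recon = F.binary_cross_entropy(x_recon, x, reduction='sum')
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl

def reparameterize(mu, logvar):
    # z = mu + sigma * eps keeps sampling differentiable w.r.t. mu and logvar.
    return mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
```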