Related reading (you are here: blog.evjang.com)

yang-song.net
This blog post focuses on a promising new direction for generative modeling. We can learn score functions (gradients of log probability density functions) on a large number of noise-perturbed data distributions, then generate samples with Langevin-type sampling. The resulting generative models, often called score-based generative models, have several important advantages over existing model families: GAN-level sample quality without adversarial training, flexible model architectures, exact log-likelihood ...
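The Langevin-type sampling mentioned in that blurb can be sketched in a few lines. This is a minimal illustration, not the method from the linked post: it uses the analytic score of a standard Gaussian (grad log p(x) = -x) in place of a learned score network, and plain unadjusted Langevin dynamics.

```python
import numpy as np

def score(x):
    # Analytic score of a standard Gaussian: grad log p(x) = -x.
    # In a score-based generative model this would be a trained network.
    return -x

def langevin_sample(score_fn, x0, step_size=0.01, n_steps=2000, seed=0):
    # Unadjusted Langevin dynamics:
    #   x_{t+1} = x_t + (step/2) * score(x_t) + sqrt(step) * noise
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    for _ in range(n_steps):
        noise = rng.standard_normal(x.shape)
        x = x + 0.5 * step_size * score_fn(x) + np.sqrt(step_size) * noise
    return x

# Start 5000 chains far from the mode; they relax toward N(0, 1).
samples = langevin_sample(score, x0=np.full(5000, 10.0))
print(samples.mean(), samples.std())  # should approach 0 and 1
```

With a learned score function, the same loop is typically run per noise level, starting from the most perturbed distribution and annealing toward the data distribution.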
distill.pub
What we'd like to find out about GANs that we don't know yet.
tiao.io
An in-depth practical guide to variational autoencoders from a probabilistic perspective.
neptune.ai
Reinforcement learning from human feedback has turned out to be the key to unlocking the full potential of today's LLMs.