Explore >> Select a destination

You are here: sriku.org

www.scijournal.org (5.8 parsecs away)
This guide will show you how to write a dot product in LaTeX.

thenumb.at (4.7 parsecs away)

zserge.com (4.6 parsecs away)
Neural network and deep learning introduction for those who skipped the math class but want to follow the trend.

yang-song.net (23.3 parsecs away)
This blog post focuses on a promising new direction for generative modeling. We can learn score functions (gradients of log probability density functions) on a large number of noise-perturbed data distributions, then generate samples with Langevin-type sampling. The resulting generative models, often called score-based generative models, have several important advantages over existing model families: GAN-level sample quality without adversarial training, flexible model architectures, exact log-likelihood ...
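
The Langevin-type sampling mentioned in that excerpt is easy to sketch. Below is a minimal, self-contained illustration that assumes a toy analytic score function (the score of a 2-D standard Gaussian, which is just -x) standing in for a trained score network; the function names, step size, and step count are illustrative assumptions, not anything taken from the linked post.

```python
import numpy as np

def score_fn(x):
    # Toy stand-in for a learned score network: the score (gradient of the
    # log-density) of a standard Gaussian is simply -x.
    return -x

def langevin_sample(score_fn, n_samples=1000, n_steps=500, step_size=1e-2, dim=2, seed=0):
    """Unadjusted Langevin dynamics:
    x <- x + (step_size / 2) * score(x) + sqrt(step_size) * noise
    """
    rng = np.random.default_rng(seed)
    x = rng.normal(size=(n_samples, dim))  # arbitrary starting points
    for _ in range(n_steps):
        noise = rng.normal(size=x.shape)
        x = x + 0.5 * step_size * score_fn(x) + np.sqrt(step_size) * noise
    return x

samples = langevin_sample(score_fn)
print(samples.mean(axis=0), samples.std(axis=0))  # should be close to 0 and 1
```

In an actual score-based generative model, the analytic score above would be replaced by a neural network trained with score matching on data perturbed at multiple noise levels, with the noise level annealed downwards during sampling; the sketch shows only the basic Langevin update.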