distill.pub
sander.ai
Thoughts on the tension between iterative refinement as the thing that makes diffusion models work, and our continual attempts to make it _less_ iterative.
yang-song.net
This blog post focuses on a promising new direction for generative modeling. We can learn score functions (gradients of log probability density functions) on a large number of noise-perturbed data distributions, then generate samples with Langevin-type sampling. The resulting generative models, often called score-based generative models, have several important advantages over existing model families: GAN-level sample quality without adversarial training, flexible model architectures, exact log-likelihood ...
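The Langevin-type sampling mentioned in the snippet above can be sketched in a few lines. This is a toy illustration, not code from the linked post: `toy_score` stands in for a learned score network, using the closed-form score of a standard Gaussian so the result can be checked.

```python
import numpy as np

def toy_score(x, sigma=1.0):
    # Stand-in for a learned score model: for N(0, sigma^2),
    # grad_x log p(x) = -x / sigma^2.
    return -x / sigma**2

def langevin_sample(score_fn, x0, step=0.01, n_steps=500, seed=0):
    # Unadjusted Langevin dynamics:
    # x_{t+1} = x_t + step * score(x_t) + sqrt(2 * step) * noise
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    for _ in range(n_steps):
        noise = rng.standard_normal(x.shape)
        x = x + step * score_fn(x) + np.sqrt(2 * step) * noise
    return x

# Start far from the mode; the chain drifts toward N(0, 1).
samples = langevin_sample(toy_score, x0=np.full(1000, 5.0))
```

With a learned score model, the score-based approach additionally anneals over a sequence of noise levels, running chains like this from coarse to fine perturbations.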
resources.paperdigest.org
The Conference on Neural Information Processing Systems (NIPS) is one of the top machine learning conferences in the world. The Paper Digest Team analyzes all papers published at NIPS in past years and presents the 15 most influential papers for each year. This ranking list is automatically constructed ...
www.chrisritchie.org
[AI summary] A blog post discussing the simulation of artificial life with neural networks, focusing on agent behavior, population dynamics, and future development goals.