Explore: Select a destination


You are here: xcorr.net
blog.shakirm.com (52.6 parsecs away)

Memory, the ways in which we remember and recall past experiences and data to reason about future events, is a term used frequently in current literature. All models in machine learning consist of...

yang-song.net (100.0 parsecs away)

This blog post focuses on a promising new direction for generative modeling. We can learn score functions (gradients of log probability density functions) on a large number of noise-perturbed data distributions, then generate samples with Langevin-type sampling. The resulting generative models, often called score-based generative models, have several important advantages over existing model families: GAN-level sample quality without adversarial training, flexible model architectures, exact log-likelihood ...
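
To make the excerpt concrete, here is a minimal sketch of Langevin-type sampling driven by a score function. The analytic isotropic-Gaussian score below is a toy stand-in of my own, not the post's method: in the post's setting the score would come from a neural network trained across many noise levels.

```python
import numpy as np

def gaussian_score(x, mu=0.0, sigma=1.0):
    # Score of an isotropic Gaussian: grad_x log N(x; mu, sigma^2 I).
    # Toy stand-in for a learned score network.
    return -(x - mu) / sigma**2

def langevin_sample(score_fn, x0, step_size=0.01, n_steps=1000, seed=0):
    # Unadjusted Langevin dynamics:
    #   x <- x + (eps / 2) * score(x) + sqrt(eps) * z,  z ~ N(0, I)
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for _ in range(n_steps):
        z = rng.standard_normal(x.shape)
        x = x + 0.5 * step_size * score_fn(x) + np.sqrt(step_size) * z
    return x

# Start far from the mode; the dynamics drift samples toward high density.
samples = np.stack([langevin_sample(gaussian_score, [5.0, 5.0], seed=s)
                    for s in range(5)])
print(samples.mean(axis=0))  # roughly near the Gaussian mean (0, 0)
```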

distill.pub (90.9 parsecs away)

Part one of a three-part deep dive into the curve neuron family.

teddykoker.com (219.5 parsecs away)

A few posts back I wrote about a common parameter optimization method known as Gradient Ascent. In this post we will see how a similar method can be used to create a model that can classify data. This time, instead of using gradient ascent to maximize a reward function, we will use gradient descent to minimize a cost function. Let's start by importing all the libraries we need:
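
The excerpt stops right where the post's code begins, so here is a hedged sketch of the idea it sets up: a logistic-regression classifier trained by gradient descent on a cross-entropy cost. The toy data and hyperparameters are illustrative assumptions, not the post's actual code.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy data: two 2D Gaussian blobs, labeled 0 and 1.
X = np.vstack([rng.normal(-1.0, 1.0, (50, 2)), rng.normal(1.0, 1.0, (50, 2))])
y = np.concatenate([np.zeros(50), np.ones(50)])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.zeros(2)  # weights
b = 0.0          # bias
lr = 0.1         # learning rate

for _ in range(500):
    p = sigmoid(X @ w + b)             # predicted probability of class 1
    grad_w = X.T @ (p - y) / len(y)    # gradient of mean cross-entropy cost
    grad_b = np.mean(p - y)
    w -= lr * grad_w                   # descend: step against the gradient
    b -= lr * grad_b

accuracy = np.mean((sigmoid(X @ w + b) > 0.5) == y)
print(f"training accuracy: {accuracy:.2f}")
```

The only change from the gradient-ascent setup the excerpt mentions is the sign of the update: stepping against the gradient of a cost instead of along the gradient of a reward.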