You are here: cyclostationary.blog

blog.shakirm.com
82.5 parsecs away

Memory, the ways in which we remember and recall past experiences and data to reason about future events, is a term used frequently in current literature. All models in machine learning consist of...

yang-song.net
28.1 parsecs away

This blog post focuses on a promising new direction for generative modeling. We can learn score functions (gradients of log probability density functions) on a large number of noise-perturbed data distributions, then generate samples with Langevin-type sampling. The resulting generative models, often called score-based generative models, have several important advantages over existing model families: GAN-level sample quality without adversarial training, flexible model architectures, exact log-likelihood ...
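The snippet above describes sampling with Langevin dynamics guided by a score function. A minimal sketch of unadjusted Langevin sampling follows, with one loud assumption: the `score` function here is the analytic score of a standard Gaussian, standing in for the trained neural network a real score-based model would use.

```python
import numpy as np

def score(x):
    # Assumption: analytic score of a standard Gaussian,
    # grad log N(x; 0, I) = -x. In a score-based generative model
    # this would be a learned neural network, not a formula.
    return -x

def langevin_sample(x0, step=0.01, n_steps=1000, rng=None):
    # Unadjusted Langevin dynamics:
    # x_{t+1} = x_t + step * score(x_t) + sqrt(2 * step) * noise
    if rng is None:
        rng = np.random.default_rng(0)
    x = np.array(x0, dtype=float)
    for _ in range(n_steps):
        noise = rng.standard_normal(x.shape)
        x = x + step * score(x) + np.sqrt(2 * step) * noise
    return x

# Chains started far from the mode drift toward the target N(0, I).
samples = np.stack([langevin_sample(np.full(2, 5.0),
                                    rng=np.random.default_rng(s))
                    for s in range(100)])
```

The post's actual method anneals over many noise-perturbed distributions; this sketch runs a single chain against one fixed target to show the update rule in isolation.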

bartwronski.com
28.1 parsecs away

Recently, numerous academic papers in the machine learning / computer vision / image processing domains (re)introduce and discuss a "frequency loss function" or "spectral loss" - and while for many it makes sense and nicely improves achieved results, some of them define or use it wrongly. The basic idea is - instead of comparing pixels...
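The basic idea the snippet alludes to - comparing images in the frequency domain rather than pixel space - can be sketched as follows. The exact form and naming here are assumptions for illustration, not the definition from the linked post: one common variant takes an L1 distance between FFT magnitude spectra.

```python
import numpy as np

def spectral_loss(pred, target):
    # Compare two images in the frequency domain: take the 2D FFT of
    # each, then the mean L1 distance between magnitude spectra.
    # Because magnitudes (not complex values) are compared, the loss
    # ignores phase and is invariant to circular translations.
    f_pred = np.fft.fft2(pred)
    f_target = np.fft.fft2(target)
    return np.mean(np.abs(np.abs(f_pred) - np.abs(f_target)))

rng = np.random.default_rng(0)
img = rng.standard_normal((32, 32))
shifted = np.roll(img, 1, axis=0)            # circular 1-pixel shift
noisy = img + 0.5 * rng.standard_normal((32, 32))
```

A pure circular shift only changes the phase of the spectrum, so `spectral_loss(img, shifted)` is zero up to floating point, while additive noise changes the magnitudes and is penalized - the kind of behavioral detail that matters when such a loss is defined carelessly.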

cp4space.hatsya.com
104.9 parsecs away

At the end of the recent post on a combinatorial proof of Houston's identity, I ended with the following paragraph: This may seem paradoxical, but there's an analogous situation in fast matrix multiplication: the best known upper bound for the tensor rank of 4-by-4 matrix multiplication is 49, by applying two levels of Strassen's algorithm,...
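The 49 in the snippet comes from recursion: Strassen's algorithm multiplies two 2-by-2 block matrices with 7 multiplications instead of 8, so two levels applied to a 4-by-4 product use 7 * 7 = 49 scalar multiplications. A minimal recursive sketch, checked against the ordinary product:

```python
import numpy as np

def strassen(A, B):
    # Strassen's algorithm for n-by-n matrices, n a power of two.
    # Each level replaces 8 block multiplications with 7; on a 4x4
    # input the recursion bottoms out after two levels, performing
    # 7**2 = 49 scalar multiplications.
    n = A.shape[0]
    if n == 1:
        return A * B
    h = n // 2
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
    M1 = strassen(A11 + A22, B11 + B22)
    M2 = strassen(A21 + A22, B11)
    M3 = strassen(A11, B12 - B22)
    M4 = strassen(A22, B21 - B11)
    M5 = strassen(A11 + A12, B22)
    M6 = strassen(A21 - A11, B11 + B12)
    M7 = strassen(A12 - A22, B21 + B22)
    C11 = M1 + M4 - M5 + M7
    C12 = M3 + M5
    C21 = M2 + M4
    C22 = M1 - M2 + M3 + M6
    return np.block([[C11, C12], [C21, C22]])

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))
# strassen(A, B) agrees with the ordinary product A @ B
```

This only illustrates the counting argument; the tensor-rank question in the post is whether 49 is optimal, which recursion alone cannot settle.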