Explore: select a destination

You are here: almostsuremath.com
mkatkov.wordpress.com (13.8 parsecs away)
For a probability space $latex (\Omega, \mathcal{F}, \mathbb{P})$ with $latex A \in \mathcal{F}$, the indicator random variable $latex {\bf 1}_A : \Omega \rightarrow \mathbb{R}$ is defined by $latex {\bf 1}_A(\omega) = \left\{ \begin{array}{cc} 1, & \omega \in A \\ 0, & \omega \notin A \end{array} \right.$ Then the expected value of the indicator variable is the probability of the event $latex \omega \in...
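To spell out the truncated claim (a standard one-line computation, not part of the excerpt): since $latex {\bf 1}_A$ takes only the values $latex 0$ and $latex 1$,

$latex \mathbb{E}[{\bf 1}_A] = 1 \cdot \mathbb{P}(\omega \in A) + 0 \cdot \mathbb{P}(\omega \notin A) = \mathbb{P}(A).$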
francisbach.com (13.4 parsecs away)

yang-song.net (12.6 parsecs away)
This blog post focuses on a promising new direction for generative modeling. We can learn score functions (gradients of log probability density functions) on a large number of noise-perturbed data distributions, then generate samples with Langevin-type sampling. The resulting generative models, often called score-based generative models, have several important advantages over existing model families: GAN-level sample quality without adversarial training, flexible model architectures, exact log-likelihood ...
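As a rough illustration of the "Langevin-type sampling" step mentioned in the excerpt, here is a minimal NumPy sketch (my own, not from yang-song.net); the analytic score of a standard Gaussian stands in for a learned score network, and real score-based models anneal this procedure over a sequence of noise levels.

```python
import numpy as np

def langevin_sample(score, x0, step=1e-2, n_steps=1000, rng=None):
    """Unadjusted Langevin dynamics:
    x <- x + (step / 2) * score(x) + sqrt(step) * standard normal noise."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.array(x0, dtype=float)
    for _ in range(n_steps):
        noise = rng.standard_normal(x.shape)
        x = x + 0.5 * step * score(x) + np.sqrt(step) * noise
    return x

# Toy stand-in: the score of a standard Gaussian is grad_x log p(x) = -x.
# A score-based generative model would replace this with a trained network.
gaussian_score = lambda x: -x

rng = np.random.default_rng(0)
samples = np.stack([langevin_sample(gaussian_score, np.zeros(2), rng=rng)
                    for _ in range(200)])
print(samples.mean(axis=0), samples.std(axis=0))  # roughly zero mean, unit std
```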
extremal010101.wordpress.com (46.7 parsecs away)
With Alexandros Eskenazis we posted a paper on arXiv, "Learning low-degree functions from a logarithmic number of random queries", exponentially improving the randomized query complexity for low-degree functions. Perhaps a very basic question one asks in learning theory is the following: there is an unknown function $latex f : \{-1,1\}^{n} \to \mathbb{R}$, and we are...
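As background (a standard definition, not part of the excerpt): a function $latex f : \{-1,1\}^{n} \to \mathbb{R}$ has degree at most $latex d$ when its Walsh-Fourier expansion uses only monomials on at most $latex d$ coordinates,

$latex f(x) = \sum_{|S| \le d} \hat{f}(S) \prod_{i \in S} x_i, \qquad S \subseteq \{1, \dots, n\},$

and the query-complexity question asks how few random evaluations $latex (x, f(x))$ suffice to recover such an $latex f$.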