Explore >> Select a destination


You are here

www.ethanepperly.com
fa.bianp.net
2.6 parsecs away

The Langevin algorithm is a simple and powerful method to sample from a probability distribution. It's a key ingredient of some machine learning methods such as diffusion models and differentially private learning. In this post, I'll derive a simple convergence analysis of this method in the special case when the ...
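The update rule behind the method in that teaser is the (unadjusted) Langevin algorithm: $x \leftarrow x + h \nabla \log p(x) + \sqrt{2h}\,\xi$ with Gaussian noise $\xi$. A minimal sketch of my own (not code from the post), sampling a standard normal target:

```python
import numpy as np

def langevin_sample(grad_log_p, x0, step=0.05, n_steps=20_000, seed=None):
    """Unadjusted Langevin algorithm:
    x <- x + step * grad log p(x) + sqrt(2 * step) * standard normal noise."""
    rng = np.random.default_rng(seed)
    x = x0
    samples = np.empty(n_steps)
    for k in range(n_steps):
        x = x + step * grad_log_p(x) + np.sqrt(2 * step) * rng.standard_normal()
        samples[k] = x
    return samples

# Target: standard normal density, so grad log p(x) = -x.
xs = langevin_sample(lambda x: -x, x0=0.0, seed=0)
burned = xs[5_000:]                     # discard burn-in
print(burned.mean(), burned.std())      # roughly 0 and 1
```

For small step sizes the chain's stationary distribution is close to, but not exactly, the target; quantifying that bias is precisely what a convergence analysis like the post's addresses.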
almostsuremath.com
4.2 parsecs away

The Rademacher distribution is probably the simplest nontrivial probability distribution that you can imagine. This is a discrete distribution taking only the two possible values $\{1,-1\}$, each occurring with equal probability. A random variable $X$ has the Rademacher distribution if $\mathbb{P}(X=1)=\mathbb{P}(X=-1)=1/2$. A Rademacher sequence is an IID sequence of...
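A Rademacher sequence as defined above is easy to simulate; a quick NumPy illustration of my own (not the post's code):

```python
import numpy as np

rng = np.random.default_rng(42)

# Rademacher: each entry is +1 or -1 with probability 1/2, drawn independently.
x = rng.choice([-1, 1], size=10_000)

print(x[:8])
print(x.mean())  # close to 0, since E[X] = 0 and Var(X) = 1
```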
extremal010101.wordpress.com
3.2 parsecs away

Suppose we want to understand under what conditions on $B$ the inequality $\mathbb{E}\, B(f(X), g(Y)) \leq B(\mathbb{E} f(X), \mathbb{E} g(Y))$ holds for all test functions, say real-valued $f, g$, where $X, Y$ are some random variables (not necessarily all possible random variables!). If $X=Y$, i.e., $X$ and $Y$ are...
fabricebaudoin.blog
33.2 parsecs away

In this section, we consider a diffusion operator $L=\sum_{i,j=1}^n \sigma_{ij}(x) \frac{\partial^2}{\partial x_i \partial x_j} + \sum_{i=1}^n b_i(x) \frac{\partial}{\partial x_i}$, where $b_i$ and $\sigma_{ij}$ are continuous functions on $\mathbb{R}^n$ and, for every $x \in \mathbb{R}^n$, the matrix $(\sigma_{ij}(x))_{1\le i,j\le n}$ is symmetric and nonnegative. Our...
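For one concrete instance of the operator above: taking $\sigma = \tfrac{1}{2} I$ and $b = 0$ gives $L = \tfrac{1}{2}\Delta$, the generator of Brownian motion. A small symbolic check of my own (not from the post), applying $L$ to $f(x, y) = x^2 + y^2$:

```python
import sympy as sp

x, y = sp.symbols('x y')
vars_ = [x, y]
f = x**2 + y**2

# L f = sum_ij sigma_ij * d^2 f / dx_i dx_j + sum_i b_i * df / dx_i,
# here with sigma = (1/2) * identity and b = 0, i.e. L = (1/2) * Laplacian.
sigma = sp.Rational(1, 2) * sp.eye(2)
b = sp.Matrix([0, 0])

Lf = sum(sigma[i, j] * sp.diff(f, vars_[i], vars_[j])
         for i in range(2) for j in range(2))
Lf += sum(b[i] * sp.diff(f, vars_[i]) for i in range(2))

print(sp.simplify(Lf))  # -> 2, i.e. (1/2) * (2 + 2)
```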