Explore >> Select a destination

You are here: www.ethanepperly.com

nickhar.wordpress.com (4.8 parsecs away)

1. Low-rank approximation of matrices. Let $A$ be an arbitrary $n \times m$ matrix. We assume $n \leq m$. We consider the problem of approximating $A$ by a low-rank matrix. For example, we could seek to find a rank-$s$ matrix $B$ minimizing $\lVert A - B \rVert$ ...
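
A minimal sketch of this kind of approximation, assuming the error is measured in the Frobenius or spectral norm (in which case truncating the singular value decomposition gives the best rank-$s$ approximation, by the Eckart–Young theorem); the function name here is illustrative, not taken from the post:

```python
import numpy as np

def best_rank_s_approximation(A, s):
    """Best rank-s approximation of A in the Frobenius/spectral norm,
    obtained by truncating the singular value decomposition."""
    U, sigma, Vt = np.linalg.svd(A, full_matrices=False)
    return U[:, :s] @ np.diag(sigma[:s]) @ Vt[:s, :]

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 80))
B = best_rank_s_approximation(A, s=5)
print(np.linalg.matrix_rank(B), np.linalg.norm(A - B))  # rank 5, residual norm
```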

fa.bianp.net (2.8 parsecs away)

The Langevin algorithm is a simple and powerful method to sample from a probability distribution. It's a key ingredient of some machine learning methods such as diffusion models and differentially private learning. In this post, I'll derive a simple convergence analysis of this method in the special case when the ...
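
For reference, a minimal sketch of the unadjusted Langevin iteration $x_{k+1} = x_k + \eta \nabla \log \pi(x_k) + \sqrt{2\eta}\,\xi_k$; the step size, target (a standard Gaussian), and function name below are illustrative assumptions, not details from the post:

```python
import numpy as np

def langevin_sample(grad_log_p, x0, step, n_steps, rng):
    """Unadjusted Langevin algorithm:
    x_{k+1} = x_k + step * grad log p(x_k) + sqrt(2*step) * gaussian noise."""
    x = np.array(x0, dtype=float)
    samples = []
    for _ in range(n_steps):
        x = x + step * grad_log_p(x) + np.sqrt(2 * step) * rng.standard_normal(x.shape)
        samples.append(x.copy())
    return np.array(samples)

# Example: sample from a standard Gaussian, where grad log p(x) = -x.
rng = np.random.default_rng(0)
samples = langevin_sample(lambda x: -x, x0=np.zeros(2), step=0.05, n_steps=5000, rng=rng)
print(samples.mean(axis=0), samples.var(axis=0))  # roughly zero mean, unit variance
```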

djalil.chafai.net (3.1 parsecs away)

This post is mainly devoted to a probabilistic proof of a famous theorem due to Schoenberg on radial positive definite functions. Let us begin with a general notion: we say that \( K:\mathbb{R}^d\times\mathbb{R}^d\rightarrow\mathbb{R} \) is a positive definite kernel when
\[ \forall n\geq1,\ \forall x_1,\ldots,x_n\in\mathbb{R}^d,\ \forall c\in\mathbb{C}^n,\quad \sum_{i=1}^n\sum_{j=1}^n c_i K(x_i,x_j)\bar{c}_j \geq 0. \]
When \( K \) is symmetric, i.e. \( K(x,y)=K(y,x) \) for...
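
As a concrete numerical illustration of the definition (not from the post itself): for a real symmetric kernel, the condition above is equivalent to every Gram matrix \( (K(x_i,x_j))_{i,j} \) being positive semidefinite, which a small sketch can check for the radial Gaussian kernel \( K(x,y)=e^{-\|x-y\|^2} \); the helper name is hypothetical:

```python
import numpy as np

def gram_matrix(points, kernel):
    """Gram matrix K_ij = kernel(x_i, x_j) for an array of points."""
    n = len(points)
    return np.array([[kernel(points[i], points[j]) for j in range(n)] for i in range(n)])

gaussian = lambda x, y: np.exp(-np.sum((x - y) ** 2))

rng = np.random.default_rng(0)
points = rng.standard_normal((20, 3))   # 20 points in R^3
K = gram_matrix(points, gaussian)
eigenvalues = np.linalg.eigvalsh(K)     # K is symmetric, so eigvalsh applies
print(eigenvalues.min() >= -1e-10)      # all eigenvalues are (numerically) nonnegative
```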

sander.ai (18.4 parsecs away)

Diffusion models have become very popular over the last two years. There is an underappreciated link between diffusion models and autoencoders.