djalil.chafai.net
fa.bianp.net
The Langevin algorithm is a simple and powerful method to sample from a probability distribution. It's a key ingredient of some machine learning methods such as diffusion models and differentially private learning. In this post, I'll derive a simple convergence analysis of this method in the special case when the ...
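The post's convergence analysis isn't reproduced in this excerpt; as a rough illustration of the method it refers to, here is a minimal sketch of the unadjusted Langevin iteration in Python. The step size, iteration count, and the Gaussian target are illustrative assumptions, not values taken from the post.

```python
import numpy as np

def langevin_sample(grad_log_p, x0, step=0.01, n_iters=20000, rng=None):
    """Unadjusted Langevin algorithm (sketch):
    x_{k+1} = x_k + step * grad log p(x_k) + sqrt(2 * step) * N(0, I)."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float)
    samples = np.empty((n_iters,) + x.shape)
    for k in range(n_iters):
        noise = rng.standard_normal(x.shape)
        x = x + step * grad_log_p(x) + np.sqrt(2.0 * step) * noise
        samples[k] = x
    return samples

# Toy usage: target a standard Gaussian, where grad log p(x) = -x.
chain = langevin_sample(lambda x: -x, x0=np.zeros(2))
print(chain.mean(axis=0), chain.var(axis=0))  # should be near 0 and roughly 1
```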
fabricebaudoin.blog
In this lecture, we study Sobolev inequalities on Dirichlet spaces. The approach we develop is related to Hardy-Littlewood-Sobolev theory. The link between the Hardy-Littlewood-Sobolev theory and heat kernel upper bounds is due to Varopoulos, but the proof I give below I learnt from my colleague Rodrigo Bañuelos. It bypasses the Marcinkiewicz interpolation theorem that was originally used...
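For orientation only (a standard statement, not quoted from the lecture): Varopoulos's result relates an on-diagonal heat kernel upper bound to a Sobolev inequality for the Dirichlet form. A sketch of the equivalence, assuming a dimension parameter n > 2:

```latex
% Standard form of the Varopoulos equivalence (n > 2), stated as a sketch:
\[
  \sup_x \, p_t(x,x) \;\le\; C\, t^{-n/2} \ \ \text{for all } t > 0
  \qquad\Longleftrightarrow\qquad
  \|f\|_{L^{2n/(n-2)}}^{2} \;\le\; C'\, \mathcal{E}(f,f) \ \ \text{for all } f \text{ in the Dirichlet domain.}
\]
```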
windowsontheory.org
Previous post: ML theory with bad drawings. Next post: What do neural networks learn and when do they learn it; see also all seminar posts and the course webpage. Lecture video (starts on slide 2 since I hit the record button 30 seconds too late - sorry!) - slides (PDF) - slides (PowerPoint with ink and animation)...
mycqstate.wordpress.com
Today I'd like to sketch a question that's been pushing me in a lot of different directions over the past few years: some sane, others less so; few fruitful, but all instructive. The question is motivated by the problem of placing upper bounds on the amount of entanglement needed to play a two-player non-local...