djalil.chafai.net
mkatkov.wordpress.com
For a probability space $latex (\Omega, \mathcal{F}, \mathbb{P})$ with $latex A \in \mathcal{F}$, the indicator random variable is $latex {\bf 1}_A : \Omega \rightarrow \mathbb{R}$, $latex {\bf 1}_A(\omega) = \left\{ \begin{array}{cc} 1, & \omega \in A \\ 0, & \omega \notin A \end{array} \right.$ Then the expected value of the indicator variable is the probability of the event $latex \omega \in...
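The excerpt's identity, $latex \mathbb{E}[{\bf 1}_A] = \mathbb{P}(A)$, can be checked numerically. A minimal sketch, assuming a hypothetical event $latex A$ = "a uniform draw on $latex [0,1)$ falls below 0.3", so that $latex \mathbb{P}(A) = 0.3$:

```python
import random

random.seed(0)
n = 100_000

# Indicator 1_A(omega): 1 if omega lands in A = [0, 0.3), else 0.
samples = [1 if random.random() < 0.3 else 0 for _ in range(n)]

# The empirical mean of the indicator estimates E[1_A],
# which by the identity above should be close to P(A) = 0.3.
estimate = sum(samples) / n
print(estimate)
```

By the law of large numbers the printed estimate converges to 0.3 as `n` grows.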
fabricebaudoin.blog
In this lecture, we study Sobolev inequalities on Dirichlet spaces. The approach we develop is related to Hardy-Littlewood-Sobolev theory. The link between the Hardy-Littlewood-Sobolev theory and heat kernel upper bounds is due to Varopoulos, but the proof I give below I learnt from my colleague Rodrigo Bañuelos. It bypasses the Marcinkiewicz interpolation theorem that was originally used...
windowsontheory.org
Previous post: ML theory with bad drawings. Next post: What do neural networks learn and when do they learn it. See also all seminar posts and the course webpage. Lecture video (starts on slide 2 since I hit the record button 30 seconds too late - sorry!) - slides (pdf) - slides (PowerPoint with ink and animation)...
resources.paperdigest.org
The Conference on Neural Information Processing Systems (NIPS) is one of the top machine learning conferences in the world. The Paper Digest Team analyzes all papers published at NIPS in past years and presents the 15 most influential papers for each year. This ranking list is automatically constructed...