xorshammer.com
Nonstandard Analysis is usually used to introduce infinitesimals into the real numbers in an attempt to make arguments in analysis more intuitive. The idea is that you construct a superset $latex \mathbb{R}^*$ which contains the reals and also some infinitesimals, prove that some statement holds of $latex \mathbb{R}^*$, and then use a general "transfer principle"...
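A classic illustration of how infinitesimals get used (a standard textbook example, not taken from the truncated post): for any nonzero infinitesimal $latex \varepsilon \in \mathbb{R}^*$, the derivative of a function $latex f$ differentiable at $latex x$ is recovered as the standard part of a difference quotient,

$latex f'(x) = \mathrm{st}\left( \dfrac{f(x+\varepsilon) - f(x)}{\varepsilon} \right).$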
mkatkov.wordpress.com
For a probability space $latex (\Omega, \mathcal{F}, \mathbb{P})$ with $latex A \in \mathcal{F}$, the indicator random variable $latex {\bf 1}_A : \Omega \rightarrow \mathbb{R}$ is defined by $latex {\bf 1}_A(\omega) = \left\{ \begin{array}{cc} 1, & \omega \in A \\ 0, & \omega \notin A \end{array} \right.$ Then the expected value of the indicator variable is the probability of the event $latex \omega \in...
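The computation behind that identity is a single line (a standard fact, supplying the step the truncated excerpt is heading toward):

$latex \mathbb{E}[{\bf 1}_A] = 1 \cdot \mathbb{P}(A) + 0 \cdot \mathbb{P}(\Omega \setminus A) = \mathbb{P}(A).$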
extremal010101.wordpress.com
Suppose we want to understand under what conditions on $latex B$ the inequality $latex \mathbb{E} B(f(X), g(Y)) \leq B(\mathbb{E}f(X), \mathbb{E} g(Y))$ holds for all test functions, say real-valued $latex f, g$, where $latex X, Y$ are some random variables (not necessarily all possible random variables!). If $latex X=Y$, i.e., $latex X$ and $latex Y$ are...
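One sufficient condition worth recording (a standard consequence of Jensen's inequality, not necessarily the direction the truncated post takes): if $latex B$ is concave, then applying the multivariate Jensen inequality to the random vector $latex (f(X), g(Y))$ gives

$latex \mathbb{E}\, B(f(X), g(Y)) \leq B\big(\mathbb{E} f(X), \mathbb{E} g(Y)\big)$

for any joint distribution of $latex X$ and $latex Y$, assuming the relevant integrability.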
windowsontheory.org
Previous post: ML theory with bad drawings. Next post: What do neural networks learn and when do they learn it. See also all seminar posts and the course webpage. Lecture video (starts at slide 2 since I hit the record button 30 seconds too late - sorry!) - slides (pdf) - slides (PowerPoint with ink and animation)...