pfzhang.wordpress.com
algorithmsoup.wordpress.com
The ``probabilistic method'' is the art of applying probabilistic thinking to non-probabilistic problems. Applications of the probabilistic method often feel like magic. Here is my favorite example: Theorem (Erdős, 1965). Call a set $X$ sum-free if for all $a, b \in X$, we have $a + b \not\in X$. For any finite...
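The excerpt cuts off mid-theorem, but the sum-free definition itself is easy to illustrate. A minimal Python sketch (the helper name `is_sum_free` is my own, not from the post):

```python
def is_sum_free(X):
    """Return True if no a, b in X (repeats allowed) have a + b in X."""
    S = set(X)
    return all(a + b not in S for a in S for b in S)

# The odd numbers are sum-free, since odd + odd is even:
print(is_sum_free({1, 3, 5, 7, 9}))   # True
# {1, 2, 3} is not, since 1 + 2 = 3:
print(is_sum_free({1, 2, 3}))         # False
```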
mikespivey.wordpress.com
The Riemann zeta function $\zeta(s)$ can be expressed as $\zeta(s) = \sum_{n=1}^{\infty} \frac{1}{n^s}$, for complex numbers $s$ whose real part is greater than 1. By analytic continuation, $\zeta(s)$ can be extended to all complex numbers except $s = 1$. The power sum $S_a(M)$ is given by $S_a(M) =...
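The definition of $S_a(M)$ is cut off in the excerpt, so this only sketches the Dirichlet-series definition of $\zeta(s)$ via its partial sums; the function name is my own:

```python
import math

def zeta_partial(s, N):
    """Partial sum of the Dirichlet series: sum_{n=1}^{N} 1 / n**s."""
    return sum(1 / n**s for n in range(1, N + 1))

# For s = 2 the series converges to zeta(2) = pi^2 / 6 ~ 1.6449.
print(zeta_partial(2, 100000))
print(math.pi**2 / 6)
```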
nhigham.com
The Cayley-Hamilton Theorem says that a square matrix $A$ satisfies its characteristic equation, that is, $p(A) = 0$ where $p(t) = \det(tI-A)$ is the characteristic polynomial. This statement is not simply the substitution ``$p(A) = \det(A - A) = 0$'', which is not valid since $t$ must remain a scalar...
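A quick sanity check of what the theorem asserts, worked by hand for a $2 \times 2$ matrix, where $p(t) = t^2 - \mathrm{trace}(A)\,t + \det(A)$ (the example matrix is my own choice):

```python
# Verify Cayley-Hamilton for a 2x2 matrix: p(A) = A^2 - trace(A)*A + det(A)*I = 0.

def matmul(A, B):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[2, 1],
     [3, 4]]
trace = A[0][0] + A[1][1]                   # 6
det = A[0][0]*A[1][1] - A[0][1]*A[1][0]     # 5

A2 = matmul(A, A)
I = [[1, 0], [0, 1]]
pA = [[A2[i][j] - trace * A[i][j] + det * I[i][j] for j in range(2)]
      for i in range(2)]
print(pA)   # [[0, 0], [0, 0]]
```

Note that the scalar substitutions happen only in the coefficients of $p$; the matrix $A$ is plugged into the polynomial, not into the determinant.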
fa.bianp.net
The Langevin algorithm is a simple and powerful method to sample from a probability distribution. It's a key ingredient of some machine learning methods such as diffusion models and differentially private learning. In this post, I'll derive a simple convergence analysis of this method in the special case when the ...
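The excerpt doesn't show the iteration, but the standard unadjusted Langevin update for a target density proportional to $e^{-U(x)}$ is $x_{k+1} = x_k - \eta\,\nabla U(x_k) + \sqrt{2\eta}\,\xi_k$ with $\xi_k \sim N(0,1)$. A toy sketch targeting a standard Gaussian (step size and iteration counts are arbitrary choices of mine, not from the post):

```python
import math
import random

def langevin_sample(grad_U, x0, step, n_steps, rng):
    """Unadjusted Langevin algorithm in 1D:
    x_{k+1} = x_k - step * grad_U(x_k) + sqrt(2 * step) * N(0, 1)."""
    x = x0
    for _ in range(n_steps):
        x = x - step * grad_U(x) + math.sqrt(2 * step) * rng.gauss(0, 1)
    return x

# Target: standard Gaussian, U(x) = x^2 / 2, so grad_U(x) = x.
rng = random.Random(0)
samples = [langevin_sample(lambda x: x, 0.0, 0.01, 500, rng)
           for _ in range(2000)]
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
print(mean, var)   # roughly 0 and 1
```

The finite step size introduces a small bias in the stationary distribution, which is exactly the kind of error a convergence analysis like the post's would quantify.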