nhigham.com
by Sven Hammarling and Nick Higham. It is often thought that Jim Wilkinson developed backward error analysis because of his early involvement in solving systems of linear equations. In his 1970 Turing lecture [5] he described an experience, during World War II at the Armament Research Department, of solving a system of twelve linear equations...
djalil.chafai.net
Let $X$ be an $n\times n$ complex matrix. The eigenvalues $\lambda_1(X), \ldots, \lambda_n(X)$ of $X$ are the roots in $\mathbb{C}$ of its characteristic polynomial. We label them in such a way that $|\lambda_1(X)|\geq\cdots\geq|\lambda_n(X)|$ with growing phases. The spectral radius of $X$ is $\rho(X):=|\lambda_1(X)|$. The singular values $s_1(X)\geq\cdots\geq s_n(X)$ of $X$ are the eigenvalues of the positive semi-definite Hermitian...
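These definitions are easy to sanity-check numerically. Here is a minimal sketch (mine, not the post's, assuming only numpy): it sorts the eigenvalues by decreasing modulus, reads off the spectral radius, and recovers the singular values both from $X^*X$ and from an SVD.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
X = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

# Eigenvalues labeled so that |lambda_1| >= ... >= |lambda_n|.
lam = np.linalg.eigvals(X)
lam = lam[np.argsort(-np.abs(lam))]
rho = np.abs(lam[0])  # spectral radius rho(X) = |lambda_1(X)|

# Singular values s_1 >= ... >= s_n: eigenvalues of the Hermitian
# positive semi-definite matrix sqrt(X^* X), i.e. square roots of the
# eigenvalues of X^* X (eigvalsh returns them in ascending order).
s_from_gram = np.sqrt(np.linalg.eigvalsh(X.conj().T @ X))[::-1]
s_from_svd = np.linalg.svd(X, compute_uv=False)

assert np.allclose(s_from_gram, s_from_svd)
print("spectral radius:", rho, " largest singular value:", s_from_svd[0])
```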
lucatrevisan.wordpress.com
The spectral norm of the infinite $d$-regular tree is $2\sqrt{d-1}$. We will see what this means and how to prove it. When talking about the expansion of random graphs, about the construction of Ramanujan expanders, as well as about sparsifiers, community detection, and several other problems, the number $2\sqrt{d-1}$...
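As a quick numerical illustration (my sketch, not code from the post), one can truncate the infinite tree at a finite depth and compute the spectral norm of the resulting adjacency matrix; by eigenvalue interlacing these norms increase with the depth and converge to $2\sqrt{d-1}$. The construction below assumes the usual convention that the root has $d$ children and every other internal vertex has $d-1$.

```python
import numpy as np

def tree_ball_adjacency(d, depth):
    """Adjacency matrix of the depth-`depth` ball in the infinite d-regular
    tree: the root has d children, every other internal vertex has d - 1."""
    edges, frontier, n = [], [0], 1
    for _ in range(depth):
        new_frontier = []
        for v in frontier:
            for _ in range(d if v == 0 else d - 1):
                edges.append((v, n))
                new_frontier.append(n)
                n += 1
        frontier = new_frontier
    A = np.zeros((n, n))
    for u, v in edges:
        A[u, v] = A[v, u] = 1.0
    return A

d = 3
target = 2 * np.sqrt(d - 1)  # = 2*sqrt(2) ~ 2.828 for d = 3
for depth in (2, 4, 6, 8):
    A = tree_ball_adjacency(d, depth)
    # For a symmetric matrix the spectral norm is the largest |eigenvalue|.
    norm = np.abs(np.linalg.eigvalsh(A)).max()
    print(f"depth {depth}: {A.shape[0]} vertices, norm {norm:.4f} -> {target:.4f}")
```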
fa.bianp.net
The Langevin algorithm is a simple and powerful method to sample from a probability distribution. It's a key ingredient of some machine learning methods such as diffusion models and differentially private learning. In this post, I'll derive a simple convergence analysis of this method in the special case when the ...
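For concreteness, here is a minimal sketch of the (unadjusted) Langevin algorithm the post studies, assuming the standard update $x_{k+1} = x_k - \gamma\nabla f(x_k) + \sqrt{2\gamma}\,\xi_k$ with i.i.d. standard Gaussian $\xi_k$, targeting the density proportional to $e^{-f}$. The Gaussian example and step size are my own choices, not the post's.

```python
import numpy as np

def langevin_sample(grad_f, x0, step=0.01, n_iters=10_000, rng=None):
    """Unadjusted Langevin algorithm: x <- x - step*grad_f(x) + sqrt(2*step)*noise.
    Approximately samples the density proportional to exp(-f),
    up to an O(step) discretization bias."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float)
    samples = np.empty((n_iters,) + x.shape)
    for k in range(n_iters):
        x = x - step * grad_f(x) + np.sqrt(2 * step) * rng.standard_normal(x.shape)
        samples[k] = x
    return samples

# Example target: standard Gaussian, f(x) = ||x||^2 / 2, so grad_f(x) = x.
samples = langevin_sample(lambda x: x, x0=np.zeros(1), step=0.05, n_iters=50_000)
burn = samples[10_000:]  # discard burn-in before estimating moments
print("mean ~ 0:", burn.mean(), " variance ~ 1:", burn.var())
```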