You are here: matheuscmss.wordpress.com

djalil.chafai.net (3.2 parsecs away)

Let $X$ be an $n\times n$ complex matrix. The eigenvalues $\lambda_1(X), \ldots, \lambda_n(X)$ of $X$ are the roots in $\mathbb{C}$ of its characteristic polynomial. We label them in such a way that $|\lambda_1(X)|\geq\cdots\geq|\lambda_n(X)|$ with growing phases. The spectral radius of $X$ is $\rho(X):=|\lambda_1(X)|$. The singular values $s_1(X)\geq\cdots\geq s_n(X)$ of $X$ are the eigenvalues of the positive semi-definite Hermitian...
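The definitions in this excerpt can be checked numerically. Below is a minimal sketch with numpy, using a small complex matrix chosen purely for illustration: it orders the eigenvalues by decreasing modulus, reads off the spectral radius, and verifies that the singular values are the square roots of the eigenvalues of $X^*X$.

```python
import numpy as np

# Hypothetical 3x3 complex matrix, chosen only for illustration.
X = np.array([[1 + 2j, 0, 1],
              [3, 1j, 2],
              [0, 1, -1]], dtype=complex)

# Eigenvalues, labeled so that |lambda_1| >= ... >= |lambda_n|.
eigs = np.linalg.eigvals(X)
eigs = eigs[np.argsort(-np.abs(eigs))]

rho = np.abs(eigs[0])  # spectral radius rho(X) = |lambda_1(X)|

# Singular values s_1 >= ... >= s_n (numpy returns them in decreasing order).
svals = np.linalg.svd(X, compute_uv=False)

# The s_i are the square roots of the eigenvalues of the PSD matrix X* X.
gram_eigs = np.sqrt(np.abs(np.linalg.eigvals(X.conj().T @ X)))
assert np.allclose(np.sort(svals), np.sort(gram_eigs))

# In general rho(X) <= s_1(X), with equality when X is normal.
assert rho <= svals[0] + 1e-12
```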
datuan5pdes.wordpress.com (3.9 parsecs away)

1 post published by datuan5pdes on October 10, 2015
nhigham.com (1.4 parsecs away)

The spectral radius $\rho(A)$ of a square matrix $A\in\mathbb{C}^{n\times n}$ is the largest absolute value of any eigenvalue of $A$: $\rho(A) = \max\{\, |\lambda| : \lambda~\text{is an eigenvalue of}~A \,\}$. For Hermitian matrices (or more generally normal matrices, those satisfying $AA^* = A^*A$) the spectral radius is just...
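For a normal matrix the excerpt's claim amounts to $\rho(A)$ coinciding with the operator 2-norm (the largest singular value). A short numpy check of this, on a small Hermitian matrix made up for the example:

```python
import numpy as np

# Hypothetical Hermitian matrix, used only to illustrate the identity.
A = np.array([[2, 1 - 1j],
              [1 + 1j, 3]], dtype=complex)
assert np.allclose(A, A.conj().T)  # A is Hermitian, hence normal

# Spectral radius: largest absolute value of an eigenvalue.
rho = max(abs(lam) for lam in np.linalg.eigvals(A))

# For normal A, the spectral radius equals the operator 2-norm,
# i.e. the largest singular value of A.
assert np.isclose(rho, np.linalg.norm(A, 2))
```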
www.jeremykun.com (26.1 parsecs away)

Machine learning is broadly split into two camps, statistical learning and non-statistical learning. The latter we've started to get a good picture of on this blog; we approached perceptrons, decision trees, and neural networks from a non-statistical perspective. And generally "statistical" learning is just that, a perspective. Data is phrased in terms of independent and dependent variables, and statistical techniques are leveraged against the data. In this post we'll focus on the simplest example of thi...
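The excerpt is cut off before naming its example, so the following is only a generic sketch of the statistical framing it describes: synthetic data with one independent and one dependent variable, fit by ordinary least squares. The data and coefficients are invented for illustration.

```python
import numpy as np

# Synthetic data: independent variable x, dependent variable y = 2x + 1 + noise.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 50)
y = 2.0 * x + 1.0 + 0.1 * rng.standard_normal(50)

# Least-squares fit of y ~ a*x + b via numpy's lstsq.
design = np.column_stack([x, np.ones_like(x)])
(a, b), *_ = np.linalg.lstsq(design, y, rcond=None)

# The recovered slope and intercept should be close to the true values.
assert abs(a - 2.0) < 0.2 and abs(b - 1.0) < 0.2
```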