nhigham.com

djalil.chafai.net
Let $X$ be an $n\times n$ complex matrix. The eigenvalues $\lambda_1(X), \ldots, \lambda_n(X)$ of $X$ are the roots in $\mathbb{C}$ of its characteristic polynomial. We label them in such a way that $|\lambda_1(X)|\geq\cdots\geq|\lambda_n(X)|$ with growing phases. The spectral radius of $X$ is $\rho(X):=|\lambda_1(X)|$. The singular values $s_1(X)\geq\cdots\geq s_n(X)$ of $X$ are the eigenvalues of the positive semi-definite Hermitian...
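A minimal sketch (my illustration, not from the post) of these definitions in NumPy: eigenvalues sorted by decreasing modulus, the spectral radius, and the singular values checked against the eigenvalues of $X^*X$.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
X = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

# Eigenvalues, labeled so that |lambda_1| >= ... >= |lambda_n|.
lam = np.linalg.eigvals(X)
lam = lam[np.argsort(-np.abs(lam))]
rho = np.abs(lam[0])  # spectral radius rho(X) = |lambda_1(X)|

# Singular values s_1 >= ... >= s_n: NumPy returns them in decreasing order.
s = np.linalg.svd(X, compute_uv=False)

# They are the square roots of the eigenvalues of the Hermitian PSD matrix X^* X.
s_check = np.sqrt(np.linalg.eigvalsh(X.conj().T @ X))[::-1]
assert np.allclose(s, s_check)
```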
xorshammer.com
There are a number of applications of logic to ordinary mathematics, with most coming from (I believe) model theory. One of the easiest and most striking that I know is called Ax's Theorem. Ax's Theorem: For every polynomial function $f\colon \mathbb{C}^n\to \mathbb{C}^n$, if $f$ is injective, then $f$ is surjective. Very...
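The engine behind the proof is that the analogous statement over a finite field is trivial (an injective self-map of a finite set is automatically onto), and model theory transfers it to $\mathbb{C}$. A brute-force sketch of the finite-field case, with a hypothetical polynomial map of my own choosing:

```python
import itertools

# Sketch (my illustration, not from the post): over a finite field F_p the
# theorem is immediate, since an injective map from a finite set to itself
# must also be surjective. Ax's proof transfers this to C via model theory.
p, n = 5, 2

def f(x, y):
    # A hypothetical polynomial map F_p^2 -> F_p^2: (x, y) |-> (x + y^2, y).
    return ((x + y * y) % p, y % p)

domain = list(itertools.product(range(p), repeat=n))
images = [f(*v) for v in domain]
injective = len(set(images)) == len(domain)
surjective = set(images) == set(domain)
assert injective and surjective  # on a finite set, injective forces surjective
```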
hbfs.wordpress.com
Evaluating polynomials is not a thing I do very often. When I do, it's for interpolation and splines; and traditionally those are done with relatively low-degree polynomials, cubic at most. There are a few rather simple tricks you can use to evaluate them efficiently, and we'll have a look at them. A polynomial is an...
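The classic such trick is Horner's rule, which evaluates a degree-$n$ polynomial with $n$ multiplications and $n$ additions instead of computing each power separately. A minimal sketch (my example, not necessarily the post's implementation):

```python
def horner(coeffs, x):
    """Evaluate c0 + c1*x + ... + cn*x^n with n multiplies and n adds."""
    acc = 0.0
    for c in reversed(coeffs):  # process the highest-degree coefficient first
        acc = acc * x + c
    return acc

# Cubic 1 + 2x + 3x^2 + 4x^3 at x = 2: 1 + 4 + 12 + 32 = 49.
assert horner([1.0, 2.0, 3.0, 4.0], 2.0) == 49.0
```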
teddykoker.com
Google AI recently released a paper, Rethinking Attention with Performers (Choromanski et al., 2020), which introduces Performer, a Transformer architecture which estimates the full-rank-attention mechanism using orthogonal random features to approximate the softmax kernel with linear space and time complexity. In this post we will investigate how this works, and how it is useful for the machine learning community.
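The core idea can be sketched in a few lines. Positive random features $\phi(x) = \exp(w\cdot x - \|x\|^2/2)/\sqrt{m}$ with $w \sim \mathcal{N}(0, I_d)$ give an unbiased estimate of the softmax kernel $\mathrm{SM}(x, y) = \exp(x\cdot y)$, since $\mathbb{E}[e^{w\cdot(x+y)}] = e^{\|x+y\|^2/2}$. This is my simplification of the paper's FAVOR+ mechanism; the paper additionally draws the $w$'s to be *orthogonal* to reduce variance, which is omitted here.

```python
import numpy as np

rng = np.random.default_rng(0)
d, m = 8, 100_000  # data dimension, number of random features
W = rng.standard_normal((m, d))  # i.i.d. Gaussian features (no orthogonalization)

def phi(x):
    # Positive random-feature map for the softmax kernel exp(x . y).
    return np.exp(W @ x - x @ x / 2.0) / np.sqrt(m)

x = rng.standard_normal(d) * 0.3
y = rng.standard_normal(d) * 0.3

exact = np.exp(x @ y)      # the softmax kernel value
approx = phi(x) @ phi(y)   # its random-feature estimate; should be close
```

In a Performer, this factorization lets attention be computed as $\phi(Q)(\phi(K)^\top V)$, avoiding the $n\times n$ attention matrix and yielding the linear space and time complexity mentioned above.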