www.ethanepperly.com

mcyoung.xyz

[AI summary] This text provides an in-depth explanation of linear algebra concepts, including vector spaces, linear transformations, matrix multiplication, and field extensions. It emphasizes understanding these concepts through the lens of linear maps and their composition, which naturally leads to the matrix multiplication formula. The text also touches on the distinction between vector spaces and abelian groups, and discusses field extensions such as [R:Q] and [C:R]. The author mentions their art blog and notes that the illustrations are their own drawings.
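
A quick sketch of that composition-to-multiplication correspondence, in my own notation rather than quoted from the linked post: if $f$ sends the basis vector $e_k$ to $\sum_j A_{jk}\, w_j$ and $g$ sends $w_j$ to $\sum_i B_{ij}\, u_i$, then

$(g \circ f)(e_k) = \sum_j A_{jk}\, g(w_j) = \sum_i \Big( \sum_j B_{ij} A_{jk} \Big) u_i,$

so the matrix of $g \circ f$ has entries $(BA)_{ik} = \sum_j B_{ij} A_{jk}$: composing linear maps reproduces exactly the matrix multiplication formula.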

nhigham.com

For a polynomial $\phi(t) = a_k t^k + \cdots + a_1 t + a_0$, where $a_k \in \mathbb{C}$ for all $k$, the matrix polynomial obtained by evaluating $\phi$ at $A \in \mathbb{C}^{n \times n}$ is $\phi(A) = a_k A^k + \cdots + a_1 A + a_0 I$. (Note that the constant term is...
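
A minimal sketch of evaluating such a matrix polynomial with Horner's rule in NumPy; the function name polyvalm and the highest-degree-first coefficient ordering are my own choices, not taken from the excerpt:

import numpy as np

def polyvalm(coeffs, A):
    # Evaluate phi(A) = a_k A^k + ... + a_1 A + a_0 I by Horner's rule.
    # coeffs is [a_k, ..., a_1, a_0] (highest degree first); A is square.
    n = A.shape[0]
    P = coeffs[0] * np.eye(n)         # start with a_k I
    for a in coeffs[1:]:
        P = P @ A + a * np.eye(n)     # Horner step: P <- P A + a I
    return P

A = np.array([[1.0, 2.0],
              [0.0, 3.0]])
print(polyvalm([1.0, -3.0, 2.0], A))  # phi(t) = t^2 - 3t + 2

Horner's rule uses k matrix multiplications for a degree-k polynomial and, as the note in the excerpt suggests, the constant term contributes $a_0 I$, not the scalar $a_0$.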

nickhar.wordpress.com

1. Low-rank approximation of matrices. Let $A$ be an arbitrary $n \times m$ matrix. We assume $n \leq m$. We consider the problem of approximating $A$ by a low-rank matrix. For example, we could seek to find a rank $s$ matrix $B$ minimizing $\lVert A - B...
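
The excerpt is cut off, but the classical solution to this minimization (in the Frobenius or spectral norm) is the truncated SVD, by the Eckart-Young-Mirsky theorem. A minimal NumPy sketch under that assumption, with names of my own choosing:

import numpy as np

def best_rank_s(A, s):
    # Best rank-s approximation of A in Frobenius/spectral norm,
    # obtained by keeping the s largest singular triplets of A.
    U, sigma, Vt = np.linalg.svd(A, full_matrices=False)
    return U[:, :s] @ np.diag(sigma[:s]) @ Vt[:s, :]

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 8))
B = best_rank_s(A, 2)
print(np.linalg.matrix_rank(B))     # 2
print(np.linalg.norm(A - B) ** 2)   # sum of the squared discarded singular values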

marcospereira.me

In this post we summarize the math behind deep learning and implement a simple network that achieves 85% accuracy classifying digits from the MNIST dataset.