You are here: nhigham.com

nickhar.wordpress.com

1. Low-rank approximation of matrices. Let $A$ be an arbitrary $n \times m$ matrix. We assume $n \leq m$. We consider the problem of approximating $A$ by a low-rank matrix. For example, we could seek to find a rank-$s$ matrix $B$ minimizing $\lVert A - B \rVert$ ...
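
The minimizer the excerpt alludes to is classical: in the Frobenius or spectral norm, the best rank-$s$ approximation of $A$ is obtained by truncating its SVD (the Eckart-Young theorem). A minimal numpy sketch, with an illustrative helper name and random test matrix (neither taken from the linked post):

```python
import numpy as np

def best_rank_s_approximation(A, s):
    """Best rank-s approximation of A in the Frobenius and spectral norms,
    obtained by truncating the SVD (Eckart-Young)."""
    U, sigma, Vt = np.linalg.svd(A, full_matrices=False)
    # Keep only the s largest singular values and their singular vectors.
    return U[:, :s] @ np.diag(sigma[:s]) @ Vt[:s, :]

A = np.random.default_rng(0).standard_normal((4, 6))
B = best_rank_s_approximation(A, 2)
print(np.linalg.matrix_rank(B))  # 2
print(np.linalg.norm(A - B))     # smallest possible error over all rank-2 B
```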

blog.georgeshakan.com

Principal Component Analysis (PCA) is a popular technique in machine learning for dimension reduction. It can be derived from the Singular Value Decomposition (SVD), which we will discuss in this post. We will cover the math, an example in Python, and finally some intuition. The math: SVD asserts that any $m \times d$ matrix ...
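
As a sketch of the derivation the excerpt describes (an illustration, not code from the linked post): center the data, take the SVD of the centered matrix, and project onto the leading right singular vectors.

```python
import numpy as np

def pca_via_svd(X, k):
    """Project the rows of X onto the top-k principal components,
    computed from the SVD of the centered data matrix."""
    Xc = X - X.mean(axis=0)  # center each feature
    U, sigma, Vt = np.linalg.svd(Xc, full_matrices=False)
    # Rows of Vt are the principal directions, ordered by variance explained.
    return Xc @ Vt[:k].T

X = np.random.default_rng(1).standard_normal((100, 5))
Z = pca_via_svd(X, 2)
print(Z.shape)  # (100, 2)
```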

hadrienj.github.io

In this post, we will learn about the Moore-Penrose pseudoinverse as a way to find an approximate solution where no exact solution exists. In some cases, a system ...
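
A small illustration of that idea using numpy's standard pseudoinverse (not code from the linked post): for an overdetermined system with no exact solution, the pseudoinverse yields the least-squares solution.

```python
import numpy as np

# Overdetermined system: 3 equations, 2 unknowns, no exact solution.
A = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0, 2.0])

x = np.linalg.pinv(A) @ b  # pseudoinverse gives the least-squares solution
print(x)
print(np.allclose(x, np.linalg.lstsq(A, b, rcond=None)[0]))  # True
```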

alanrendall.wordpress.com

The theorem of the title is about dividing smooth functions by other smooth functions or, in other words, representing a given smooth function in terms of products of other smooth functions. A large part of the account which follows is based on that in the book 'Normal Forms and Unfoldings for Local Dynamical Systems' by...
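
As a concrete instance of dividing one smooth function by others (an illustration only; not necessarily the theorem the post treats), Hadamard's lemma divides a smooth function vanishing at the origin by the coordinate functions:

```latex
% Hadamard's lemma: if f is smooth near 0 in R^n and f(0) = 0, then
\[
  f(x) = \sum_{i=1}^{n} x_i \, g_i(x),
  \qquad
  g_i(x) = \int_0^1 \frac{\partial f}{\partial x_i}(tx) \, dt,
\]
% with each g_i smooth; f is "divided" by the coordinate functions x_i.
```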