Explore >> Select a destination


You are here: hbfs.wordpress.com
nickhar.wordpress.com (5.8 parsecs away)

1. Low-rank approximation of matrices. Let $A$ be an arbitrary $n \times m$ matrix. We assume $n \leq m$. We consider the problem of approximating $A$ by a low-rank matrix. For example, we could seek to find a rank $s$ matrix $B$ minimizing $\lVert A - B$...
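
The excerpt is cut off before it names a norm, but for the Frobenius norm the optimal rank-$s$ approximation is classical (the Eckart-Young theorem): truncate the SVD. A minimal NumPy sketch of that construction; the linked post may well develop a different or more sophisticated method:

    import numpy as np

    def low_rank_approx(A, s):
        """Best rank-s approximation of A in the Frobenius norm,
        via the truncated SVD (Eckart-Young theorem)."""
        U, sigma, Vt = np.linalg.svd(A, full_matrices=False)
        return U[:, :s] @ np.diag(sigma[:s]) @ Vt[:s, :]

    # Example: compress a random 5 x 8 matrix to rank 2.
    A = np.random.default_rng(0).standard_normal((5, 8))
    B = low_rank_approx(A, 2)
    print(np.linalg.matrix_rank(B))   # 2
    print(np.linalg.norm(A - B))      # Frobenius-norm residual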
rhubbarb.wordpress.com (4.2 parsecs away)

My previous post was written with the help of a few very useful tools: LaTeX mathematical typesetting, the Gummi LaTeX editor, the Python programming language, the PyX Python/LaTeX graphics package, my own PyPyX wrapper around PyX, and the LaTeX2WP script for easy conversion from LaTeX to WordPress HTML
mikespivey.wordpress.com (4.1 parsecs away)

It's fairly well-known, to those who know it, that $\displaystyle \left(\sum_{k=1}^n k \right)^2 = \frac{n^2(n+1)^2}{4} = \sum_{k=1}^n k^3$. In other words, the square of the sum of the first $n$ positive integers equals the sum of the cubes of the first $n$ positive integers. It's probably less well-known that a similar relationship holds...
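
A quick numerical check of the identity quoted above, as a throwaway Python sketch (not taken from any of the linked posts):

    # Check (1 + 2 + ... + n)^2 == 1^3 + 2^3 + ... + n^3 == n^2 (n+1)^2 / 4
    for n in range(1, 101):
        square_of_sum = sum(range(1, n + 1)) ** 2
        sum_of_cubes = sum(k ** 3 for k in range(1, n + 1))
        closed_form = (n * (n + 1) // 2) ** 2   # n(n+1)/2 is always an integer
        assert square_of_sum == sum_of_cubes == closed_form
    print("identity verified for n = 1..100")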
poissonisfish.com (27.4 parsecs away)

My last entry introduces principal component analysis (PCA), one of many unsupervised learning tools. I concluded the post with a demonstration of principal component regression (PCR), which essentially is an ordinary least squares (OLS) fit using the first $k$ principal components (PCs) from the predictors. This brings about many advantages: There is virtually no...
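
The excerpt describes PCR as an OLS fit on the first $k$ principal component scores of the predictors. A minimal Python sketch of that idea, assuming scikit-learn, toy data, and an arbitrary illustrative $k$; the linked post's own demonstration may use different tooling entirely:

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LinearRegression
    from sklearn.pipeline import make_pipeline

    # Toy data: 100 samples, 10 predictors (illustrative only).
    rng = np.random.default_rng(0)
    X = rng.standard_normal((100, 10))
    y = X[:, 0] - 2 * X[:, 1] + rng.standard_normal(100)

    # PCR: project the predictors onto their first k principal components,
    # then fit ordinary least squares on those component scores.
    k = 3   # illustrative choice; typically selected by cross-validation
    pcr = make_pipeline(PCA(n_components=k), LinearRegression())
    pcr.fit(X, y)
    print(pcr.score(X, y))   # R^2 of the PCR fit on the training data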