hadrienj.github.io
grigory.github.io
Discussion of the class on Foundations of Data Science that I am teaching at IU this Fall.
blog.georgeshakan.com
Principal Component Analysis (PCA) is a popular technique in machine learning for dimension reduction. It can be derived from Singular Value Decomposition (SVD), which we will discuss in this post. We will cover the math, an example in Python, and finally some intuition. The Math: SVD asserts that any $m \times d$ matrix …
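The linked post walks through the derivation; as a minimal sketch of the idea (my own illustration, not the post's code, with NumPy, synthetic data, and an arbitrary choice of $k = 2$ components), PCA amounts to centering the data, taking the SVD, and projecting onto the top right singular vectors:

```python
import numpy as np

# Sketch of PCA via SVD: center the data, factor it, and project
# onto the top-k right singular vectors. Data here is synthetic.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))          # 100 samples, 5 features

X_centered = X - X.mean(axis=0)        # PCA requires centered data
U, S, Vt = np.linalg.svd(X_centered, full_matrices=False)

k = 2
components = Vt[:k]                    # top-k principal directions
X_reduced = X_centered @ components.T  # coordinates in the top-k subspace

# Variance explained by each component: squared singular values
# scaled by (n - 1), matching the sample covariance convention.
explained_var = S**2 / (X.shape[0] - 1)
print(X_reduced.shape, explained_var[:k])
```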
matthewmcateer.me
Important mathematical prerequisites for getting into Machine Learning, Deep Learning, or any of the other spaces.
statisticaloddsandends.wordpress.com
If $Z_1, \dots, Z_n$ are independent $\text{Cauchy}(0, 1)$ variables and $w = (w_1, \dots, w_n)$ is a random vector independent of the $Z_i$'s with $w_i \geq 0$ for all $i$ and $w_1 + \dots + w_n = 1$, it is well-known that $\displaystyle\sum_{i=1}^n w_i Z_i$ also has a $\text{Cauchy}(0, 1)$ distribution.
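The linked post gives the proof; as a quick numerical sanity check (a sketch of my own, not the post's code), one can draw the weights once, form the weighted sum many times, and compare sample quantiles against a single $\text{Cauchy}(0, 1)$:

```python
import numpy as np
from scipy import stats

# Simulation sketch: nonnegative weights summing to 1, applied to
# independent Cauchy(0, 1) draws; the weighted sum should again be
# Cauchy(0, 1).
rng = np.random.default_rng(0)
n, reps = 5, 100_000

w = rng.random(n)
w /= w.sum()                             # w_i >= 0, sum to 1

Z = rng.standard_cauchy(size=(reps, n))  # independent Cauchy(0, 1) draws
weighted_sums = Z @ w

# Central quantiles are stable despite the heavy tails, so compare
# them with the Cauchy(0, 1) quantile function.
probs = np.array([0.1, 0.25, 0.5, 0.75, 0.9])
print(np.quantile(weighted_sums, probs))
print(stats.cauchy.ppf(probs))           # should roughly match
```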