jaykmody.com
hadrienj.github.io
In this post, we will see special kinds of matrices and vectors: the diagonal and symmetric matrices, the unit vector, and the concept of orthogonality.
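To make these definitions concrete, here is a minimal numpy sketch (my own illustration, not code from the post): a diagonal matrix, a symmetry check, normalizing a vector to unit length, and an orthogonality test via the dot product.

```python
import numpy as np

# Diagonal matrix: nonzero entries only on the main diagonal.
D = np.diag([1.0, 2.0, 3.0])
print(D)

# Symmetric matrix: equal to its own transpose (A == A.T).
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
print(np.allclose(A, A.T))  # True

# Unit vector: rescale a vector so its norm is 1.
v = np.array([3.0, 4.0])
u = v / np.linalg.norm(v)
print(np.linalg.norm(u))  # 1.0

# Orthogonality: two vectors are orthogonal if their dot product is 0.
x = np.array([1.0, 0.0])
y = np.array([0.0, 1.0])
print(np.dot(x, y))  # 0.0
```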
blog.georgeshakan.com
Principal Component Analysis (PCA) is a popular technique in machine learning for dimension reduction. It can be derived from the Singular Value Decomposition (SVD), which we will discuss in this post. We will cover the math, an example in Python, and finally some intuition. The Math: SVD asserts that any $m \times d$ matrix...
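To illustrate the claim that PCA falls out of SVD, here is a minimal numpy sketch (my own, under the usual assumption that the data matrix is mean-centered first): the right singular vectors are the principal directions, and the squared singular values give the explained variances.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))  # 100 samples, 5 features

# Center the data: PCA operates on mean-centered columns.
Xc = X - X.mean(axis=0)

# SVD of the centered data: Xc = U @ diag(S) @ Vt.
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)

# Rows of Vt are the principal directions; the squared singular
# values, scaled by (n - 1), are the explained variances.
explained_variance = S**2 / (Xc.shape[0] - 1)

# Project onto the top 2 principal components.
X_reduced = Xc @ Vt[:2].T  # equivalently U[:, :2] * S[:2]
print(X_reduced.shape)  # (100, 2)
```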
sebastianraschka.com
I'm Sebastian: a machine learning & AI researcher, programmer, and author. As a Staff Research Engineer at Lightning AI, I focus on the intersection of AI research, software development, and large language models (LLMs).
jaketae.github.io
So far on this blog, we have looked at the mathematics behind distributions, most notably the binomial, Poisson, and gamma, with a little bit of the exponential. These distributions are interesting in and of themselves, but their true beauty shines through when we analyze them under the light of Bayesian inference. In today's post, we first develop an intuition for conditional probabilities to derive Bayes' theorem. From there, we motivate the method of Bayesian inference as a means of understanding probability.
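For reference, the theorem the post derives can be stated in one line; it follows directly from the definition of conditional probability, $P(A \mid B) = P(A \cap B)/P(B)$:

$$P(A \mid B) = \frac{P(B \mid A)\,P(A)}{P(B)},$$

since $P(A \mid B)\,P(B) = P(A \cap B) = P(B \mid A)\,P(A)$.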