nhigham.com

hadrienj.github.io
In this post, we will see special kinds of matrices and vectors: the diagonal and symmetric matrices, the unit vector, and the concept of orthogonality.

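The concepts named in this excerpt can be illustrated in a few lines of NumPy (the particular vectors and matrices below are just illustrative values, not taken from the post):

```python
import numpy as np

# Two vectors are orthogonal when their dot product is zero.
u = np.array([1.0, 2.0])
v = np.array([-2.0, 1.0])
print(np.dot(u, v))  # 0.0

# A unit vector has norm 1; dividing by the norm produces one.
v_hat = v / np.linalg.norm(v)
print(np.linalg.norm(v_hat))  # ~1.0

# A symmetric matrix equals its own transpose.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
print(np.array_equal(A, A.T))  # True

# A diagonal matrix has nonzero entries only on its main diagonal.
D = np.diag([4.0, 5.0])
print(D)
```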
stephenmalina.com
Matrix Potpourri: As part of reviewing Linear Algebra for my Machine Learning class, I've noticed there's a bunch of matrix terminology that I didn't encounter during my proof-based self-study of LA from Linear Algebra Done Right. This post is mostly intended to consolidate my own understanding and to act as a reference to future me, but if it also helps others in a similar position, that's even better!

francisbach.com
[AI summary] This article explores the properties of matrix relative entropy and its convexity, linking it to machine learning and information theory. It discusses the use of positive definite matrices in various contexts, including concentration inequalities and kernel methods. The article also includes a lemma on matrix cumulant generating functions and its proof, as well as references to relevant literature.

almostsuremath.com
I start these notes on stochastic calculus with the definition of a continuous time stochastic process. Very simply, a stochastic process is a collection of random variables $\{X_t\}_{t\ge 0}$ defined on a probability space $(\Omega,\mathcal{F},\mathbb{P})$. That is, for each time $t\ge 0$, $\omega\mapsto X_t(\omega)$ is a measurable function from...
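The definition in this excerpt can be made concrete by sampling one realization of a simple discrete-time stochastic process, a symmetric random walk; the step count and seed below are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# A stochastic process assigns a random variable X_t to each time t.
# One call to this function draws a single sample path
# omega -> (X_0(omega), X_1(omega), ..., X_n(omega)).
def sample_random_walk(n_steps):
    steps = rng.choice([-1, 1], size=n_steps)          # i.i.d. +/-1 increments
    return np.concatenate([[0], np.cumsum(steps)])     # start at X_0 = 0

path = sample_random_walk(10)
print(path)       # one realization of the process
print(len(path))  # 11 values: X_0 through X_10
```

Each run (with a different seed) draws a different path, which is exactly the point of the definition: fixing $\omega$ fixes the whole trajectory, while fixing $t$ gives a single random variable.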