Explore >> Select a destination


You are here: hadrienj.github.io

stephenmalina.com
1.7 parsecs away

Matrix Potpourri: As part of reviewing Linear Algebra for my Machine Learning class, I've noticed there's a bunch of matrix terminology that I didn't encounter during my proof-based self-study of LA from Linear Algebra Done Right. This post is mostly intended to consolidate my own understanding and to act as a reference for future me, but if it also helps others in a similar position, that's even better!
thenumb.at
2.7 parsecs away

[AI summary] This text provides an in-depth exploration of how functions can be treated as vectors, particularly in the context of signal and geometry processing. It discusses the representation of functions as infinite-dimensional vectors, the use of Fourier transforms in various domains (such as 1D, spherical, and mesh-based), and the application of linear algebra to functions for tasks like compression and smoothing. The text also touches on the mathematical foundations of these concepts, including the Laplace operator, eigenfunctions, and orthonormal bases. It concludes with a list of further reading topics and acknowledges the contributions of reviewers.
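The functions-as-vectors idea the summary describes can be sketched concretely: sample a function on a grid so it becomes a finite-dimensional vector, expand it in the (orthogonal) discrete Fourier basis, and truncate to the lowest frequencies for compression or smoothing. This is a minimal illustration, not code from the linked page; the cutoff `K` and the square-wave test function are arbitrary choices for the demo.

```python
import numpy as np

# Sample a function on [0, 2*pi): as a length-N vector it lives in R^N,
# a finite-dimensional stand-in for the infinite-dimensional function space.
N = 256
t = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
f = np.sign(np.sin(t))  # square wave: lots of high-frequency content

# Expand in the discrete Fourier basis, then "compress"/smooth by
# keeping only the K lowest-frequency coefficients (K is a demo choice).
coeffs = np.fft.rfft(f)
K = 8
truncated = np.zeros_like(coeffs)
truncated[:K] = coeffs[:K]
f_smooth = np.fft.irfft(truncated, n=N)

# Because the basis is orthogonal, dropping coefficients is an orthogonal
# projection, so this truncation is the least-squares optimal smoothing
# among all reconstructions using those K frequencies.
err = np.linalg.norm(f - f_smooth) / np.linalg.norm(f)
```

For the square wave, the kept band captures most of the signal's energy, so the relative error `err` is well below 1 even with only 8 coefficients retained.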
nhigham.com
2.1 parsecs away

In linear algebra terms, a correlation matrix is a symmetric positive semidefinite matrix with unit diagonal. In other words, it is a symmetric matrix with ones on the diagonal whose eigenvalues are all nonnegative. The term comes from statistics. If $x_1, x_2, \dots, x_n$ are column vectors with $m$ elements, each vector containing...
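The three defining properties in that snippet are easy to check numerically. A minimal sketch (random data is a stand-in, not from the linked post): build a correlation matrix from sampled vectors and verify symmetry, unit diagonal, and nonnegative eigenvalues.

```python
import numpy as np

# Hypothetical data: n = 5 vectors, each with m = 100 elements.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 5))

# np.corrcoef treats rows as variables, so pass the transpose.
C = np.corrcoef(X.T)

# Defining properties of a correlation matrix:
assert np.allclose(C, C.T)            # symmetric
assert np.allclose(np.diag(C), 1.0)   # unit diagonal
eigvals = np.linalg.eigvalsh(C)
assert np.all(eigvals >= -1e-10)      # positive semidefinite (up to roundoff)
```

The small `-1e-10` tolerance allows for floating-point roundoff: eigenvalues of a PSD matrix computed in finite precision can come out very slightly negative.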
www.v7labs.com
11.6 parsecs away

Autoencoders are a type of neural network that can be used for unsupervised learning. Explore different types of autoencoders and learn how they work.