Explore >> Select a destination


You are here

logosconcarne.com
| | hadrienj.github.io
11.5 parsecs away

Travel
| | In this post, we will see special kinds of matrices and vectors: the diagonal and symmetric matrices, the unit vector, and the concept of orthogonality.
| | thenumb.at
13.7 parsecs away

Travel
| | [AI summary] This text provides an in-depth exploration of how functions can be treated as vectors, particularly in the context of signal and geometry processing. It discusses the representation of functions as infinite-dimensional vectors, the use of Fourier transforms in various domains (such as 1D, spherical, and mesh-based), and the application of linear algebra to functions for tasks like compression and smoothing. The text also touches on the mathematical foundations of these concepts, including the Laplace operator, eigenfunctions, and orthonormal bases. It concludes with a list of further reading topics and acknowledges the contributions of reviewers.
| | matthewmcateer.me
18.4 parsecs away

Travel
| | Important mathematical prerequisites for getting into Machine Learning, Deep Learning, or any of the other related fields.
| | vxlabs.com
24.1 parsecs away

Travel
| | I have recently become fascinated with (Variational) Autoencoders and with PyTorch. Kevin Frans has a beautiful blog post online explaining variational autoencoders, with examples in TensorFlow and, importantly, with cat pictures. Jaan Altosaar's blog post takes an even deeper look at VAEs from both the deep learning perspective and the perspective of graphical models. Both of these posts, as well as Diederik Kingma's original 2014 paper Auto-Encoding Variational Bayes, are more than worth your time.