randorithms.com
www.jeremykun.com
The singular value decomposition (SVD) of a matrix is a fundamental tool in computer science, data analysis, and statistics. It's used for all kinds of applications, from regression and prediction to finding approximate solutions to optimization problems. In this series of two posts we'll motivate, define, compute, and use the singular value decomposition to analyze some data. (Jump to the second post) I want to spend the first post entirely on motivation and background.
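As a minimal sketch of what the decomposition looks like in practice (not from the post itself, and assuming NumPy is available), `numpy.linalg.svd` factors a matrix into its singular vectors and singular values:

```python
import numpy as np

# A small data matrix: rows as samples, columns as features.
A = np.array([[3.0, 1.0],
              [1.0, 3.0],
              [1.0, 1.0]])

# Thin SVD: A = U @ diag(s) @ Vt, with s the singular values
# in decreasing order.
U, s, Vt = np.linalg.svd(A, full_matrices=False)

# Multiplying the factors back together recovers A.
A_rebuilt = U @ np.diag(s) @ Vt
print(np.allclose(A, A_rebuilt))  # True
```

Truncating `s` to its largest entries is what turns this factorization into the low-rank approximations the post goes on to motivate.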
www.jeremykun.com
The standard inner product of two vectors has some nice geometric properties. Given two vectors $x, y \in \mathbb{R}^n$, where by $x_i$ I mean the $i$-th coordinate of $x$, the standard inner product (which I will interchangeably call the dot product) is defined by the formula $$\displaystyle \langle x, y \rangle = x_1 y_1 + \dots + x_n y_n$$ This formula, simple as it is, produces a lot of interesting geometry.
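The formula above translates directly into code; as a minimal sketch (the function name `dot` is my own, not from the post):

```python
def dot(x, y):
    # <x, y> = x_1*y_1 + ... + x_n*y_n
    assert len(x) == len(y), "vectors must have the same dimension"
    return sum(xi * yi for xi, yi in zip(x, y))

# 1*4 + 2*5 + 3*6 = 32
print(dot([1, 2, 3], [4, 5, 6]))  # 32

# One piece of the geometry: <x, x> is the squared length of x.
print(dot([3, 4], [3, 4]))  # 25, i.e. length 5 squared
```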
yasha.solutions
A loss function, also known as a cost function or objective function, is a critical component in training machine learning models, particularly in neural networks and deep learning...
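The excerpt is cut short, but one standard concrete instance of a loss function is mean squared error; a minimal sketch (the example numbers are my own, not from the post):

```python
def mse(predictions, targets):
    # Mean squared error: the average squared difference between
    # the model's predictions and the true target values.
    assert len(predictions) == len(targets)
    return sum((p - t) ** 2 for p, t in zip(predictions, targets)) / len(predictions)

# Residuals are -0.5, 0.5, 0.0, so the loss is (0.25 + 0.25 + 0) / 3.
print(mse([2.5, 0.0, 2.0], [3.0, -0.5, 2.0]))  # ~0.1667
```

Training then amounts to adjusting the model's parameters to drive a loss like this one toward its minimum.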
blog.demofox.org
Lately I've been eyeball deep in noise, ordered dithering and related topics, and have been learning some really interesting things. As the information coalesces it'll become apparent whether there is going to be a huge mega post coming, or if there will be several smaller ones. In the meantime, I wanted to share this bite...