fa.bianp.net
blog.omega-prime.co.uk
The most fundamental technique in statistical learning is ordinary least squares (OLS) regression. If we have a vector of observations \(y\) and a matrix of features associated with each observation \(X\), then we assume the observations are a linear function of the features plus some (iid) random noise, \(\epsilon\): ...
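The linear model described in the snippet, presumably \(y = X\beta + \epsilon\), can be sketched with a minimal OLS fit; the data, coefficient names, and noise scale below are illustrative assumptions, not taken from the linked post:

```python
# Minimal OLS sketch (illustrative): fit y ≈ X @ beta by ordinary least
# squares and check that the estimate recovers known coefficients.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                     # feature matrix (assumed shape)
true_beta = np.array([2.0, -1.0, 0.5])            # ground-truth coefficients (assumed)
y = X @ true_beta + 0.01 * rng.normal(size=200)   # observations with small iid noise

# OLS solution: beta_hat = argmin_beta ||y - X beta||^2
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta_hat)  # close to [2.0, -1.0, 0.5]
```

With noise this small, `lstsq` returns estimates within a few hundredths of the true coefficients; `np.linalg.lstsq` is also numerically safer than forming the normal equations \((X^\top X)^{-1} X^\top y\) explicitly.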
francisbach.com
lucatrevisan.wordpress.com
(This is the sixth in a series of posts on online optimization techniques and their "applications" to complexity theory, combinatorics and pseudorandomness. The plan for this series of posts is to alternate one post explaining a result from the theory of online convex optimization and one post explaining an "application." The first two posts were...
www.philipzucker.com