pablormier.github.io
www.oranlooney.com

Today, let me be vague. No statistics, no algorithms, no proofs. Instead, we're going to go through a series of examples and eyeball a suggestive series of charts, which will imply a certain conclusion without actually proving anything, but which will, I hope, provide useful intuition. The premise is this: for any given problem, there exist learned feature representations which are better than any fixed, human-engineered set of features, even once the cost of the added parameters necessary to learn the new features is taken into account.
afiodorov.github.io

The other day I presented a t-SNE plot to a software engineer. "But what is it?", I was asked. Good question, I thought...
randorithms.com

The Taylor series is a widely-used method to approximate a function, with many applications. Given a function \(y = f(x)\), we can express \(f(x)\) in terms ...
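The idea the excerpt introduces can be sketched in a few lines. This is a minimal illustration of a Taylor-series approximation (of \(e^x\) about 0), not code from the linked post; the function name and term count are my own choices:

```python
import math

def taylor_exp(x, n_terms=10):
    """Approximate e^x by the first n_terms of its Taylor series about 0:
    e^x ~ sum over k from 0 to n_terms-1 of x**k / k!"""
    return sum(x**k / math.factorial(k) for k in range(n_terms))

# With 10 terms the approximation at x = 1 already agrees with
# math.exp(1) to better than 1e-5.
print(taylor_exp(1.0, 10), math.exp(1.0))
```

Adding more terms shrinks the error, since the truncation error is bounded by the first omitted term of the series.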
windowsontheory.org

Previous post: ML theory with bad drawings. Next post: What do neural networks learn and when do they learn it; see also all seminar posts and the course webpage. Lecture video (starts at slide 2 since I hit the record button 30 seconds too late - sorry!) - slides (pdf) - slides (PowerPoint with ink and animation)...