thenumb.at

francisbach.com
www.jeremykun.com

Last time we investigated the naive (which I'll henceforth call "classical") notion of the Fourier transform and its inverse. While the development wasn't quite rigorous, we nevertheless discovered elegant formulas and interesting properties that proved useful in at least solving differential equations. Of course, we wouldn't be following this trail of mathematics if it didn't result in some worthwhile applications to programming. While we'll get there eventually, this primer will take us deeper down the rabbit hole of abstraction.
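For reference, the "elegant formulas" in question are the classical Fourier transform and its inverse, which (up to a choice of normalization convention that may differ from the original post) are typically written as

$$\hat{f}(s) = \int_{-\infty}^{\infty} f(t)\, e^{-2\pi i s t}\, dt, \qquad f(t) = \int_{-\infty}^{\infty} \hat{f}(s)\, e^{2\pi i s t}\, ds.$$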
peterbloem.nl

dustintran.com
One aspect I always enjoy about machine learning is that questions often go back to the basics. The field essentially goes into an existential crisis every dozen years, rethinking our tools and asking foundational questions such as "why neural networks" or "why generative models".1 This was a theme in my conversations during NIPS 2016 last week, where a frequent topic was the advantages of a Bayesian perspective on machine learning. Not surprisingly, this appeared as a big discussion point during the p...