You are here: windowsontheory.org

blog.demofox.org, 11.9 parsecs away
This article explains how these four things fit together and shows some examples of what they are used for. Derivatives: Derivatives are the most fundamental concept in calculus. If you have a function, its derivative tells you how much that function changes at each point. If we start with the function $latex y=x^2-6x+13$, we can...
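The excerpt's example can be checked numerically: for $latex y=x^2-6x+13$ the derivative is $latex 2x-6$. A minimal sketch (an illustration of the idea, not code from the linked post) compares that analytic derivative against a central finite difference:

```python
# Derivative of the excerpt's example function y = x^2 - 6x + 13.

def f(x):
    return x**2 - 6*x + 13

def analytic_derivative(x):
    # d/dx (x^2 - 6x + 13) = 2x - 6
    return 2*x - 6

def numerical_derivative(f, x, h=1e-6):
    # Central finite difference: (f(x+h) - f(x-h)) / (2h)
    return (f(x + h) - f(x - h)) / (2*h)

for x in [0.0, 3.0, 5.0]:
    print(x, analytic_derivative(x), round(numerical_derivative(f, x), 4))
```

At x = 3 both give 0, the minimum of the parabola; for a quadratic, the central difference is exact up to floating-point error.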
akosiorek.github.io, 10.1 parsecs away

Machine learning is all about probability. To train a model, we typically tune its parameters to maximise the probability of the training dataset under the mo...
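The "maximise the probability of the training dataset" idea the excerpt alludes to is maximum-likelihood estimation. A tiny self-contained sketch (illustrative, not taken from the linked post): for coin flips modelled as Bernoulli(p), the log-likelihood is maximised at the sample mean.

```python
import math

def log_likelihood(p, flips):
    # log L(p) = k*log(p) + (n-k)*log(1-p) for k heads out of n flips
    k = sum(flips)
    n = len(flips)
    return k * math.log(p) + (n - k) * math.log(1 - p)

flips = [1, 0, 1, 1, 0, 1, 1, 0]      # 5 heads out of 8 flips
p_hat = sum(flips) / len(flips)        # MLE for Bernoulli: the sample mean, 0.625

# The MLE beats nearby candidate values of p.
for p in [0.4, 0.5, p_hat, 0.7, 0.8]:
    print(p, round(log_likelihood(p, flips), 4))
```

Training a deep model follows the same principle, just with gradient ascent on the log-likelihood instead of a closed-form maximiser.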
jsteinhardt.wordpress.com, 10.6 parsecs away

[Highlights for the busy: de-bunking standard "Bayes is optimal" arguments; frequentist Solomonoff induction; and a description of the online learning framework.] Short summary. This essay makes many points, each of which I think is worth reading, but if you are only going to understand one point I think it should be "Myth 5" below, which...
vxlabs.com, 68.5 parsecs away

I have recently become fascinated with (Variational) Autoencoders and with PyTorch. Kevin Frans has a beautiful blog post online explaining variational autoencoders, with examples in TensorFlow and, importantly, with cat pictures. Jaan Altosaar's blog post takes an even deeper look at VAEs from both the deep learning perspective and the perspective of graphical models. Both of these posts, as well as Diederik Kingma's original 2014 paper Auto-Encoding Variational Bayes, are more than worth your time.