Explore


You are here: emilygorcenski.com
jcarroll.xyz (14.7 parsecs away)

I love small projects that help me learn, especially programming. I'm still learning Julia, and have found myself wanting more "little silly things" I can digest and learn from. A lot of the projects I see in Julia are big mathematical models, and I'm just not ready to dive that deep yet. This series of tweets caught my eye, partly because of the cool animation, but also because of the bite-sized amount of information it conveyed: that interpolation in Julia can be specified so easily, thanks in large part to the language's multiple dispatch design.
adamdrake.com (1.0 parsecs away)

Adam Drake is an advisor to scale-up tech companies. He writes about ML/AI/crypto/data, leadership, and building tech teams.
lispy.wordpress.com (3.1 parsecs away)

I didn't immediately recognize the equation that Steve Knight used in his answer to problem 6 of Project Euler, but it was in fact just the formula for an arithmetic series. This one is actually pretty easy to arrive at intuitively. The story goes that Gauss had one of the meanest school...
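The arithmetic-series formula the excerpt refers to can be sketched in a few lines. This is a minimal illustration in Python of the closed form for 1 + 2 + ... + n, not a reproduction of Steve Knight's solution; the function name is illustrative.

```python
def arithmetic_series_sum(n: int) -> int:
    """Sum of 1 + 2 + ... + n via the closed form n(n+1)/2,
    traditionally attributed to the young Gauss."""
    return n * (n + 1) // 2

# Sanity check against the brute-force loop:
assert arithmetic_series_sum(100) == sum(range(1, 101))  # 5050
```

The intuition is the usual pairing argument: writing the series forwards and backwards and adding term by term gives n pairs that each sum to n + 1, so twice the total is n(n + 1).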
blog.fastforwardlabs.com (37.6 parsecs away)

The Variational Autoencoder (VAE) neatly synthesizes unsupervised deep learning and variational Bayesian methods into one sleek package. In Part I of this series, we introduced the theory and intuition behind the VAE, an exciting development in machine learning for combined generative modeling and inference: "machines that imagine and reason." To recap: VAEs put a probabilistic spin on the basic autoencoder paradigm, treating their inputs, hidden representations, and reconstructed outputs as probabilistic ...