www.jeremykun.com
We are about to begin a series where we analyze large corpora of English words. In particular, we will use a probabilistic analysis of Google's ngrams to solve various tasks such as spelling correction, word segmentation, on-line typing prediction, and decoding substitution ciphers. This will hopefully take us on a wonderful journey through elementary probability, dynamic programming algorithms, and optimization. As usual, the code implemented in this post is available from this blog's Github page, and w...
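The excerpt names word segmentation among the tasks attacked with probability and dynamic programming. Here is a minimal sketch of that idea, not the post's actual code: the word counts and names below are hypothetical stand-ins for Google's ngram data.

```python
import math
from functools import lru_cache

# Hypothetical unigram counts standing in for Google's ngram data.
WORD_COUNTS = {"small": 550_000, "talk": 700_000, "smalltalk": 9_000,
               "is": 4_000_000, "fun": 300_000}
TOTAL = sum(WORD_COUNTS.values())

def log_prob(word):
    # Unseen words get a tiny smoothed count that shrinks with length,
    # so the segmenter prefers known words but never takes log(0).
    count = WORD_COUNTS.get(word, 0.01 / 10 ** len(word))
    return math.log(count / TOTAL)

@lru_cache(maxsize=None)
def segment(text):
    """Return (words, score): the most probable split of `text`,
    maximizing summed unigram log-probabilities via memoized recursion."""
    if not text:
        return (), 0.0
    best_words, best_score = None, -math.inf
    for i in range(1, len(text) + 1):
        rest_words, rest_score = segment(text[i:])
        score = log_prob(text[:i]) + rest_score
        if score > best_score:
            best_words, best_score = (text[:i],) + rest_words, score
    return best_words, best_score

print(segment("smalltalkisfun")[0])  # -> ('small', 'talk', 'is', 'fun')
```

Under these made-up counts, "small talk" outscores the single word "smalltalk" because the product of the two common words' probabilities exceeds the rare compound's probability.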
www.jeremykun.com
When addressing the question of what it means for an algorithm to learn, one can imagine many different models, and there are quite a few. This invariably raises the question of which models are "the same" and which are "different," along with a precise description of how we're comparing models. We've seen one learning model so far, called Probably Approximately Correct (PAC), which espouses the following answer to the learning question:
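The excerpt cuts off at the colon. For reference, the standard textbook PAC criterion it alludes to (not necessarily the post's exact wording) is:

```latex
% A concept class C is PAC-learnable if there is an algorithm A such that
% for every target concept c in C, every distribution D, and every
% eps, delta in (0,1), given m = poly(1/eps, 1/delta) i.i.d. samples S
% drawn from D and labeled by c, the output hypothesis satisfies
\Pr_{S \sim D^m}\!\left[\operatorname{err}_D(A(S)) \le \varepsilon\right] \ge 1 - \delta,
\qquad \text{where } \operatorname{err}_D(h) = \Pr_{x \sim D}\left[h(x) \ne c(x)\right].
```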
www.chrisstucchio.com
So I've just launched my new startup, BeerBnB. It's a hip little site matching beer drinkers with specialty microbreweries - AirBnB for drinkers, or maybe eBay for brewers. My marketer growth hacker has gotten some early publicity by advertising in the bathroom of a few bars - the result was 794 unique ...
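The excerpt truncates at the visitor count, but it is plainly setting up a conversion-rate comparison. As a rough illustration of one standard way to run that comparison (a Beta-posterior Monte Carlo check, not necessarily the post's own method), here is a sketch; the 794 uniques come from the excerpt, and every other number is made up.

```python
import random

# 794 unique visitors from the excerpt; all conversion counts and the
# second campaign are hypothetical.
visitors_a, conversions_a = 794, 12    # bathroom-ad campaign
visitors_b, conversions_b = 1203, 14   # hypothetical comparison campaign

def posterior_sample(conversions, visitors):
    # Beta(1 + conversions, 1 + misses): the posterior for a conversion
    # rate under a uniform prior on [0, 1].
    return random.betavariate(1 + conversions, 1 + visitors - conversions)

# Monte Carlo estimate of P(rate_A > rate_B) given the observed counts.
trials = 100_000
wins = sum(
    posterior_sample(conversions_a, visitors_a)
    > posterior_sample(conversions_b, visitors_b)
    for _ in range(trials)
)
print(f"P(campaign A converts better) ~ {wins / trials:.3f}")
```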
www.approximatelycorrect.com
By Zachary C. Lipton* & Jacob Steinhardt* (*equal authorship)
Originally presented at ICML 2018: Machine Learning Debates [arXiv link]
Published in Communications of the ACM

1 Introduction

Collectively, machine learning (ML) researchers are engaged in