burakkanber.com

tomhume.org
I don't remember how I came across it, but this is one of the most exciting papers I've read recently. The authors train a neural network that tries to identify the next item in a sequence of MNIST samples, presented in digit order. The interesting part is that when they include a proxy for energy usage in the loss function (i.e. train it to be more energy-efficient), the resulting network seems to exhibit the characteristics of predictive coding: some units seem to be responsible for predictions, others for encoding prediction error.
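To make the idea concrete, here is a rough sketch of that kind of loss in PyTorch: a next-step predictor over MNIST images whose training loss adds a penalty on hidden-unit activity as the energy proxy. The network shape, the mean-absolute-activation penalty, and the energy_weight coefficient are illustrative assumptions on my part, not the paper's exact formulation.

    # Sketch only: next-step prediction with an activity penalty as an energy proxy.
    # Architecture, penalty form and coefficient are illustrative, not the paper's.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class NextDigitPredictor(nn.Module):
        def __init__(self, hidden=256):
            super().__init__()
            self.encoder = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, hidden), nn.ReLU())
            self.decoder = nn.Linear(hidden, 28 * 28)  # predicts the next image in the sequence

        def forward(self, x):
            h = self.encoder(x)              # hidden activity we want to keep "cheap"
            return self.decoder(h), h

    model = NextDigitPredictor()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    energy_weight = 1e-3                     # trade-off between prediction accuracy and "energy"

    def training_step(current_img, next_img):
        pred, h = model(current_img)
        prediction_loss = F.mse_loss(pred, next_img.flatten(1))
        energy_proxy = h.abs().mean()        # energy proxy: mean absolute unit activation
        loss = prediction_loss + energy_weight * energy_proxy
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()

The intuition, as I read the paper, is that once activity itself is costly, the network only "spends" units where they reduce prediction error, which is where the predictive-coding-like division of labour reportedly emerges.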
markodenic.com
Free programming books, algorithms, public APIs, and much more.

sander.ai
Slides for my talk at the Deep Learning London meetup

zserge.com
Neural network and deep learning introduction for those who skipped the math class but want to follow the trend