coen.needell.org
In my last post on computer vision and memorability, I looked at an already existing model and started experimenting with variations on that architecture. The most successful attempts were those that used Residual Neural Networks, a type of deep neural network built to mimic specific visual structures in the brain. ResMem, one of the new models, uses a variation on ResNet in its architecture to leverage that optical identification power towards memorability estimation. M3M, another new model, ex...
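The pattern described there is straightforward to sketch: take a pretrained ResNet as a feature extractor and swap its classification head for a regression output that produces a memorability score. The snippet below is a minimal, hypothetical illustration of that idea in PyTorch; it is not the actual ResMem code.

```python
# Hypothetical sketch: a ResNet backbone reused for memorability regression.
import torch
import torchvision

class MemorabilityRegressor(torch.nn.Module):
    def __init__(self):
        super().__init__()
        # Pretrained ResNet supplies the visual features.
        self.backbone = torchvision.models.resnet50(weights="IMAGENET1K_V1")
        # Replace the 1000-class head with a single score in [0, 1].
        self.backbone.fc = torch.nn.Sequential(
            torch.nn.Linear(self.backbone.fc.in_features, 1),
            torch.nn.Sigmoid(),
        )

    def forward(self, images: torch.Tensor) -> torch.Tensor:
        return self.backbone(images).squeeze(1)

model = MemorabilityRegressor().eval()
print(model(torch.randn(2, 3, 224, 224)))  # two memorability scores in [0, 1]
```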
sander.ai
Slides for my talk at the Deep Learning London meetup
coornail.net
Neural networks are a powerful tool in machine learning that can be trained to perform a wide range of tasks, from image classification to natural language processing. In this blog post, we'll explore how to teach a neural network to add together two numbers. You can also think of this article as a tutorial for TensorFlow.
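As a rough idea of what such a tutorial covers, here is a minimal, hypothetical Keras sketch that trains a tiny network to approximate addition; it is not the code from the linked post.

```python
# Hypothetical sketch: fit a small dense network to the function (a, b) -> a + b.
import numpy as np
import tensorflow as tf

# Synthetic training data: random pairs and their sums.
x = np.random.uniform(-10, 10, size=(10000, 2)).astype("float32")
y = x.sum(axis=1)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(2,)),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(1),  # single linear output: the predicted sum
])
model.compile(optimizer="adam", loss="mse")
model.fit(x, y, epochs=10, batch_size=64, verbose=0)

print(model.predict(np.array([[3.0, 4.0]], dtype="float32")))  # close to 7.0
```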
wtfleming.github.io
[AI summary] This post discusses achieving 99.1% accuracy in binary image classification of cats and dogs using an ensemble of ResNet models with PyTorch.
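A common way to build such an ensemble is to average the class probabilities of several independently trained ResNets. The sketch below illustrates that averaging step in PyTorch with stand-in ImageNet models; it is hypothetical and not the post's actual code or checkpoints (the post fine-tunes its models to the two cat/dog classes).

```python
# Hypothetical sketch: ensemble ResNets by averaging their predicted probabilities.
import torch
import torchvision

# Two pretrained ResNet variants standing in for the post's fine-tuned models.
models = [
    torchvision.models.resnet18(weights="IMAGENET1K_V1"),
    torchvision.models.resnet34(weights="IMAGENET1K_V1"),
]
for m in models:
    m.eval()

def ensemble_predict(batch: torch.Tensor) -> torch.Tensor:
    """Average class probabilities across all models in the ensemble."""
    with torch.no_grad():
        probs = [torch.softmax(m(batch), dim=1) for m in models]
    return torch.stack(probs).mean(dim=0)

# Fake batch of 4 RGB images, 224x224, assumed already normalized.
print(ensemble_predict(torch.randn(4, 3, 224, 224)).shape)  # torch.Size([4, 1000])
```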