tiao.io
jxmo.io
A primer on variational autoencoders (VAEs) culminating in a PyTorch implementation of a VAE with discrete latents.
blog.keras.io
vxlabs.com
I have recently become fascinated with (Variational) Autoencoders and with PyTorch. Kevin Frans has a beautiful blog post online explaining variational autoencoders, with examples in TensorFlow and, importantly, with cat pictures. Jaan Altosaar's blog post takes an even deeper look at VAEs from both the deep learning perspective and the perspective of graphical models. Both of these posts, as well as Diederik Kingma's original 2014 paper Auto-Encoding Variational Bayes, are more than worth your time.
yasha.solutions
A loss function, also known as a cost function or objective function, is a critical component in training machine learning models, particularly in neural networks and deep learning...
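To make the loss-function entry concrete in the context of the VAE links above, here is a minimal sketch (not taken from any of the linked posts) of the objective typically used to train a VAE in PyTorch: a reconstruction term plus a KL-divergence term, assuming Gaussian latents and a Bernoulli decoder; the function and argument names are illustrative only.

import torch
import torch.nn.functional as F

def vae_loss(recon_x, x, mu, logvar):
    # Negative ELBO for a VAE with Gaussian latents and a Bernoulli decoder
    # (the objective from Auto-Encoding Variational Bayes); names are
    # illustrative, not taken from any of the linked implementations.
    # Reconstruction term: how well the decoder reproduces the input.
    recon = F.binary_cross_entropy(recon_x, x, reduction="sum")
    # KL divergence between q(z|x) = N(mu, sigma^2) and the prior N(0, I).
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl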