graphneural.network

blog.keras.io
[AI summary] The post covers several types of autoencoders and their applications, starting with a basic autoencoder and moving on to sparse, deep, and sequence-to-sequence variants. It then covers variational autoencoders (VAEs), explaining their structure and training process. Code examples accompany each type, with TensorBoard suggested for visualizing training. The VAE section shows how to generate new data samples and visualize the latent space, and the post closes with references and a note on possible further topics.
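
As context for that entry, here is a minimal sketch of a basic dense autoencoder of the kind the post builds, assuming MNIST images flattened to 784 dimensions; the layer sizes and training settings are illustrative, not the post's exact code.

    # Minimal dense autoencoder: compress 784-dim inputs to 32 dims and reconstruct.
    from tensorflow import keras
    from tensorflow.keras import layers

    input_dim = 784   # flattened 28x28 MNIST images
    latent_dim = 32   # size of the compressed representation (assumed)

    inputs = keras.Input(shape=(input_dim,))
    encoded = layers.Dense(latent_dim, activation="relu")(inputs)     # encoder
    decoded = layers.Dense(input_dim, activation="sigmoid")(encoded)  # decoder

    autoencoder = keras.Model(inputs, decoded)
    autoencoder.compile(optimizer="adam", loss="binary_crossentropy")

    # An autoencoder is trained to reproduce its own input.
    (x_train, _), (x_test, _) = keras.datasets.mnist.load_data()
    x_train = x_train.reshape(-1, input_dim).astype("float32") / 255.0
    x_test = x_test.reshape(-1, input_dim).astype("float32") / 255.0
    autoencoder.fit(x_train, x_train, epochs=10, batch_size=256,
                    validation_data=(x_test, x_test))
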
michael-lewis.com
This is a short summary of some of the terminology used in machine learning, with an emphasis on neural networks. I've put it together primarily to help my own understanding, phrasing it largely in non-mathematical terms. As such, it may be of use to others who come from more of a programming than a mathematical background.

www.chrisritchie.org
Examples of Keras merging layers, convolutional layers, and activation functions applied in the L, RGB, HSV, and YCbCr color spaces.
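
As a pointer to the API that entry refers to, a small two-branch Keras model joined with the Add and Concatenate merging layers; the input shape and layer sizes here are assumptions for the sketch, not taken from the site.

    # Two convolutional branches over the same input, joined by merging layers.
    from tensorflow import keras
    from tensorflow.keras import layers

    inp = keras.Input(shape=(64, 64, 3))  # e.g. an RGB image (shape assumed)
    a = layers.Conv2D(16, 3, padding="same", activation="relu")(inp)
    b = layers.Conv2D(16, 5, padding="same", activation="relu")(inp)

    summed = layers.Add()([a, b])               # element-wise sum of the branches
    merged = layers.Concatenate()([summed, a])  # stack feature maps along channels

    out = layers.Conv2D(1, 1, activation="sigmoid")(merged)
    model = keras.Model(inp, out)
    model.summary()
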
golb.hplar.ch
[AI summary] The article describes the implementation of a neural network in Java and JavaScript for digit recognition using the MNIST dataset, covering the forward and backpropagation processes.
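
The article's code is in Java and JavaScript; as a language-neutral companion, here is a minimal NumPy sketch of one forward and one backward pass through a one-hidden-layer network. The 784-100-10 sizes, sigmoid activations, and squared-error loss are assumptions for illustration, not necessarily the article's exact setup.

    # One forward and one backward pass for a 784 -> 100 -> 10 network in NumPy.
    # All sizes, activations, and the loss are illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(0)
    W1 = rng.normal(0, 0.1, (784, 100))
    b1 = np.zeros(100)
    W2 = rng.normal(0, 0.1, (100, 10))
    b2 = np.zeros(10)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    x = rng.random(784)  # stand-in for one flattened 28x28 digit image
    t = np.zeros(10)
    t[3] = 1.0           # one-hot target, e.g. the digit "3"

    # Forward pass: affine transform followed by a sigmoid at each layer.
    h = sigmoid(x @ W1 + b1)
    y = sigmoid(h @ W2 + b2)

    # Backward pass for squared error E = 0.5 * ||y - t||^2,
    # using sigmoid'(z) = s * (1 - s).
    delta2 = (y - t) * y * (1 - y)           # output-layer error signal
    delta1 = (delta2 @ W2.T) * h * (1 - h)   # error propagated to the hidden layer

    # Gradient-descent update of weights and biases.
    lr = 0.1
    W2 -= lr * np.outer(h, delta2)
    b2 -= lr * delta2
    W1 -= lr * np.outer(x, delta1)
    b1 -= lr * delta1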