deepai.org
A field of computer science that aims to teach computers how to learn and act without being explicitly programmed.
scorpil.com
In Part One of the "Understanding Generative AI" series, we delved into Tokenization - the process of dividing text into tokens, which serve as the fundamental units of information for neural networks. These tokens are crucial in shaping how AI interprets and processes language. Building upon this foundational knowledge, we are now ready to explore Neural Networks - the cornerstone technology underpinning all Artificial Intelligence research. A Short Look into the History: Neural Networks, as a technology, have their roots in the 1940s and 1950s.
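The tokenization step that excerpt describes is easy to see in miniature. Below is a toy Python sketch, an illustration only and not the subword scheme any production model uses: it splits a sentence into word and punctuation tokens and maps each to an integer ID, the form a neural network actually consumes. Real systems learn subword vocabularies (e.g. BPE) rather than splitting on whitespace.

```python
# Toy tokenizer: split text into word/punctuation tokens, then map each
# token to an integer ID. Illustrative only; real models use learned
# subword vocabularies such as BPE.
import re

def tokenize(text: str) -> list[str]:
    # \w+ grabs runs of word characters; [^\w\s] keeps punctuation
    # as separate single-character tokens.
    return re.findall(r"\w+|[^\w\s]", text)

tokens = tokenize("Understanding Generative AI, part one.")
vocab = {tok: i for i, tok in enumerate(sorted(set(tokens)))}
ids = [vocab[t] for t in tokens]

print(tokens)  # ['Understanding', 'Generative', 'AI', ',', 'part', 'one', '.']
print(ids)     # the integer IDs the network actually sees
```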
www.markhw.com
[AI summary] The blog post discusses modeling variance in data using the gamlss package in R, focusing on the author's film ratings over time. It shows that the standard deviation of the ratings increases with a film's release year, reflecting the author's movie selection habits: older films have higher average ratings and lower variability, while newer films have lower average ratings and higher variability. The post emphasizes the importance of considering variance in social phenomena and provides practical examples in R for data visualization and statistical modeling.
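The location-scale idea that summary describes translates readily outside R. The sketch below is a Python analogue, not the post's gamlss code: it fits, by maximum likelihood, a Gaussian whose mean and log standard deviation are both linear in release year, on synthetic placeholder data.

```python
# Python analogue of the gamlss idea: model the mean AND the spread of
# ratings as functions of release year. Data are synthetic placeholders.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
year = rng.uniform(1950, 2020, size=300)
x = (year - year.mean()) / year.std()  # standardized release year
# Simulate the pattern the post reports: mean falls, spread grows with year.
rating = 7.0 - 0.5 * x + rng.normal(0.0, np.exp(-0.3 + 0.4 * x))

def neg_log_lik(params):
    a, b, c, d = params
    mu = a + b * x             # mean model
    sigma = np.exp(c + d * x)  # log link keeps sigma positive
    return 0.5 * np.sum(np.log(2 * np.pi * sigma**2) + ((rating - mu) / sigma) ** 2)

fit = minimize(neg_log_lik, x0=[7.0, 0.0, 0.0, 0.0])
a, b, c, d = fit.x
print(f"mean slope: {b:.3f}   log-sd slope: {d:.3f} (> 0: spread grows with year)")
```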
vxlabs.com
I have recently become fascinated with (Variational) Autoencoders and with PyTorch. Kevin Frans has a beautiful blog post online explaining variational autoencoders, with examples in TensorFlow and, importantly, with cat pictures. Jaan Altosaar's blog post takes an even deeper look at VAEs from both the deep learning perspective and the perspective of graphical models. Both of these posts, as well as Diederik Kingma's original 2014 paper Auto-Encoding Variational Bayes, are more than worth your time.
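For readers who want the skeleton before diving into those posts, here is a minimal PyTorch sketch of the pattern they all describe: an encoder that predicts a Gaussian q(z|x), the reparameterization trick, and a loss combining reconstruction error with a KL term. The dimensions and the smoke test are illustrative assumptions, not details taken from the linked material.

```python
# Minimal VAE: encoder -> (mu, logvar), reparameterized sample, decoder.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    def __init__(self, x_dim=784, h_dim=256, z_dim=20):
        super().__init__()
        self.enc = nn.Linear(x_dim, h_dim)
        self.mu = nn.Linear(h_dim, z_dim)      # mean of q(z|x)
        self.logvar = nn.Linear(h_dim, z_dim)  # log-variance of q(z|x)
        self.dec1 = nn.Linear(z_dim, h_dim)
        self.dec2 = nn.Linear(h_dim, x_dim)

    def forward(self, x):
        h = F.relu(self.enc(x))
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick: z = mu + sigma * eps, so gradients
        # flow through mu and logvar despite the sampling step.
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        x_hat = torch.sigmoid(self.dec2(F.relu(self.dec1(z))))
        return x_hat, mu, logvar

def vae_loss(x, x_hat, mu, logvar):
    # Bernoulli reconstruction term plus analytic KL(q(z|x) || N(0, I)).
    recon = F.binary_cross_entropy(x_hat, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl

# Smoke test on random "images" scaled to [0, 1].
model = VAE()
x = torch.rand(8, 784)
x_hat, mu, logvar = model(x)
print(vae_loss(x, x_hat, mu, logvar).item())
```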