bdtechtalks.com
The transformer model has become one of the most prominent advances in deep learning and deep neural networks.

jonathanbgn.com
Since the deep learning wave started in the early 2010s, there has been much hype and many disappointments. I feel that a big part of this is due to high expectations driven by research progress that does not translate so well into real-world applications. Hopefully, self-supervised learning can close the gap between these two worlds. The paradigm is not new, but it has seen a resurgence of interest over the past few years thanks to widely publicized successes like GPT-3 and BERT. Many AI pundits h...
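
To make the paradigm concrete, here is a toy sketch of the masked-prediction objective behind models like BERT: hide part of an unlabeled input and train a network to reconstruct it from context. The dimensions, vocabulary, and mask id here are illustrative assumptions, not taken from any particular model.

```python
# Toy masked-prediction objective: the "labels" are pieces of the input
# itself, so no human annotation is needed (the point of self-supervision).
import torch
import torch.nn as nn

vocab_size, embed_dim, mask_id = 1000, 64, 0  # illustrative values

model = nn.Sequential(
    nn.Embedding(vocab_size, embed_dim),
    nn.TransformerEncoder(
        nn.TransformerEncoderLayer(embed_dim, nhead=4, batch_first=True),
        num_layers=2,
    ),
    nn.Linear(embed_dim, vocab_size),
)

tokens = torch.randint(1, vocab_size, (8, 16))  # a batch of unlabeled "text"
inputs = tokens.clone()
mask = torch.rand(tokens.shape) < 0.15          # hide ~15% of positions
inputs[mask] = mask_id

logits = model(inputs)
# The loss is computed only at the positions that were masked out.
loss = nn.functional.cross_entropy(logits[mask], tokens[mask])
loss.backward()
```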

www.altexsoft.com
A dive into the machine learning pipeline at the production stage: a description of the architecture, tools, and general flow of model deployment.
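
For a sense of what the serving end of such a pipeline can look like, here is a minimal sketch that exposes a trained model behind an HTTP endpoint with FastAPI. The model file name, feature schema, and framework choice are all assumptions for illustration; the article itself covers a broader range of tools.

```python
# Minimal model-serving sketch; "model.joblib" and the Features schema are
# hypothetical stand-ins for whatever the training stage actually produces.
import joblib
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
model = joblib.load("model.joblib")  # artifact exported by the training stage

class Features(BaseModel):
    values: list[float]

@app.post("/predict")
def predict(features: Features) -> dict:
    # The model expects a batch, so wrap the single observation in a list.
    prediction = model.predict([features.values])
    return {"prediction": prediction.tolist()}
```

Saved as main.py, this would run with `uvicorn main:app`, turning the model artifact into a prediction service.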

vxlabs.com
I have recently become fascinated with (Variational) Autoencoders and with PyTorch. Kevin Frans has a beautiful blog post online explaining variational autoencoders, with examples in TensorFlow and, importantly, with cat pictures. Jaan Altosaar's blog post takes an even deeper look at VAEs from both the deep learning perspective and the perspective of graphical models. Both of these posts, as well as Diederik Kingma's original 2014 paper, Auto-Encoding Variational Bayes, are more than worth your time.
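
As a companion to those posts, here is a minimal PyTorch sketch of a VAE in the spirit of Kingma's paper: an encoder outputs the mean and log-variance of q(z|x), a latent sample is drawn with the reparameterization trick, and the loss combines reconstruction error with a KL term. The layer sizes assume flattened 28x28 inputs in [0, 1] and are illustrative only.

```python
# Minimal VAE sketch; layer sizes assume flattened 28x28 inputs in [0, 1].
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    def __init__(self, x_dim=784, h_dim=400, z_dim=20):
        super().__init__()
        self.enc = nn.Linear(x_dim, h_dim)
        self.mu = nn.Linear(h_dim, z_dim)       # mean of q(z|x)
        self.logvar = nn.Linear(h_dim, z_dim)   # log-variance of q(z|x)
        self.dec1 = nn.Linear(z_dim, h_dim)
        self.dec2 = nn.Linear(h_dim, x_dim)

    def forward(self, x):
        h = F.relu(self.enc(x))
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick: z = mu + sigma * eps keeps sampling differentiable.
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        x_hat = torch.sigmoid(self.dec2(F.relu(self.dec1(z))))
        return x_hat, mu, logvar

def vae_loss(x, x_hat, mu, logvar):
    # Reconstruction term plus KL divergence of q(z|x) from the unit Gaussian prior.
    recon = F.binary_cross_entropy(x_hat, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl
```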