petewarden.com
blog.adnansiddiqi.me

Learn the basics of Large Language Models (LLMs) in this introduction to the GenAI series. Discover how LLMs work, their architecture, and practical applications such as customer support, content creation, and software development.
www.mrowe.co.za

Michael Rowe is an Associate Professor in the Department of Physiotherapy at the University of the Western Cape in Cape Town, South Africa.
inbetaphysio.com

In this episode, we discuss the implications of generative AI for assessment, and for learning and teaching more broadly. This was a wide-ranging conversation that explored some of the detail of how language models work, their inability to compare responses against valid models of the world, practical uses for AI in teaching, learning, and assessment, and the risks of AI being trained on data generated by AI.
www.paepper.com

Today's paper: "Rethinking 'Batch' in BatchNorm" by Wu & Johnson.

"BatchNorm is a critical building block in modern convolutional neural networks. Its unique property of operating on 'batches' instead of individual samples introduces significantly different behaviors from most other operations in deep learning. As a result, it leads to many hidden caveats that can negatively impact a model's performance in subtle ways."

This quotation from the paper's abstract (the emphasis is mine) caught my attention. Let's explore these subtle ways in which BatchNorm can negatively impact your model's performance! Wu & Johnson's paper can be found on arXiv.
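The quoted property (normalizing with batch statistics rather than per-sample statistics) can be sketched in a few lines of plain Python. This is an illustrative toy, not code from the paper or any framework: the `batch_norm` helper below is a hypothetical function showing why the normalized value of one sample depends on the other samples that happen to share its batch.

```python
# Toy sketch (illustrative assumption, not the paper's code): BatchNorm
# normalizes each value using the mean and variance of the whole batch,
# so a sample's output changes when its batch-mates change.

def batch_norm(batch, eps=1e-5):
    """Normalize a list of scalars using the batch's own mean/variance."""
    n = len(batch)
    mean = sum(batch) / n
    var = sum((x - mean) ** 2 for x in batch) / n
    return [(x - mean) / (var + eps) ** 0.5 for x in batch]

# The same input value 1.0 receives different normalized outputs
# depending on which batch it appears in.
out_a = batch_norm([1.0, 2.0, 3.0])
out_b = batch_norm([1.0, 10.0, 20.0])
print(out_a[0], out_b[0])  # the two values differ
```

This batch dependence is exactly what makes BatchNorm behave differently at training time (batch statistics) and inference time (running averages), which is the source of several of the caveats the paper analyzes.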