You are here: www.let-all.com

Nearby destinations:
- windowsontheory.org (2.6 parsecs away): Previous post: "ML theory with bad drawings". Next post: "What do neural networks learn and when do they learn it"; see also all seminar posts and the course webpage. Lecture video (starts at slide 2 since I hit the record button 30 seconds too late - sorry!) - slides (PDF) - slides (PowerPoint with ink and animation)...

- francisbach.com (3.5 parsecs away): [AI summary] This text discusses the scaling laws of optimization in machine learning, focusing on asymptotic expansions for both the strongly convex and non-strongly convex cases. It covers the derivation of performance bounds using techniques such as Laplace's method, along with the behavior of random minimizers. The text also explains the "weird" behavior observed in certain plots, where non-strongly convex bounds become tight under specific conditions, and connects the theoretical results to practical considerations in optimization algorithms.

- www.jeremykun.com (2.4 parsecs away): When addressing the question of what it means for an algorithm to learn, one can imagine many different models, and there are quite a few. This invariably raises the question of which models are "the same" and which are "different," along with a precise description of how we're comparing models. We've seen one learning model so far, called Probably Approximately Correct (PAC), which espouses the following answer to the learning question:

- fodsi.us (15.4 parsecs away): [AI summary] The ML4A Virtual Workshop explores how machine learning enhances classical algorithms through data-driven approaches, featuring talks on deep generative models, model-based deep learning, and learning-augmented algorithms.