- windowsontheory.org
- hackmd.io
- francisbach.com: [AI summary] This text discusses the scaling laws of optimization in machine learning, focusing on asymptotic expansions for both strongly convex and non-strongly convex cases. It covers the derivation of performance bounds using techniques like Laplace's method and the behavior of random minimizers. The text also explains the "weird" behavior observed in certain plots, where non-strongly convex bounds become tight under specific conditions, and connects the theoretical results to practical considerations in optimization algorithms.
- iclr-blogposts.github.io: Identifying, Interpreting & Ablating the Sources of a Deep Learning Puzzle
- goodfire.ai: Goodfire is an AI research company building practical interpretability tools for safe and reliable generative models.