almostsuremath.com

francisbach.com
[AI summary] This text discusses the scaling laws of optimization in machine learning, focusing on asymptotic expansions for both strongly convex and non-strongly convex cases. It covers the derivation of performance bounds using techniques like Laplace's method and the behavior of random minimizers. The text also explains the 'weird' behavior observed in certain plots, where non-strongly convex bounds become tight under specific conditions. The analysis connects theoretical results to practical considerations in optimization algorithms.
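The francisbach.com summary above refers to Laplace's method for deriving asymptotic bounds. For reference only, this is the classical textbook one-dimensional statement, not a formula taken from the post itself: if f attains a unique maximum at an interior point x_0 of [a, b] with f''(x_0) < 0, then

\[
  \int_a^b e^{M f(x)}\,dx \;\sim\; e^{M f(x_0)} \sqrt{\frac{2\pi}{M\,\lvert f''(x_0)\rvert}} \qquad \text{as } M \to \infty .
\]
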
ianwrightsite.wordpress.com
Riemann's Zeta function is an infinite sublation of Hegelian integers.

terrytao.wordpress.com
Many modern mathematical proofs are a combination of conceptual arguments and technical calculations. There is something of a tradeoff between the two: one can add more conceptual arguments to try ...

www.approximatelycorrect.com
By Zachary C. Lipton* & Jacob Steinhardt* (*equal authorship). Originally presented at ICML 2018: Machine Learning Debates [arXiv link]. Published in Communications of the ACM. 1 Introduction: Collectively, machine learning (ML) researchers are engaged in ...