justindomke.wordpress.com

tomaugspurger.net
This work is supported by Anaconda Inc. This post describes a recent improvement to TPOT, an automated machine learning library for Python that does feature engineering and hyper-parameter optimization for you. TPOT uses genetic algorithms to evaluate which models are performing well and to choose new models to try out in the next generation.

Parallelizing TPOT: In TPOT-730, we made some modifications to TPOT to support distributed training. As a TPOT user, the only changes you need to make to your code are

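The generational loop mentioned in the TPOT entry above (score candidate models, keep the fittest, mutate them into the next generation) can be sketched in plain Python. This is an illustrative toy, not TPOT's actual implementation: the `fitness` objective and the `depth`/`lr` hyper-parameters are made-up stand-ins for a real model-scoring function.

```python
import random

def fitness(params):
    # Toy objective standing in for a cross-validation score:
    # the "best model" here has depth 5 and learning rate 0.1.
    return -((params["depth"] - 5) ** 2 + (params["lr"] - 0.1) ** 2)

def mutate(params, rng):
    # Perturb one candidate's hyper-parameters slightly.
    child = dict(params)
    child["depth"] = max(1, child["depth"] + rng.choice([-1, 0, 1]))
    child["lr"] = min(1.0, max(0.01, child["lr"] * rng.choice([0.5, 1.0, 2.0])))
    return child

def evolve(generations=20, pop_size=12, seed=0):
    rng = random.Random(seed)
    pop = [{"depth": rng.randint(1, 10), "lr": rng.choice([0.01, 0.1, 1.0])}
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]          # selection: keep the fittest half
        pop = survivors + [mutate(rng.choice(survivors), rng)
                           for _ in range(pop_size - len(survivors))]
    return max(pop, key=fitness)

best = evolve()
```

Because the best individuals always survive into the next generation, the best fitness found never decreases; distributed TPOT parallelizes the expensive part of this loop, evaluating the fitness of each candidate.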
francisbach.com
[AI summary] This post discusses the scaling laws of optimization in machine learning, focusing on asymptotic expansions for both strongly convex and non-strongly convex cases. It covers the derivation of performance bounds using techniques like Laplace's method, the behavior of random minimizers, and the "weird" behavior observed in certain plots, where non-strongly convex bounds become tight under specific conditions. The analysis connects the theoretical results to practical considerations in optimization algorithms.

blog.research.google
[AI summary] This blog post introduces Stochastic Re-weighted Gradient Descent (RGD), a novel optimization algorithm that improves deep neural network performance by re-weighting data points during training based on their difficulty, enhancing generalization and robustness against data distribution shifts.

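The core idea in the RGD entry above, upweighting examples the model currently finds difficult, can be sketched on a toy 1-D least-squares problem. This is a generic difficulty-based re-weighting (weights proportional to the exponential of each example's loss), not the paper's exact algorithm; the data, learning rate, and model are illustrative assumptions.

```python
import math

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.1, 5.9, 8.2]   # roughly y = 2x

def per_example_losses(w):
    return [(w * x - y) ** 2 for x, y in zip(xs, ys)]

def reweighted_step(w, lr=0.01):
    losses = per_example_losses(w)
    raw = [math.exp(l) for l in losses]            # harder examples get more weight
    total = sum(raw)
    weights = [r / total for r in raw]
    # Weighted gradient of the squared error: d/dw (w*x - y)^2 = 2*(w*x - y)*x
    grad = sum(wt * 2 * (w * x - y) * x
               for wt, x, y in zip(weights, xs, ys))
    return w - lr * grad

w = 0.0
for _ in range(500):
    w = reweighted_step(w)
```

Early in training the hardest example dominates the update; as the fit improves, the per-example losses shrink and the weights flatten back toward uniform, so the scheme reduces to ordinary gradient descent near the optimum.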
opensource.org
[AI summary] The post discusses open source licenses and their categorization, along with information about cookie consent management on the Open Source Initiative website.