jhui.github.io
francisbach.com
www.jeremykun.com
This post is a sequel to Formulating the Support Vector Machine Optimization Problem. The Karush-Kuhn-Tucker theorem. Generic optimization problems are hard to solve efficiently. However, optimization problems whose objective and constraints have special structure often succumb to analytic simplifications. For example, if you want to optimize a linear function subject to linear equality constraints, you can compute the Lagrangian of the system and find the zeros of its gradient. More generally, optimizing...
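As a minimal worked instance of the Lagrangian computation the excerpt alludes to (with generic $A$, $b$, $c$ of my own choosing, not notation taken from the post): to minimize $c^\top x$ subject to $Ax = b$, form

$L(x, \lambda) = c^\top x + \lambda^\top (Ax - b), \qquad \nabla_x L = c + A^\top \lambda = 0.$

Stationarity forces $c = -A^\top \lambda$, so a finite optimum exists only when $c$ lies in the row space of $A$; in that case every feasible $x$ attains the same value $c^\top x = -\lambda^\top A x = -\lambda^\top b$, and otherwise the objective is unbounded along some direction in the null space of $A$.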
matthewmcateer.me
Important mathematical prerequisites for getting into Machine Learning, Deep Learning, or any of the other spaces...
algorithmsoup.wordpress.com
The "probabilistic method" is the art of applying probabilistic thinking to non-probabilistic problems. Applications of the probabilistic method often feel like magic. Here is my favorite example: Theorem (Erdős, 1965). Call a set $X$ sum-free if for all $a, b \in X$, we have $a + b \not\in X$. For any finite...
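To make the excerpt's "magic" concrete: the theorem it truncates states that any finite set of nonzero integers contains a sum-free subset of more than a third of its elements, and the standard proof uses a random dilation modulo a prime $p \equiv 2 \pmod 3$. Below is a short Python sketch of that argument, not code from the post; the trial count and function names are my own choices for illustration.

import random

def is_prime(n):
    # Trial-division primality test; adequate for the small moduli used here.
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def sum_free_subset(xs, trials=200):
    # Randomized sketch: map each x to t*x mod p for a random t and keep
    # the elements landing in the "middle third" of Z/p, which is sum-free.
    xs = set(xs)
    assert all(x != 0 for x in xs), "elements must be nonzero integers"
    # Pick a prime p = 3k + 2 exceeding every |x|, so residues are nonzero.
    p = max(abs(x) for x in xs) + 1
    while not (p % 3 == 2 and is_prime(p)):
        p += 1
    k = (p - 2) // 3
    middle = set(range(k + 1, 2 * k + 2))  # sum-free inside Z/p
    best = set()
    for _ in range(trials):
        t = random.randrange(1, p)  # random nonzero dilation
        candidate = {x for x in xs if (t * x) % p in middle}
        if len(candidate) > len(best):  # keep the largest subset seen
            best = candidate
    return best

# Example: expect a sum-free subset of {1, ..., 10} with more than 3 elements.
print(sum_free_subset(range(1, 11)))

Any subset returned this way is genuinely sum-free: if $a$, $b$, and $a + b$ all survived, then $ta$, $tb$, and $t(a+b) \equiv ta + tb \pmod p$ would all sit in the middle third of $\mathbb{Z}/p$, which admits no such triple; linearity of expectation over the random $t$ gives the greater-than-$n/3$ size bound.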