fa.bianp.net

jeremykun.wordpress.com
This post is a sequel to Formulating the Support Vector Machine Optimization Problem. The Karush-Kuhn-Tucker theorem. Generic optimization problems are hard to solve efficiently. However, optimization problems whose objective and constraints have special structure often succumb to analytic simplifications. For example, if you want to optimize a linear function subject to linear equality constraints, one can compute...
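
(An aside, not from the post itself: the excerpt's linear-objective example is cut off, but a closely related case shows the flavor of the analytic simplification. For a convex quadratic objective with linear equality constraints, the KKT conditions are themselves a linear system, so the minimizer comes out in closed form. A minimal sketch, with all data made up for illustration:)

```python
import numpy as np

# Sketch only (not the post's code): minimize 1/2 x^T Q x - c^T x
# subject to A x = b by solving the KKT system
#   [ Q  A^T ] [ x   ]   [ c ]
#   [ A   0  ] [ lam ] = [ b ]
Q = np.array([[2.0, 0.5],
              [0.5, 1.0]])          # positive definite objective
c = np.array([1.0, 1.0])
A = np.array([[1.0, 1.0]])          # one equality constraint: x1 + x2 = 1
b = np.array([1.0])

m = A.shape[0]
kkt = np.block([[Q, A.T],
                [A, np.zeros((m, m))]])
rhs = np.concatenate([c, b])
sol = np.linalg.solve(kkt, rhs)     # stationarity + primal feasibility at once
x, lam = sol[:Q.shape[0]], sol[Q.shape[0]:]
print("minimizer:", x, "multiplier:", lam)
```
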
blog.omega-prime.co.uk
The most fundamental technique in statistical learning is ordinary least squares (OLS) regression. If we have a vector of observations \(y\) and a matrix of features associated with each observation \(X\), then we assume the observations are a linear function of the features plus some (iid) random noise, \(\epsilon\):
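
(For reference, not part of the excerpt: the display equation is cut off above. In standard notation, the model just described and the familiar closed-form OLS estimator are)

\[
y = X\beta + \epsilon,
\qquad
\hat{\beta} \;=\; \arg\min_{\beta} \|y - X\beta\|_2^2 \;=\; (X^\top X)^{-1} X^\top y,
\]

assuming \(X^\top X\) is invertible.
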
www.ethanepperly.com

francisbach.com
[AI summary] The blog post discusses non-convex quadratic optimization problems and their solutions, including the use of strong duality, semidefinite programming (SDP) relaxations, and efficient algorithms. It highlights the importance of these problems in machine learning and optimization, particularly for non-convex problems where strong duality holds. The post also mentions the equivalence between certain non-convex problems and their convex relaxations, such as SDP, and provides examples of when these relaxations are tight or not. Key concepts include the role of eigenvalues in quadratic optimization, the use of Lagrange multipliers, and the application of methods like Newton-Raphson for solving these problems. The author also acknowledges contributions...
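
(A toy illustration of the eigenvalue connection the summary mentions; this is a sketch of mine, not code from the post. The simplest non-convex quadratic problem, minimizing \(x^\top A x\) over the unit sphere, is solved exactly by an eigenvector for the smallest eigenvalue of \(A\).)

```python
import numpy as np

# Sketch only: the non-convex problem
#     minimize x^T A x   subject to   ||x||_2 = 1
# has optimal value lambda_min(A), attained at a corresponding eigenvector.
rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
A = (M + M.T) / 2                      # symmetric, typically indefinite

eigvals, eigvecs = np.linalg.eigh(A)   # eigenvalues in ascending order
x_star = eigvecs[:, 0]                 # unit-norm minimizer
print("optimal value lambda_min:", eigvals[0])
print("x*^T A x*               :", x_star @ A @ x_star)
```
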