suricrasia.online (you are here)

francisbach.com
[AI summary] The blog post discusses non-convex quadratic optimization problems and their solutions, including the use of strong duality, semidefinite programming (SDP) relaxations, and efficient algorithms. It highlights the importance of these problems in machine learning and optimization, particularly for non-convex problems where strong duality holds. The post also mentions the equivalence between certain non-convex problems and their convex relaxations, such as SDP, and provides examples of when these relaxations are tight or not. Key concepts include the role of eigenvalues in quadratic optimization, the use of Lagrange multipliers, and the application of methods like Newton-Raphson for solving these problems. The author also acknowledges contributions...

www.johndcook.com
Notes on math and software: probability, approximations, special functions, regular expressions, Python, C++, R, etc.

www.jeremykun.com
Machine learning is broadly split into two camps, statistical learning and non-statistical learning. The latter we've started to get a good picture of on this blog; we approached perceptrons, decision trees, and neural networks from a non-statistical perspective. And generally "statistical" learning is just that, a perspective. Data is phrased in terms of independent and dependent variables, and statistical techniques are leveraged against the data. In this post we'll focus on the simplest example of thi...

liorsinai.github.io
A series on automatic differentiation in Julia. Part 1 provides an overview and defines explicit chain rules.