You are here: www.nowozin.net
www.jeremykun.com
14.0 parsecs away

Machine learning is broadly split into two camps, statistical learning and non-statistical learning. The latter we've started to get a good picture of on this blog; we approached perceptrons, decision trees, and neural networks from a non-statistical perspective. And generally "statistical" learning is just that, a perspective. Data is phrased in terms of independent and dependent variables, and statistical techniques are leveraged against the data. In this post we'll focus on the simplest example of thi...
dustintran.com
13.3 parsecs away

One aspect I always enjoy about machine learning is that questions often go back to the basics. The field essentially goes into an existential crisis every dozen years, rethinking our tools and asking foundational questions such as "why neural networks" or "why generative models". This was a theme in my conversations during NIPS 2016 last week, where a frequent topic was the advantages of a Bayesian perspective on machine learning. Not surprisingly, this appeared as a big discussion point during the p...
thirdorderscientist.org
12.4 parsecs away
codethrasher.com
84.9 parsecs away
A covector is a linear mapping from a vector space to its field of scalars; in other words, a linear function that acts on a vector and yields a real number (a scalar): \begin{equation} \alpha\,:\,\mathbf{V} \longrightarrow \mathbb{R} \end{equation} Simplistically, a covector can be thought of as a "row vector": \begin{equation} \begin{bmatrix} 1 & 2 \end{bmatrix} \end{equation} This might look like an ordinary vector, and in an orthonormal basis the two can indeed be identified component by component, but that identification does not hold in general.
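The action of a covector on a vector can be sketched numerically: pairing the row vector above with a column vector produces a single scalar. A minimal sketch, assuming the example covector \([1, 2]\) from the text and a hypothetical vector \([3, 4]\):

```python
import numpy as np

# Covector alpha, written as a row; acts on vectors to give a scalar.
alpha = np.array([1.0, 2.0])

# A hypothetical vector v in V (values chosen only for illustration).
v = np.array([3.0, 4.0])

# The pairing alpha(v) is the sum of componentwise products:
# alpha(v) = 1*3 + 2*4 = 11
result = alpha @ v
print(result)  # 11.0
```

The point of the sketch is only that a covector consumes a whole vector and returns one number; the componentwise formula is what makes the "row vector" picture work in an orthonormal basis.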