www.nowozin.net

dustintran.com
One aspect I always enjoy about machine learning is that questions often go back to the basics. The field essentially goes into an existential crisis every dozen years, rethinking our tools and asking foundational questions such as "why neural networks" or "why generative models".[1] This was a theme in my conversations during NIPS 2016 last week, where a frequent topic was the advantages of a Bayesian perspective on machine learning. Not surprisingly, this appeared as a big discussion point during the p...
thirdorderscientist.org

sriku.org

djalil.chafai.net
This post is mainly devoted to a probabilistic proof of a famous theorem due to Schoenberg on radial positive definite functions. Let us begin with a general notion: we say that \( {K:\mathbb{R}^d\times\mathbb{R}^d\rightarrow\mathbb{R}} \) is a positive definite kernel when \[ \forall n\geq1,\ \forall x_1,\ldots,x_n\in\mathbb{R}^d,\ \forall c\in\mathbb{C}^n, \quad\sum_{i=1}^n\sum_{j=1}^n c_iK(x_i,x_j)\bar{c}_j\geq0. \] When \( {K} \) is symmetric, i.e. \( {K(x,y)=K(y,x)} \) for all \( {x,y\in\mathbb{R}^d} \), ...
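As a quick numerical illustration of this definition (not part of the excerpted post), the sketch below evaluates the quadratic form for the Gaussian kernel \( {K(x,y)=e^{-|x-y|^2}} \), a radial kernel of the kind Schoenberg's theorem characterizes. The number of points, the dimension, and the random coefficients are arbitrary choices made for the example.

```python
# A minimal NumPy sketch (not from the original post): numerically checking
# the positive definite kernel condition for the Gaussian kernel
# K(x, y) = exp(-|x - y|^2), a radial kernel covered by Schoenberg's theorem.
import numpy as np

rng = np.random.default_rng(0)
n, d = 50, 3
x = rng.standard_normal((n, d))                            # points x_1, ..., x_n in R^d
c = rng.standard_normal(n) + 1j * rng.standard_normal(n)   # coefficients c in C^n

# Gram matrix K_ij = exp(-|x_i - x_j|^2).
sq_dists = ((x[:, None, :] - x[None, :, :]) ** 2).sum(axis=-1)
K = np.exp(-sq_dists)

# Quadratic form sum_{i,j} c_i K(x_i, x_j) conj(c_j): real (because K is
# symmetric) and nonnegative up to floating-point rounding.
quad = c @ K @ c.conj()
print("quadratic form:", quad.real)

# Equivalent matrix statement: every Gram matrix of a positive definite
# kernel is positive semidefinite, i.e. all its eigenvalues are >= 0.
print("smallest eigenvalue:", np.linalg.eigvalsh(K).min())
```

For a symmetric kernel the two checks agree: the quadratic form is nonnegative for every choice of \( {c\in\mathbb{C}^n} \) exactly when each Gram matrix is positive semidefinite.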