fabricebaudoin.blog
In this section, we consider a diffusion operator \( L=\sum_{i,j=1}^n \sigma_{ij}(x) \frac{\partial^2}{\partial x_i \partial x_j} + \sum_{i=1}^n b_i(x)\frac{\partial}{\partial x_i}, \) where \( b_i \) and \( \sigma_{ij} \) are continuous functions on \( \mathbb{R}^n \) and, for every \( x \in \mathbb{R}^n \), the matrix \( (\sigma_{ij}(x))_{1\le i,j\le n} \) is symmetric and nonnegative. Our...
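For concreteness (a standard special case, not stated in the excerpt): taking \( \sigma_{ij}(x)=\tfrac{1}{2}\delta_{ij} \) and \( b_i=0 \) reduces \( L \) to \[ L=\frac{1}{2}\sum_{i=1}^n \frac{\partial^2}{\partial x_i^2}=\frac{1}{2}\Delta, \] one half of the Laplacian, the generator of standard Brownian motion on \( \mathbb{R}^n \).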
djalil.chafai.net
This post is mainly devoted to a probabilistic proof of a famous theorem due to Schoenberg on radial positive definite functions. Let us begin with a general notion: we say that \( K:\mathbb{R}^d\times\mathbb{R}^d\rightarrow\mathbb{R} \) is a positive definite kernel when \[ \forall n\geq1,\ \forall x_1,\ldots,x_n\in\mathbb{R}^d,\ \forall c\in\mathbb{C}^n, \quad\sum_{i=1}^n\sum_{j=1}^n c_i K(x_i,x_j)\bar{c}_j\geq0. \] When \( K \) is symmetric, i.e. \( K(x,y)=K(y,x) \) for...
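A quick numerical illustration (my own sketch, not from the post): the Gaussian kernel \( K(x,y)=e^{-\|x-y\|^2} \) is the classic radial positive definite function covered by Schoenberg's theorem, and the definition above says exactly that every Gram matrix \( (K(x_i,x_j))_{ij} \) it produces is positive semidefinite, which can be checked on random points:

```python
import numpy as np

# Gaussian radial kernel K(x, y) = exp(-||x - y||^2), the standard example
# of a radial positive definite function in Schoenberg's theorem.
def gaussian_gram(X):
    # Pairwise squared distances via ||x - y||^2 = ||x||^2 + ||y||^2 - 2<x, y>.
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-d2)

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 3))   # n = 50 points x_1, ..., x_n in R^3
G = gaussian_gram(X)               # Gram matrix (K(x_i, x_j))_{ij}

# Positive definiteness of the kernel means G is positive semidefinite:
# all eigenvalues nonnegative, up to floating-point error.
print(np.linalg.eigvalsh(G).min() >= -1e-10)   # True
```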
francisbach.com
[AI summary] This article explores the properties of matrix relative entropy and its convexity, linking them to machine learning and information theory. It discusses the use of positive definite matrices in various contexts, including concentration inequalities and kernel methods. The article also includes a lemma on matrix cumulant generating functions and its proof, as well as references to the relevant literature.
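As a small numerical sketch (my own, using the common convention \( D(A\|B)=\mathrm{tr}(A\log A - A\log B - A + B) \) for positive definite matrices; the post's normalization may differ):

```python
import numpy as np
from scipy.linalg import logm

def matrix_relative_entropy(A, B):
    # D(A||B) = tr(A log A - A log B - A + B), one common convention for
    # positive definite A, B; jointly convex in (A, B).
    return np.trace(A @ logm(A) - A @ logm(B) - A + B).real

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
N = rng.standard_normal((4, 4))
A = M @ M.T + np.eye(4)   # random positive definite matrices
B = N @ N.T + np.eye(4)

print(matrix_relative_entropy(A, B) >= 0)               # True (Klein's inequality)
print(np.isclose(matrix_relative_entropy(A, A), 0.0))   # True: D(A||A) = 0
```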
cstheory-events.org
July 29 - August 1, 2024, DIMACS Center, Rutgers University, CoRE Building, 96 Frelinghuysen Road, Piscataway, NJ 08854. http://dimacs.rutgers.edu/events/details?eID=2785 Submission deadline: May 19, 2024. Registration deadline: May 19, 2024. This summer DIMACS will hold an advanced workshop for graduate students in complexity theory! Our goal is to bring together up-and-coming complexity researchers and to introduce...