nicholas.carlini.com
tiao.io
We propose a framework that lifts the capabilities of graph convolutional networks (GCNs) to scenarios where no input graph is given and increases their robustness to adversarial attacks. We formulate a joint probabilistic model that considers a prior distribution over graphs along with a GCN-based likelihood and develop a stochastic variational inference algorithm to estimate the graph posterior and the GCN parameters jointly. To address the problem of propagating gradients through latent variables draw...
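The excerpt cuts off at the gradient-propagation problem, but a standard way to differentiate through discrete latent edge variables is a binary-Concrete (Gumbel-softmax) relaxation of Bernoulli edges. The paper's actual estimator is not spelled out here, so the following is only a minimal NumPy sketch of that generic technique; the `logits` parameterization and the temperature `tau` are assumptions, not the authors' choices.

```python
import numpy as np

def sample_relaxed_adjacency(logits, tau=0.5, rng=None):
    """Binary-Concrete (Gumbel-softmax) relaxation of Bernoulli edge
    variables: returns a soft adjacency matrix with entries in (0, 1),
    so gradients can flow through the sample (forward pass only).

    logits: (n, n) array of edge log-odds (an assumed parameterization).
    tau: temperature; smaller values give harder, more discrete samples.
    """
    rng = np.random.default_rng() if rng is None else rng
    # Logistic noise (difference of two Gumbel variables), clipped away
    # from {0, 1} for numerical stability.
    u = rng.uniform(1e-9, 1 - 1e-9, size=logits.shape)
    noise = np.log(u) - np.log1p(-u)
    # Sigmoid of the tempered, perturbed logits gives the relaxed sample.
    soft = 1.0 / (1.0 + np.exp(-(logits + noise) / tau))
    # Symmetrize so the draw is a valid undirected adjacency matrix.
    return (soft + soft.T) / 2.0
```

As `tau` is annealed toward zero, each entry concentrates near 0 or 1, recovering (in the limit) hard Bernoulli edge samples.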
francisbach.com
tiao.io
This paper is a follow-up to our working paper, previously presented at the NeurIPS 2019 Graph Representation Learning Workshop, now with significantly expanded experimental analyses.
fabricebaudoin.blog
In this section, we consider a diffusion operator $latex L=\sum_{i,j=1}^n \sigma_{ij}(x) \frac{\partial^2}{\partial x_i \partial x_j} + \sum_{i=1}^n b_i(x) \frac{\partial}{\partial x_i},$ where $latex b_i$ and $latex \sigma_{ij}$ are continuous functions on $latex \mathbb{R}^n$ and, for every $latex x \in \mathbb{R}^n$, the matrix $latex (\sigma_{ij}(x))_{1\le i,j\le n}$ is symmetric and nonnegative (positive semidefinite). Our...
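As a concrete special case (not taken from the excerpt itself): with $latex n=1$, constant coefficients $latex \sigma_{11}=\tfrac{1}{2}$ and $latex b_1=0$, the operator reduces to

```latex
% Special case: n = 1, \sigma_{11} = 1/2, b_1 = 0
L \;=\; \frac{1}{2}\,\frac{d^2}{dx^2},
```

the generator of standard one-dimensional Brownian motion, which satisfies the symmetry and nonnegativity assumptions trivially.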