You are here: jeremykun.wordpress.com

nhigham.com
10.3 parsecs away

The Cayley-Hamilton Theorem says that a square matrix $A$ satisfies its characteristic equation, that is $p(A) = 0$ where $p(t) = \det(tI - A)$ is the characteristic polynomial. This statement is not simply the substitution "$p(A) = \det(A - A) = 0$", which is not valid since $t$ must remain a scalar...
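
As a quick illustration of what the theorem does say, here is a small numpy sketch (my own, not from the linked post) checking that a 2x2 matrix annihilates its characteristic polynomial $p(t) = t^2 - \mathrm{tr}(A)\,t + \det(A)$:

```python
import numpy as np

# For a 2x2 matrix A the characteristic polynomial is
#   p(t) = t^2 - tr(A) * t + det(A),
# and Cayley-Hamilton asserts p(A) = A^2 - tr(A)*A + det(A)*I = 0.
A = np.array([[2.0, 1.0],
              [3.0, 4.0]])

p_of_A = A @ A - np.trace(A) * A + np.linalg.det(A) * np.eye(2)

print(p_of_A)  # the zero matrix, up to floating-point rounding
```

The scalar $t$ is replaced by the matrix $A$ only after the determinant has been expanded into a polynomial in $t$, which is exactly why the naive substitution fails.
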
www.ayoub-benaissa.com
8.1 parsecs away

This is the first of a series of blog posts about the use of homomorphic encryption for deep learning. Here I introduce the basics and terminology, as well as link to external resources that might help with a deeper understanding of the topic.
www.jeremykun.com
1.9 parsecs away

In this article I'll derive a trick used in FHE called sample extraction. In brief, it allows one to partially convert a ciphertext in the Ring Learning With Errors (RLWE) scheme to the Learning With Errors (LWE) scheme. Here are some other articles I've written about other FHE building blocks, though they are not prerequisites...
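
To make the trick concrete, here is a toy numpy sketch of coefficient extraction (my own illustration with assumed toy parameters, not code from the article): in the negacyclic ring $\mathbb{Z}_q[X]/(X^N + 1)$ an RLWE ciphertext satisfies $b(X) = a(X)s(X) + m(X) + e(X)$, and the $i$-th coefficient of $b - a \cdot s$ is an ordinary LWE inner product, so rearranging and negating the coefficients of $a(X)$ yields an LWE encryption of the $i$-th plaintext coefficient.

```python
import numpy as np

q, N = 2**15, 8  # toy parameters, far too small for real security
rng = np.random.default_rng(0)

def negacyclic_mul(u, v):
    """Multiply two polynomials in Z_q[X] / (X^N + 1)."""
    w = np.zeros(N, dtype=np.int64)
    for j in range(N):
        for k in range(N):
            sign = 1 if j + k < N else -1  # X^N = -1 in this ring
            w[(j + k) % N] = (w[(j + k) % N] + sign * u[j] * v[k]) % q
    return w

# Toy RLWE ciphertext (a, b) with b = a*s + m + e mod q.
s = rng.integers(0, 2, N)    # binary secret key
a = rng.integers(0, q, N)    # uniformly random mask
m = rng.integers(0, 4, N)    # small plaintext coefficients
e = rng.integers(-1, 2, N)   # small noise
b = (negacyclic_mul(a, s) + m + e) % q

# Extract an LWE ciphertext of m[i]: rearrange and negate a's
# coefficients so that <a_lwe, s> equals coefficient i of a*s.
i = 3
a_lwe = np.array([a[i - k] if k <= i else (-a[N + i - k]) % q
                  for k in range(N)])
b_lwe = b[i]

# LWE decryption recovers m[i] up to the small noise term e[i].
print((b_lwe - a_lwe @ s) % q, m[i])
```

The extraction needs no key material and adds no noise; it is only an index shuffle of the public mask, because multiplication by $X$ in this ring shifts coefficients negacyclically.
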
vxlabs.com
52.1 parsecs away

I have recently become fascinated with (Variational) Autoencoders and with PyTorch. Kevin Frans has a beautiful blog post online explaining variational autoencoders, with examples in TensorFlow and, importantly, with cat pictures. Jaan Altosaar's blog post takes an even deeper look at VAEs from both the deep learning perspective and the perspective of graphical models. Both of these posts, as well as Diederik Kingma's original 2014 paper Auto-Encoding Variational Bayes, are more than worth your time.
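
For a flavor of the core mechanism those posts explain, here is a generic PyTorch sketch of the reparameterization trick at the heart of a VAE (a textbook-style illustration, not code from any of the posts above):

```python
import torch
import torch.nn as nn

class TinyVAE(nn.Module):
    """Minimal VAE: 784-dim inputs (e.g. flattened MNIST), 20-dim latents."""
    def __init__(self, d_in=784, d_latent=20):
        super().__init__()
        self.enc = nn.Linear(d_in, 400)
        self.mu = nn.Linear(400, d_latent)
        self.logvar = nn.Linear(400, d_latent)
        self.dec = nn.Sequential(nn.Linear(d_latent, 400), nn.ReLU(),
                                 nn.Linear(400, d_in), nn.Sigmoid())

    def forward(self, x):
        h = torch.relu(self.enc(x))
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick: z = mu + sigma * eps keeps the
        # sampling step differentiable with respect to mu and sigma.
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        return self.dec(z), mu, logvar

def vae_loss(recon, x, mu, logvar):
    # Reconstruction term plus KL divergence from q(z|x) to N(0, I).
    bce = nn.functional.binary_cross_entropy(recon, x, reduction="sum")
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return bce + kld
```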