 
      
nhigham.com: The Cayley-Hamilton Theorem says that a square matrix $latex A$ satisfies its characteristic equation, that is, $latex p(A) = 0$ where $latex p(t) = \det(tI - A)$ is the characteristic polynomial. This statement is not simply the substitution "$latex p(A) = \det(A - A) = 0$", which is not valid since $latex t$ must remain a scalar...
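As a concrete illustration (my addition, not part of the excerpt), the $latex 2 \times 2$ case can be checked by hand: for $latex A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}$ the characteristic polynomial is $latex p(t) = t^2 - (a+d)t + (ad - bc)$, and direct multiplication shows that $latex A^2 - (a+d)A + (ad - bc)I = 0$ holds as a genuine matrix identity. By contrast, substituting $latex A$ for $latex t$ inside the determinant would only produce the scalar $latex \det(0) = 0$, which is a triviality, not the theorem.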
www.ayoub-benaissa.com: This is the first in a series of blog posts about the use of homomorphic encryption for deep learning. Here I introduce the basics and terminology, and link to external resources that might help with a deeper understanding of the topic.
www.jeremykun.com: In this article I'll derive a trick used in FHE called sample extraction. In brief, it allows one to partially convert a ciphertext in the Ring Learning With Errors (RLWE) scheme to the Learning With Errors (LWE) scheme. Here are some other articles I've written about other FHE building blocks, though they are not prerequisites...
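To make the trick concrete, here is a minimal numpy sketch of extracting the constant coefficient of the message, assuming the negacyclic ring $latex \mathbb{Z}[x]/(x^n + 1)$ and ignoring the modulus $latex q$, the error distribution, and message encoding; the function names are my own, not from the article.

```python
import numpy as np

def negacyclic_mul(p, q):
    """Multiply two polynomials modulo x^n + 1, coefficients as length-n arrays."""
    n = len(p)
    r = np.zeros(n, dtype=np.int64)
    for i in range(n):
        for j in range(n):
            if i + j < n:
                r[i + j] += p[i] * q[j]
            else:
                r[i + j - n] -= p[i] * q[j]  # x^n = -1 in Z[x]/(x^n + 1)
    return r

def extract_sample(a, b):
    """Turn an RLWE pair (a, b) with b = a*s + e + m into an LWE sample
    (a_lwe, b0) satisfying b0 - <a_lwe, s> = m0 + e0."""
    n = len(a)
    a_lwe = np.empty(n, dtype=np.int64)
    a_lwe[0] = a[0]
    for j in range(1, n):
        a_lwe[j] = -a[n - j]  # negacyclic wraparound picks up a minus sign
    return a_lwe, b[0]

# sanity check on a toy instance (error set to zero for clarity)
n = 8
rng = np.random.default_rng(0)
a = rng.integers(-5, 5, n)
s = rng.integers(0, 2, n)   # binary secret key
m = rng.integers(-5, 5, n)  # message polynomial
b = negacyclic_mul(a, s) + m
a_lwe, b0 = extract_sample(a, b)
assert b0 - a_lwe @ s == m[0]
```

The sign flips in `extract_sample` come from the relation $latex x^n = -1$: the constant coefficient of the product $latex a(x)s(x)$ is $latex a_0 s_0 - a_{n-1}s_1 - \cdots - a_1 s_{n-1}$, which is exactly the inner product above.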
vxlabs.com: I have recently become fascinated with (Variational) Autoencoders and with PyTorch. Kevin Frans has a beautiful blog post online explaining variational autoencoders, with examples in TensorFlow and, importantly, with cat pictures. Jaan Altosaar's blog post takes an even deeper look at VAEs from both the deep learning perspective and the perspective of graphical models. Both of these posts, as well as Diederik Kingma's original 2014 paper Auto-Encoding Variational Bayes, are more than worth your time.
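For orientation, here is a minimal PyTorch sketch of the two ingredients those posts center on, the reparameterization trick and the negative ELBO loss; the layer sizes are hypothetical (flattened MNIST-style inputs), and this is not the code from the linked posts or the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    """Minimal VAE for flattened 28x28 images with pixel values in [0, 1]."""
    def __init__(self, x_dim=784, h_dim=400, z_dim=20):
        super().__init__()
        self.enc = nn.Linear(x_dim, h_dim)
        self.mu = nn.Linear(h_dim, z_dim)
        self.logvar = nn.Linear(h_dim, z_dim)
        self.dec1 = nn.Linear(z_dim, h_dim)
        self.dec2 = nn.Linear(h_dim, x_dim)

    def forward(self, x):
        h = F.relu(self.enc(x))
        mu, logvar = self.mu(h), self.logvar(h)
        # reparameterization trick: z = mu + sigma * eps keeps sampling differentiable
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        x_hat = torch.sigmoid(self.dec2(F.relu(self.dec1(z))))
        return x_hat, mu, logvar

def elbo_loss(x_hat, x, mu, logvar):
    # negative ELBO: reconstruction term plus KL(q(z|x) || N(0, I))
    recon = F.binary_cross_entropy(x_hat, x, reduction='sum')
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl

# usage on a fake batch
x = torch.rand(16, 784)
model = VAE()
x_hat, mu, logvar = model(x)
loss = elbo_loss(x_hat, x, mu, logvar)
```

Having the encoder output $latex \log \sigma^2$ rather than $latex \sigma$ keeps the standard deviation positive without any explicit constraint.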