siddhartha-gadgil.github.io
almostsuremath.com
The aim of this post is to motivate the idea of representing probability spaces as states on a commutative algebra. We will consider how this abstract construction relates directly to classical probabilities. In the standard axiomatization of probability theory, due to Kolmogorov, the central construct is a probability space $(\Omega,\mathcal F,\mathbb P)$. This consists...
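The excerpt above can be made concrete with a sketch of the correspondence it alludes to: on a classical probability space, expectation itself is a state, i.e. a positive, unital linear functional on the commutative algebra of bounded random variables. This is a standard observation, not a quote from the post:

```latex
% Sketch: for the probability space $(\Omega,\mathcal F,\mathbb P)$,
% define $p$ on bounded measurable $f\colon\Omega\to\mathbb R$ by
p(f) = \mathbb{E}[f] = \int_\Omega f \, d\mathbb{P},
% which is linear and satisfies the state axioms:
\qquad p(1) = 1, \qquad p(f) \ge 0 \ \text{whenever}\ f \ge 0.
```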
kuruczgy.com
[AI summary] The article explores the intersection of functional programming and logic through the lens of dependent types. It begins with foundational concepts like type constructors and inductive types, then delves into the Curry-Howard isomorphism, which links programs to mathematical proofs. The discussion covers how types represent propositions, functions as implications, and inductive types as proof strategies. Examples include defining logical relations like less-than-or-equal and equality, and demonstrating how to prove properties like universal quantification and mathematical identities. The article concludes with an overview of resources for further study in proof assistants like Coq and Idris, emphasizing the practical applications of dependent...
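The summary's central points — an inductively defined less-than-or-equal relation, and proofs as programs — can be sketched in a few lines. This is an illustrative example in Lean 4 syntax (the article itself points to Coq and Idris), not taken from the article:

```lean
-- Less-than-or-equal as an inductive proposition: Le m n holds when
-- n is reachable from m by repeatedly adding one.
inductive Le : Nat → Nat → Prop where
  | refl (n : Nat) : Le n n
  | step {m n : Nat} : Le m n → Le m (n + 1)

-- Curry-Howard in action: this term is simultaneously a program and
-- a proof that 2 ≤ 4.
example : Le 2 4 := Le.step (Le.step (Le.refl 2))
```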
mattbaker.blog
In my previous post, I presented a proof of the existence portion of the structure theorem for finitely generated modules over a PID based on the Smith Normal Form of a matrix. In this post, I'd like to explain how the uniqueness portion of that theorem is actually a special case of a more general...
yang-song.net
This blog post focuses on a promising new direction for generative modeling. We can learn score functions (gradients of log probability density functions) on a large number of noise-perturbed data distributions, then generate samples with Langevin-type sampling. The resulting generative models, often called score-based generative models, have several important advantages over existing model families: GAN-level sample quality without adversarial training, flexible model architectures, exact log-likelihood ...
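The sampling step the excerpt describes can be sketched in a few lines. This is a minimal toy, not the post's implementation: instead of a learned score model, it uses the analytic score of a standard Gaussian, s(x) = -x, and runs plain (unadjusted) Langevin dynamics toward that target.

```python
import numpy as np

def score(x):
    # Analytic score of N(0, I): the gradient of log p(x) is -x.
    # A score-based model would replace this with a neural network.
    return -x

def langevin_sample(n_steps=1000, step_size=0.01, dim=2, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.normal(size=dim) * 5.0  # initialize far from the target
    for _ in range(n_steps):
        noise = rng.normal(size=dim)
        # Langevin update: drift along the score plus injected noise.
        x = x + 0.5 * step_size * score(x) + np.sqrt(step_size) * noise
    return x

sample = langevin_sample()
print(sample.shape)  # after many steps, samples approximate N(0, I)
```

With a score learned on noise-perturbed data distributions, the same loop (typically annealed over decreasing noise levels) is what generates samples.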