www.johnmyleswhite.com
I'm currently reading Luce's "Response Times". If you don't know anything about response times, they are very easily defined: a response time is the length of time it takes a person to respond to a simple request, measured from the moment when the request is made to the moment when the person's response is recorded. In principle, you can measure response times when asking people to indicate that they've heard a tone as easily as you can measure them when you've asked people to solve a problem in calculus.
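That definition is operational enough to demo in a few lines. Here is a minimal Python sketch (mine, not from Luce's book) that timestamps the request and the response and reports the difference:

```python
# Minimal sketch of measuring a response time (illustrative, not from
# Luce's book): timestamp the request, block until the response arrives,
# and report the elapsed time.
import time

def measure_response_time(prompt="Press Enter as soon as you see this: "):
    start = time.perf_counter()          # moment the request is made
    input(prompt)                        # blocks until the person responds
    return time.perf_counter() - start   # response time in seconds

if __name__ == "__main__":
    print(f"Response time: {measure_response_time():.3f} s")
```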
opguides.info
10 - Probability/Stats: Seeing Theory: A Visual Introduction to Probability and Statistics. Why and where are these used? (E.g., music applications, part failure rates, tolerances.) Basics: For the following, I'll be using a die-roll example, where the events are the totals of two dice. The sample space of this is \(S = \{2,3,4,5,6,7,8,9,10,11,12\}\). Note that 1 isn't possible, as the lowest total is both dice showing 1.
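To make that sample space concrete, here is a short Python sketch (mine, not from the OpGuides page) that enumerates all 36 ordered outcomes of two dice and tallies each total:

```python
# Enumerate the 36 ordered outcomes of rolling two fair dice and tally
# how often each total occurs; the keys recover S = {2, ..., 12}.
from collections import Counter
from itertools import product

totals = Counter(a + b for a, b in product(range(1, 7), repeat=2))

print("S =", sorted(totals))               # [2, 3, ..., 12]; 1 never occurs
for total in sorted(totals):
    print(f"P(total = {total:2d}) = {totals[total]}/36")
```

Note that the totals are not equally likely: 7 occurs in 6 of the 36 outcomes, while 2 and 12 occur in only 1 each.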
erikbern.com
I made a New Year's resolution: every plot I make during 2018 will contain uncertainty estimates. Nine months in and I have learned a lot, so I put together a summary of some of the most useful methods.
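One widely used such method is the bootstrap. Here is a minimal sketch (synthetic data, my own code, not taken from the post) of drawing a bootstrapped 95% confidence band around a plotted mean:

```python
# Sketch of one common way to put uncertainty on a plot: resample the
# observations, recompute the per-point mean each time, and shade the
# 2.5th-97.5th percentile band. Data and confidence level are illustrative.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
x = np.arange(12)                                            # e.g. months
y = rng.normal(loc=5 + 0.3 * x, scale=2.0, size=(200, 12))   # 200 obs per point

# Bootstrap: resample rows with replacement, recompute each point's mean.
boot_means = np.array([
    y[rng.integers(0, 200, size=200)].mean(axis=0) for _ in range(1000)
])
lo, hi = np.percentile(boot_means, [2.5, 97.5], axis=0)

plt.plot(x, y.mean(axis=0), label="mean")
plt.fill_between(x, lo, hi, alpha=0.3, label="95% bootstrap CI")
plt.legend()
plt.show()
```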
iclr-blogposts.github.io
This blog post explores the interplay between the Data Processing Inequality (DPI), a cornerstone concept in information theory, and Function-Space Variational Inference (FSVI) within the context of Bayesian deep learning. The DPI governs the transformation and flow of information through stochastic processes, and its connection to FSVI is used to highlight FSVI's focus on Bayesian predictive posteriors rather than posteriors over parameter space. The post examines various forms of the DPI, including the KL-divergence-based DPI, and provides intuitive examples and detailed proofs. It also explores the equality case of the DPI to gain a deeper understanding. The connection between DPI and FSVI is then established, showing how FSVI can measure a predictive divergence independent of parameter symmetries. The post relates FSVI to knowledge distillation and label entropy regularization, highlighting the practical relevance of the theoretical concepts. Throughout, theoretical concepts are paired with intuitive explanations and mathematical rigor, offering a comprehensive understanding of these complex topics and insights for both theory and practice in machine learning.
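For reference, the KL-divergence form of the DPI that the post builds on can be stated as follows (notation mine, not quoted from the post): for a fixed channel \(p(y \mid x)\) applied to two distributions \(p(x)\) and \(q(x)\),

\[
D_{\mathrm{KL}}\big(p(x) \,\|\, q(x)\big) \;\ge\; D_{\mathrm{KL}}\big(p(y) \,\|\, q(y)\big),
\qquad p(y) = \int p(y \mid x)\, p(x)\, dx, \quad q(y) = \int p(y \mid x)\, q(x)\, dx.
\]

Pushing both distributions through the same stochastic map can only blur the distinction between them; equality holds exactly when the map loses none of that distinction, e.g. when \(Y\) is a sufficient statistic for telling \(p\) from \(q\).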