Explore >> Select a destination

You are here: blog.richmond.edu

alexanderetz.com (3.5 parsecs away)
[This post has been updated and turned into a paper to be published in AMPPS] Much of the discussion in psychology surrounding Bayesian inference focuses on priors. Should we embrace priors, or should we be skeptical? When are Bayesian methods sensitive to specification of the prior, and when do the data effectively overwhelm it? Should...

jaketae.github.io (4.3 parsecs away)
So far on this blog, we have looked at the mathematics behind distributions, most notably binomial, Poisson, and Gamma, with a little bit of exponential. These distributions are interesting in and of themselves, but their true beauty shines through when we analyze them under the light of Bayesian inference. In today's post, we first develop an intuition for conditional probabilities to derive Bayes' theorem. From there, we motivate the method of Bayesian inference as a means of understanding probability.
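(A quick reference, not part of the excerpt: the identity that derivation lands on is the usual statement of Bayes' theorem,

\[
P(A \mid B) = \frac{P(B \mid A)\,P(A)}{P(B)},
\]

which follows from writing the joint probability \(P(A \cap B)\) both as \(P(A \mid B)\,P(B)\) and as \(P(B \mid A)\,P(A)\) and solving for \(P(A \mid B)\).)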

inventingsituations.net (6.7 parsecs away)
Suppose you're building a widget that performs some simple action, which ends in either success or failure. You decide it needs to succeed 75% of the time before you're willing to release it. You run ten tests, and see that it succeeds exactly 8 times. So you ask yourself, is that really good enough? Do you believe the test...
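(A minimal sketch of one way to put a number on that question, assuming a uniform Beta(1, 1) prior on the success rate; the prior choice and the scipy-based calculation below are my own illustration, not necessarily the approach the linked post takes.)

```python
# With a Beta(1, 1) prior and binomial data (8 successes, 2 failures),
# the posterior over the success rate is Beta(1 + 8, 1 + 2) = Beta(9, 3).
from scipy.stats import beta

successes, failures = 8, 2
posterior = beta(1 + successes, 1 + failures)

# Posterior probability that the true success rate meets the 75% target.
p_good_enough = posterior.sf(0.75)
print(f"P(success rate >= 0.75 | data) = {p_good_enough:.3f}")
```

With only ten trials the posterior is wide, so this probability lands only modestly above one half: seeing 8 successes out of 10 is weak evidence either way about a 75% target.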

scorpil.com (24.0 parsecs away)
In Part One of the "Understanding Generative AI" series, we delved into Tokenization - the process of dividing text into tokens, which serve as the fundamental units of information for neural networks. These tokens are crucial in shaping how AI interprets and processes language. Building upon this foundational knowledge, we are now ready to explore Neural Networks - the cornerstone technology underpinning all Artificial Intelligence research. A Short Look into the History: Neural Networks, as a technology, have their roots in the 1940s and 1950s.