aurimas.eu
a.k.a. why you should (not?) use uninformative priors in Bayesian A/B testing.
errorstatistics.com
Georgi Georgiev, author of "Statistical Methods in Online A/B Testing", founder of Analytics-Toolkit.com, and statistics instructor at CXL Institute: In online experimentation, a.k.a. online A/B testing, one is primarily interested in estimating if and how different user experiences affect key business metrics such as average revenue per user. A trivial example would be to determine if...
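The excerpt above sketches the basic estimation question of A/B testing. As an illustration only, using simulated revenue data and a standard Welch's t-test (the excerpt does not name a method), a minimal comparison of average revenue per user between two experiences might look like this:

```python
import numpy as np
from scipy import stats

# A hedged sketch of the kind of comparison the excerpt describes:
# estimating whether a variant changes average revenue per user (ARPU).
# The data below are simulated, not taken from the linked article.
rng = np.random.default_rng(42)
control = rng.exponential(scale=5.0, size=2_000)    # hypothetical ARPU draws, variant A
treatment = rng.exponential(scale=5.4, size=2_000)  # hypothetical ARPU draws, variant B

# Welch's t-test: does mean revenue differ between the two experiences?
t_stat, p_value = stats.ttest_ind(treatment, control, equal_var=False)
lift = treatment.mean() - control.mean()
print(f"estimated lift: {lift:.3f}, p-value: {p_value:.4f}")
```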
easystats.github.io
TL;DR: bayestestR currently uses an 89% threshold by default for Credible Intervals (CIs). Should we change that? If so, to what? Join the discussion here. Magical numbers, or conventional thresholds, have a bad press in statistics, and there are many of them: for instance, .05 (for the p-value), or the 95% range for the Confidence Interval (CI). Indeed, why 95 and not 94 or 90? One issue is that traditional confidence intervals are often interpreted as a description of the uncertainty surrounding a parameter's value.
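The bayestestR excerpt turns on what a credible-interval default actually changes. As a minimal sketch (in Python rather than R, and with an invented Beta posterior, neither taken from the linked post), here is how an equal-tailed credible interval narrows or widens with the chosen level:

```python
import numpy as np

# A minimal sketch, not bayestestR itself: an equal-tailed credible interval
# computed from posterior draws, at the 89% vs. 95% level.
# The Beta(12, 90) posterior below is an invented example.
rng = np.random.default_rng(0)
posterior = rng.beta(12, 90, size=10_000)  # hypothetical posterior draws

def credible_interval(draws, level=0.89):
    """Equal-tailed credible interval covering `level` posterior probability."""
    tail = (1 - level) / 2
    return np.quantile(draws, [tail, 1 - tail])

print(credible_interval(posterior, level=0.89))
print(credible_interval(posterior, level=0.95))  # wider interval, by construction
```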
www.usertesting.com
We asked some of the most respected senior research leaders for their recommendations on successfully integrating research with product development as early as possible.