blog.analytics-toolkit.com

www.analytics-toolkit.com
Adjust p-values from multiple significance tests to control the Family-Wise Error Rate (FWER) or False Discovery Rate (FDR) in cases when multiple test metrics can lead to action. Suitable for A/B testing practitioners.
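The two corrections mentioned above can be illustrated with a small, self-contained sketch. This is not the tool's actual implementation; the function names and example p-values are illustrative. Bonferroni controls the FWER by scaling every p-value by the number of tests; Benjamini-Hochberg controls the FDR with a step-up adjustment.

```python
def bonferroni(pvals):
    """FWER control: multiply each p-value by the number of tests, cap at 1."""
    n = len(pvals)
    return [min(p * n, 1.0) for p in pvals]

def benjamini_hochberg(pvals):
    """FDR control: Benjamini-Hochberg step-up adjustment of p-values."""
    n = len(pvals)
    order = sorted(range(n), key=lambda i: pvals[i])  # indices by ascending p
    adjusted = [0.0] * n
    running_min = 1.0
    # Walk from the largest p-value down, enforcing monotone adjusted values.
    for offset, i in enumerate(reversed(order)):
        rank = n - offset  # 1-based rank of pvals[i] among sorted p-values
        running_min = min(running_min, pvals[i] * n / rank)
        adjusted[i] = running_min
    return adjusted

pvals = [0.005, 0.01, 0.03, 0.04]  # hypothetical per-metric p-values
print([round(p, 4) for p in bonferroni(pvals)])          # [0.02, 0.04, 0.12, 0.16]
print([round(p, 4) for p in benjamini_hochberg(pvals)])  # [0.02, 0.02, 0.04, 0.04]
```

Note how Bonferroni is the more conservative of the two: with four metrics tested, a raw p-value of 0.03 survives the FDR adjustment at the 0.05 level but not the FWER one.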
errorstatistics.com
Georgi Georgiev, author of "Statistical Methods in Online A/B Testing", founder of Analytics-Toolkit.com, statistics instructor at CXL Institute. In online experimentation, a.k.a. online A/B testing, one is primarily interested in estimating if and how different user experiences affect key business metrics such as average revenue per user. A trivial example would be to determine if...
easystats.github.io
TL;DR: bayestestR currently uses an 89% threshold by default for Credible Intervals (CI). Should we change that? If so, to what? Join the discussion here. Magical numbers, or conventional thresholds, have bad press in statistics, and there are many of them: for instance, .05 (for the p-value), or the 95% range for the Confidence Interval (CI). Indeed, why 95 and not 94 or 90? One of the issues is that traditional confidence intervals are often interpreted as a description of the uncertainty surrounding a parameter's value.
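For an equal-tailed credible interval computed from posterior draws, the threshold (89%, 90%, 95%) is just a choice of quantile pair, which is why the convention is so easy to vary. A minimal sketch, assuming Gaussian stand-in draws rather than a real posterior; the helper below is illustrative, not bayestestR's implementation:

```python
import random

def credible_interval(samples, level=0.89):
    """Equal-tailed credible interval: cut (1 - level)/2 from each tail."""
    s = sorted(samples)
    tail = (1.0 - level) / 2.0
    lo = s[int(tail * len(s))]
    hi = s[int((1.0 - tail) * len(s)) - 1]
    return lo, hi

random.seed(0)
# Stand-in for posterior draws of a parameter (here: mean 0, sd 1).
draws = [random.gauss(0.0, 1.0) for _ in range(100_000)]

print(credible_interval(draws, 0.89))  # roughly (-1.60, 1.60)
print(credible_interval(draws, 0.95))  # roughly (-1.96, 1.96)
```

Changing `level` from 0.89 to 0.95 only moves which order statistics are reported, which is the sense in which the default is a pure convention.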
research.google
Posted by Ming-Wei Chang and Kelvin Guu, Research Scientists, Google Research. Recent advances in natural language processing have largely built upon...