www.fharrell.com
Observational data from electronic health records may contain biases that large sample sizes do not overcome. Moderate confounding by indication may render an infinitely large observational study less useful than a small randomized trial for estimating relative treatment effectiveness.
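A minimal R sketch (not taken from the post) of why confounding by indication resists large samples: here severity drives both treatment assignment and outcome, so the naive observational estimate stays biased no matter how large n gets, while a small randomized trial is centered on the true effect. The data-generating model and effect sizes are illustrative assumptions only.

```r
# Illustrative sketch: confounding by indication does not shrink with n.
# Sicker patients (higher x) are more likely to be treated and also have
# worse outcomes, so a naive comparison mixes the two effects.
set.seed(1)

sim_obs <- function(n, true_effect = -1) {
  x  <- rnorm(n)                                 # severity (confounder)
  tr <- rbinom(n, 1, plogis(1.5 * x))            # treatment by indication
  y  <- 2 * x + true_effect * tr + rnorm(n)      # outcome
  coef(lm(y ~ tr))["tr"]                         # naive estimate, ignoring x
}

sim_rct <- function(n, true_effect = -1) {
  x  <- rnorm(n)
  tr <- rbinom(n, 1, 0.5)                        # randomized assignment
  y  <- 2 * x + true_effect * tr + rnorm(n)
  coef(lm(y ~ tr))["tr"]
}

# Observational estimate stays biased as n grows; a small RCT is noisy
# but centered on the true effect of -1.
sapply(c(1e3, 1e4, 1e5), sim_obs)
mean(replicate(200, sim_rct(200)))
```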
errorstatistics.com
Stephen Senn, Head of Competence Center for Methodology and Statistics (CCMS), Luxembourg Institute of Health. Twitter: @stephensenn. Being a statistician means never having to say you are certain. A recent discussion of randomised controlled trials [1] by Angus Deaton and Nancy Cartwright (D&C) contains much interesting analysis but also, in my opinion, does not escape rehashing...
www.rdatagen.net
Simulation can be super helpful for estimating power or sample size requirements when the study design is complex. This approach has some advantages over an analytic one (i.e., one based on a formula), particularly the flexibility it affords in setting up the specific assumptions of the planned study, such as time trends, patterns of missingness, or effects of different levels of clustering. A downside is certainly the complexity of writing the code, as well as the computation time, which can be a bit painful. My goal here is to show that at least writing the code need not be overwhelming.
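As a companion to that point, here is a minimal sketch of simulation-based power estimation in base R, assuming a simple two-arm trial with a standardized effect of 0.4 analyzed with a t-test; the linked post handles far more complex designs, and all parameters below are illustrative assumptions.

```r
# Minimal simulation-based power estimate for a two-arm trial
# (assumed effect size 0.4 SD, n = 100 per arm, alpha = 0.05).
set.seed(123)

sim_trial <- function(n_per_arm, effect = 0.4) {
  y0 <- rnorm(n_per_arm, mean = 0, sd = 1)       # control arm
  y1 <- rnorm(n_per_arm, mean = effect, sd = 1)  # treatment arm
  t.test(y1, y0)$p.value < 0.05                  # TRUE if the trial rejects H0
}

# Estimated power = proportion of simulated trials that detect the effect
power_est <- mean(replicate(1000, sim_trial(n_per_arm = 100)))
power_est

# Sanity check against the analytic answer available for this simple case
power.t.test(n = 100, delta = 0.4, sd = 1)$power
```

The same loop structure carries over to complex designs: swap the data-generating lines and the analysis call, and the power estimate is still just the rejection proportion across replications.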
poissonisfish.com
Some of the most fundamental functions in R, in my opinion, are those that deal with probability distributions. Whenever you compute a P-value you rely on a probability distribution, and there are many types out there. In this exercise I will cover four: Bernoulli, Binomial, Poisson, and Normal distributions. Let me begin with some theory first: Bernoulli...
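For quick reference, these are the base R d/p/q/r functions for the four distributions named above (the Bernoulli is just a Binomial with size = 1); the parameter values are arbitrary examples, not taken from the post.

```r
# Base R probability functions for the four distributions
dbinom(1, size = 1, prob = 0.3)   # Bernoulli: P(X = 1) with p = 0.3
dbinom(4, size = 10, prob = 0.3)  # Binomial: P(X = 4) in 10 trials
ppois(2, lambda = 1.5)            # Poisson: P(X <= 2) with rate 1.5
pnorm(1.96)                       # Normal: P(Z <= 1.96), about 0.975
qnorm(0.975)                      # Normal quantile, about 1.96
rnorm(5, mean = 0, sd = 1)        # five random draws from a standard Normal
```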