R package “statcheck”: Extract statistics from articles and recompute p values (Epskamp & Nuijten, 2016)
Conclusions in experimental psychology are often the result of null hypothesis significance testing. Unfortunately, there is evidence that roughly half of all published empirical psychology articles contain at least one inconsistent p-value, and around one in seven contain a grossly inconsistent p-value that makes a non-significant result seem significant, or vice versa. These reporting errors are often in line with the researchers' expectations, which means they introduce systematic bias. To gauge the prevalence of reporting errors, and to give researchers a tool to check their own work before submission, we created the R package statcheck (Epskamp & Nuijten, 2016). This package automatically extracts statistics from articles and recomputes the corresponding p-values.
For more information about the prevalence of reporting errors in psychology and the validity of statcheck, see our paper (Nuijten, Hartgerink, Van Assen, Epskamp, & Wicherts, 2016).
For a quick installation manual, see the quick install page.
For detailed instructions for the installation and use of statcheck, see the manual.
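As a minimal sketch of a typical workflow (the exact column names in the output may differ across statcheck versions), you can feed statcheck a string of APA-style reported statistics and let it recompute the p-value, or point it at a directory of PDF files:

```r
# One-time installation from CRAN
install.packages("statcheck")
library(statcheck)

# Check a snippet of APA-style reported statistics.
# statcheck parses the test statistic and degrees of freedom,
# recomputes the p-value, and flags (gross) inconsistencies.
txt <- "The effect was significant, t(28) = 2.20, p = .036."
result <- statcheck(txt)
print(result)

# Scan all PDFs in a folder (path is a placeholder):
# results <- checkdir("path/to/articles")
```

For t(28) = 2.20, the recomputed two-tailed p-value is approximately .036, so this example would be flagged as consistent.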
Epskamp, S., & Nuijten, M. B. (2016). statcheck: Extract statistics from articles and recompute p values. R package version 1.2.2. Retrieved from http://CRAN.R-project.org/package=statcheck
Nuijten, M. B., Hartgerink, C. H. J., van Assen, M. A. L. M., Epskamp, S., & Wicherts, J. M. (2016). The prevalence of statistical reporting errors in psychology (1985-2013). Behavior Research Methods, 48(4), 1205-1226. doi:10.3758/s13428-015-0664-2