R package “statcheck”: Extract statistics from articles and recompute p values (Epskamp & Nuijten, 2016)


Conclusions in experimental psychology are often the result of null hypothesis significance testing. Unfortunately, there is evidence that roughly half of all published empirical psychology articles contain at least one inconsistent p value, and around one in eight articles contain a grossly inconsistent p value that makes a non-significant result seem significant, or vice versa. Often these reporting errors are in line with the researchers' expectations, which means they introduce systematic bias (Nuijten, Hartgerink, van Assen, Epskamp, & Wicherts, 2016). To get an idea of the prevalence of reporting errors, and as a tool for checking your own work before submitting, we created the R package statcheck (Epskamp & Nuijten, 2016). This package can be used to automatically extract statistics from articles and recompute p values.
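As a rough sketch of that workflow, the snippet below runs statcheck's core function on a short text string (column names and exact behavior may differ slightly across package versions; the example statistic is made up for illustration):

```r
library(statcheck)

# statcheck() takes a character vector, extracts APA-style results
# (e.g. "t(28) = 2.20, p = .02") with regular expressions, and recomputes
# the p value from the test statistic and degrees of freedom.
txt <- "The effect was significant, t(28) = 2.20, p = .02."
res <- statcheck(txt)
res
```

For t(28) = 2.20 the recomputed two-tailed p value is about .036, so the reported p = .02 would be flagged as inconsistent; it would not count as a gross inconsistency, since both values fall below .05.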

For more information about the prevalence of reporting errors in psychology and the validity of statcheck, see our paper (Nuijten et al., 2016).

Quick Install

For a quick installation manual, see the quick install page.


For detailed instructions for the installation and use of statcheck, see the manual.
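In brief, installation and a typical run look like this (a sketch: the file and folder names are placeholders, and checkPDF() relies on statcheck's PDF-to-text conversion dependencies described in the manual):

```r
# Install from CRAN and load the package
install.packages("statcheck")
library(statcheck)

# Check all APA-reported statistics in a single PDF article
res <- checkPDF("my_article.pdf")

# Or check every article in a folder at once
res <- checkdir("~/articles")
```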

Web App

If you are unfamiliar with R, you can also make use of the point-and-click web app, where you can easily upload an article and download the statcheck results as a CSV file.

Note that the statcheck web app only offers the default options (so, for instance, no automated one-tailed test detection). Many thanks to Sean Rife for programming the web app!
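Running statcheck locally does expose such options. For example, one-tailed test detection can be switched on via an argument to statcheck() (the argument name below matches version 1.2.2; consult ?statcheck for your installed version):

```r
library(statcheck)

# With OneTailedTxt = TRUE, statcheck looks for phrases like "one-tailed"
# near a result and recomputes the p value accordingly, so a correctly
# reported one-sided test is not flagged as inconsistent.
res <- statcheck("t(28) = 2.20, p = .018, one-tailed", OneTailedTxt = TRUE)
```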


Epskamp, S., & Nuijten, M. B. (2016). statcheck: Extract statistics from articles and recompute p values. R package version 1.2.2.

Nuijten, M. B., Hartgerink, C. H. J., van Assen, M. A. L. M., Epskamp, S., & Wicherts, J. M. (2016). The prevalence of statistical reporting errors in psychology (1985-2013). Behavior Research Methods, 48(4), 1205-1226. DOI: 10.3758/s13428-015-0664-2