New Preprint: Data Sharing & Statistical Inconsistencies

We just published the preprint of our new study “Journal Data Sharing Policies and Statistical Reporting Inconsistencies in Psychology” at https://osf.io/preprints/psyarxiv/sgbta.

In this paper, we report three independent studies investigating whether data sharing is related to fewer statistical reporting inconsistencies in a paper. Overall, we found no relationship between data sharing and reporting inconsistencies. However, we did find that journal data sharing policies are highly effective in promoting data sharing (see the figure below).

[Figure: effectiveness of journals' open data policies in promoting data sharing]

We argue that open data is essential for improving the quality of psychological science, and we discuss ways to detect and reduce reporting inconsistencies in the literature.

statcheck 1.2.2 now on CRAN & statcheck manual on RPubs

The new statcheck 1.2.2* is now on CRAN!

Main updates:

  • Improved the regular expressions so that statcheck no longer mistakes unusual statistics with subscripts for chi-square tests
  • You can now choose whether to count “p = .000” as incorrect (in the previous version this was the default)
  • The statcheck plot function now renders a plot in APA style (thanks to John Sakaluk for writing this code!)
  • checkPDF() and checkHTML() now open a pop-up window to choose a file when no file is specified (see the short example below)
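
If you want to try the new version right away, here is a minimal sketch of the updated workflow. The function names (statcheck(), checkPDF(), plot()) come from the package itself, but the exact arguments may differ slightly between versions, so treat this as an illustration rather than the definitive interface:

    library(statcheck)

    # Calling checkPDF() without a file argument now opens a pop-up window
    # in which you can select the PDF you want to check
    res <- checkPDF()

    # You can also feed statcheck a piece of APA-formatted text directly
    res <- statcheck("The effect was significant, t(28) = 2.20, p = .03.")

    # The plot method now renders the results in APA style
    plot(res)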

For the full list of changes, see the History page on GitHub.

Besides the updated package, I also created a detailed manual with instructions for installing and using statcheck, including many examples and an explanation of the output. You can find the manual on RPubs here.

* For the people who actually know what this numbering stands for: you may have noticed that the previous version on CRAN was version 1.0.2, so this seems like a weird step. It is. It’s because at first I had no idea what these numbers stood for (MAJOR.MINOR.PATCH), so I was just adding numbers at random. Actually the previous version should have been 1.1.x, which means that I’m now at 1.2.x. The last two PATCHES were because I messed up the R CMD check and had to fix some last minute things 🙂

How can editors help prevent statistical errors? My new essay.

March 2016

There are too many statistical inconsistencies in published papers, and unfortunately they show a systematic bias towards reporting statistical significance.

Statistical reporting errors are not the only problem science is currently facing, but they do seem to be one that is relatively easy to solve. I believe journal editors can play an important role in changing the system, slowly but steadily decreasing statistical errors and improving scientific practice.

Nuijten, M. B. (2016). Preventing statistical errors in scientific journals. European Science Editing, 42(1), 8-10.

You can find the post-print here.

“The prevalence of statistical reporting errors in psychology (1985-2013)” published at Behavior Research Methods

October 2015

In this paper, we use the automated procedure statcheck to extract over 250,000 p-values from 30,000 psychology articles and check whether they are consistent with the reported test statistics and degrees of freedom.

We find that half of the articles contain at least one inconsistency, and 1 in 8 articles contains a gross inconsistency that affects the statistical conclusion. The prevalence of inconsistencies seems to be stable over time.
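
To give a concrete idea of what counts as an inconsistency, here is a simplified illustration of the logic in R (a sketch only, not statcheck's actual code, and the reported result is a made-up example): the p-value is recomputed from the reported test statistic and degrees of freedom and then compared with the reported p-value.

    # Take a hypothetical reported result, e.g. t(28) = 2.20, p = .03
    reported_t <- 2.20
    df         <- 28
    reported_p <- .03

    # Recompute the two-tailed p-value from the test statistic and df
    recomputed_p <- 2 * pt(abs(reported_t), df, lower.tail = FALSE)

    # Inconsistency: reported_p does not match recomputed_p (allowing for rounding)
    # Gross inconsistency: the mismatch also changes whether the result is
    # significant at alpha = .05
    recomputed_p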

The article is Open Access and available here: http://link.springer.com/article/10.3758/s13428-015-0664-2

“The prevalence of statistical reporting errors in psychology (1985-2013)” accepted for publication at Behavior Research Methods

September 2015

In this manuscript, we use the R package statcheck (by Sacha Epskamp and me) to examine the prevalence of statistical reporting errors in eight major psychology journals from 1985 to 2013. We find that half of all articles contain at least one inconsistency, and 1 in 8 articles contains a grossly inconsistent p-value that could have changed the statistical conclusion. We find no evidence that the prevalence of inconsistencies is increasing over the years.

You can find the post-print here: PDF

The Replication Paradox is now published!

June 2015

My latest paper “The replication paradox: Combining studies can decrease accuracy of effect size estimates” is now published in Review of General Psychology. You can find a postprint of the paper here. The full reference is:

Nuijten, M. B., Van Assen, M. A. L. M., Veldkamp, C. L. S., & Wicherts, J. M. (2015). The replication paradox: Combining studies can decrease accuracy of effect size estimates. Review of General Psychology, 19(2), 172-182. http://dx.doi.org/10.1037/gpr0000034

The R package statcheck is now on CRAN!

November 2014

Statcheck is an R package that Sacha Epskamp and I wrote together. Statcheck extracts statistics and recomputes the corresponding p-values. It is extremely handy for checking your own papers for accidental slip-ups in the results section, but it can also be used to estimate the prevalence of errors across a wide range of scientific articles. So far, statcheck can only read results that are reported exactly in APA style. Note that in order to scan PDF files, you need to have the program pdf-to-txt installed and included in your path variable. Furthermore, if you want to run statcheck on a Mac, you also need to install XQuartz. For more info, see the project page.
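
As a quick start, here is a minimal sketch of installing and running statcheck; the file and folder names are just placeholders, and checkPDF(), checkHTML(), and checkPDFdir() are the package's helpers for scanning articles:

    # Install and load statcheck from CRAN
    install.packages("statcheck")
    library(statcheck)

    # Check APA-style results in a character string
    txt <- "The effect was significant, t(58) = 2.75, p < .01."
    statcheck(txt)

    # Check individual articles (PDFs require pdf-to-txt, see above)
    checkPDF("my_article.pdf")
    checkHTML("my_article.html")

    # Or scan a whole folder of articles at once
    checkPDFdir("~/articles/")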