In Press: Practical Tools and Strategies for Researchers to Increase Replicability

I wrote an invited review for Developmental Medicine & Child Neurology about “Practical tools and strategies for researchers to increase replicability”.

Problems with replicability have been widely discussed in recent years, especially in psychology. By now, many promising solutions have been proposed, but my sense is that researchers are sometimes a bit overwhelmed by all the possibilities.

My goal in this review was to list some of the current recommendations that can be easily implemented. Not every solution is feasible for every project, so my advice is: copy best practices from other fields, see what works on a case-by-case basis, and improve your research step by step.

The preprint can be found here: https://psyarxiv.com/emyux.


New Preprint: Effect Sizes, Power, and Biases in Intelligence Research

Our new meta-meta-analysis on intelligence research is now online as a preprint at https://psyarxiv.com/ytsvw.

We analyzed 131 meta-analyses in intelligence research to investigate effect sizes, power, and patterns of bias. We find a typical effect of r = .26 and a median sample size of 60.

The median power seems low (see the figure below), and we find evidence for small-study effects, possibly indicating overestimated effects. We don’t find evidence for a US effect, a decline or early-extremes effect, or citation bias.
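To get a feel for what these numbers imply, here is a rough power calculation of my own (an illustration, not an analysis from the preprint), using the pwr package: a two-sided test of the typical effect (r = .26) with the median sample size (n = 60) has power of roughly .50.

```r
# Rough illustration (not an analysis from the preprint): power to detect
# the typical effect with the median sample size at alpha = .05.
library(pwr)  # install.packages("pwr") if needed

pwr.r.test(n = 60, r = 0.26, sig.level = 0.05, alternative = "two.sided")
# power comes out at roughly 0.5: about a coin flip's chance of detecting
# a typical effect in a typical primary study
```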

[Figure: median power per meta-analysis type and overall, based on random-effects estimates]

Comments are very welcome and can be posted on the PubPeer page https://pubpeer.com/publications/9F209A983618EFF9EBED07FDC7A7AC.


New Preprint: statcheck’s Validity is High

In our new preprint we investigated the validity of statcheck. Our main conclusions were:

  • statcheck’s sensitivity, specificity, and overall accuracy are very high. The exact numbers depend on several choices & assumptions, but fall in the following ranges (a sketch of how these metrics are computed follows this list):
    • sensitivity: 85.3% – 100%
    • specificity: 96.0% – 100%
    • accuracy: 96.2% – 99.9%
  • The prevalence of statistical corrections (e.g., Bonferroni or Greenhouse-Geisser corrections) seems to be higher than we initially estimated
  • But: the presence of these corrections doesn’t explain the high prevalence of reporting inconsistencies in psychology
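For readers less familiar with these terms, here is a minimal sketch with made-up counts (not the data from the preprint) of how sensitivity, specificity, and accuracy follow from comparing statcheck’s verdicts with a manual check:

```r
# Made-up counts (NOT the data from the preprint), comparing statcheck's
# verdict with a manual check of the same results.
tp <- 90   # truly inconsistent result, flagged by statcheck
fn <- 10   # truly inconsistent result, missed by statcheck
tn <- 880  # consistent result, correctly not flagged
fp <- 20   # consistent result, wrongly flagged

sensitivity <- tp / (tp + fn)                    # share of true inconsistencies that get flagged
specificity <- tn / (tn + fp)                    # share of consistent results left alone
accuracy    <- (tp + tn) / (tp + fn + tn + fp)   # overall share classified correctly

round(c(sensitivity = sensitivity, specificity = specificity, accuracy = accuracy), 3)
```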

We conclude that statcheck’s validity is high enough to recommend it as a tool in peer review, self-checks, or meta-research.


New Preprint: Data Sharing & Statistical Inconsistencies

We just published the preprint of our new study “Journal Data Sharing Policies and Statistical Reporting Inconsistencies in Psychology” at https://osf.io/preprints/psyarxiv/sgbta.

In this paper, we ran three independent studies to investigate whether data sharing is related to fewer statistical reporting inconsistencies in a paper. Overall, we found no relationship between data sharing and reporting inconsistencies. However, we did find that journal policies on data sharing are extremely effective in promoting data sharing (see the figure below).

[Figure: effectiveness of the open data policy in promoting data sharing]

We argue that open data is essential in improving the quality of psychological science, and we discuss ways to detect and reduce reporting inconsistencies in the literature.

statcheck 1.2.2 now on CRAN & statcheck manual on RPubs

The new statcheck 1.2.2* is now on CRAN!

Main updates:

  • Improved the regular expressions so that statcheck no longer mistakes statistics with subscripts for chi-square results
  • You can now choose whether to count “p = .000” as incorrect (this was the default in the previous version)
  • The statcheck plot function now renders a plot in APA style (thanks to John Sakaluk for writing this code!)
  • A pop-up window now lets you choose a file when no file is specified in “checkPDF()” or “checkHTML()” (see the quick sketch below)
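To give an impression of how this looks in practice, here is a quick sketch; the basic calls are standard statcheck usage, but check the manual below for the definitive syntax of the new options.

```r
library(statcheck)

# Extract and check an APA-style result reported in plain text
txt <- "The effect was significant, t(28) = 2.20, p = .036."
res <- statcheck(txt)
res

# Calling checkPDF() or checkHTML() without a file argument should now
# open a pop-up window in which you can select the file to check
res_pdf  <- checkPDF()
res_html <- checkHTML()

# Plot the results; as of 1.2.2 this renders in APA style
plot(res)
```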

For the full list of changes, see the History page on GitHub.

Besides the updated package, I also created a detailed manual with instructions for installing and using statcheck, including many examples and an explanation of the output. You can find the manual on RPubs here.

* For the people who actually know what this numbering stands for: you may have noticed that the previous version on CRAN was 1.0.2, so this seems like a weird step. It is. At first I had no idea what these numbers stood for (MAJOR.MINOR.PATCH), so I was just adding numbers at random. The previous version should actually have been 1.1.x, which means that I’m now at 1.2.x. The last two patches were because I messed up the R CMD check and had to fix some last-minute things 🙂

How can editors help prevent statistical errors? My new essay.

March 2016

There are too many statistical inconsistencies in published papers, and unfortunately they show a systematic bias towards reporting statistical significance.

Statistical reporting errors are not the only problem we are currently facing in science, but they do seem to be one that is relatively easy to solve. I believe journal editors can play an important role in changing the system, slowly but steadily decreasing statistical errors and improving scientific practice.

Nuijten, M. B. (2016). Preventing statistical errors in scientific journals. European Science Editing, 42(1), 8–10.

You can find the post-print here.