Science Insider: statcheck’s Accuracy is High

In the latest Science Insider piece, written by Dalmeet Singh Chawla, I argue that statcheck does exactly what it is supposed to do: check the internal consistency of APA-reported NHST results.

Read the entire piece here.
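
For readers who have not seen such a consistency check in practice, here is a minimal sketch using the statcheck R package. The example string and interpretation are mine, not from the article: statcheck extracts the test statistic, degrees of freedom, and reported p-value from an APA-formatted result and recomputes the p-value to see whether they match.

```r
# Minimal sketch: check an APA-reported t-test result for internal consistency.
# install.packages("statcheck")   # CRAN package; uncomment if not yet installed
library(statcheck)

# Hypothetical snippet: t(28) = 2.20 corresponds to p ~ .036 (two-tailed),
# so reporting "p < .01" is internally inconsistent.
txt <- "The effect was significant, t(28) = 2.20, p < .01."

res <- statcheck(txt)

# The returned data frame contains (among other things) the reported and the
# recomputed p-value, plus a flag for results where the two do not match.
print(res)
```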


Nature Comment: Share Analysis Plans and Results

Nature published a series of comments all focused on ways to fix our statistics. In my comment, I argue that the main problem is the flexibility in data analysis, combined with incentives to find significant results. A possible solution would be to preregister analysis plans, and to share data.

Read the entire piece here; the full set of proposed solutions is:

  • Jeff Leek: Adjust for human cognition
  • Blake McShane, Andrew Gelman, David Gal, Christian Robert, and Jennifer Tackett: Abandon statistical significance
  • David Colquhoun: State false-positive risk, too
  • Michèle Nuijten: Share analysis plans and results
  • Steven Goodman: Change norms from within

 

Illustration by David Parkins (Nature)

New Preprint: statcheck’s Validity is High

In our new preprint we investigated the validity of statcheck. Our main conclusions were:

  • statcheck’s sensitivity, specificity, and overall accuracy are very high. The exact numbers depended on several choices and assumptions, but fell within the following ranges:
    • sensitivity: 85.3% – 100%
    • specificity: 96.0% – 100%
    • accuracy: 96.2% – 99.9%
  • The prevalence of statistical corrections (e.g., Bonferroni or Greenhouse-Geisser corrections) seems to be higher than we initially estimated
  • However, the presence of these corrections does not explain the high prevalence of reporting inconsistencies in psychology
We conclude that statcheck’s validity is high enough to recommend it as a tool in peer review, self-checks, or meta-research.
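
As an illustration of what such a self-check might look like, here is a hedged sketch using the statcheck R package, assuming the manuscript is available as a PDF (the file name is hypothetical):

```r
# Hedged sketch of a pre-submission self-check with the statcheck R package.
library(statcheck)

# checkPDF() converts the PDF to text, extracts APA-reported NHST results,
# and recomputes each p-value from the test statistic and degrees of freedom.
res <- checkPDF("my_manuscript.pdf")   # hypothetical file name

# Flagged rows are results whose reported p-value is inconsistent with the
# p-value recomputed from the test statistic and degrees of freedom.
print(res)
```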


statcheck Runner-Up for Sentinel Award

Publons announced the winner of the Sentinel Award for outstanding advocacy, innovation, or contribution to scholarly peer review, and I am proud to announce that statcheck was crowned runner-up!

I am honored that the judges considered statcheck a useful contribution to the peer review system. One of the things I hope to achieve is that all psychology journals will consider it standard practice to quickly “statcheck” a paper for statistical inconsistencies, so that inconsistent results do not end up in print.

A very warm congratulations to the winner of the award: Irene Hames. Irene spent most of her career improving the quality of peer review, and it is great that her work is recognized in this way! Also congratulations to the rest of the Sentinel Award nominees: Retraction Watch, the American Geophysical Union, ORCiD, F1000Research, the Committee on Publication Ethics (COPE), and Kyle Martin and Gareth Fraser.

For more information about the award, the winner, and the finalists, see this page.

The Guardian’s Science Weekly Podcast feat. statcheck

This week the Guardian’s Science Weekly podcast focuses on statistical malpractice and fraud in science. We talk about the role of statcheck in detecting statistical inconsistencies, and discuss the causes and implications of seemingly innocent rounding errors.

This podcast also offers fascinating insights from consultant anaesthetist John Carlisle about the detection of data fabrication, and from David Spiegelhalter, president of the Royal Statistical Society, about the dangers of statistical malpractice.

statcheck Shortlisted for Publons Sentinel Award!

Proud to announce that I’ve been shortlisted for the Publons Sentinel Award for my work on statcheck. The Sentinel Award recognizes outstanding advocacy, innovation, or contribution to scholarly peer review.

At this point, statcheck is used in the peer review process of two major psychology journals (Psychological Science and the Journal of Experimental Social Psychology), and an increasing number of journals recommend running statcheck on your own manuscript before submitting it.
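
For journals or reviewers screening many submissions at once, a rough sketch of a batch check with the statcheck R package could look like this (the folder name is hypothetical):

```r
# Hedged sketch: run statcheck over every PDF in a folder of submissions.
library(statcheck)

res <- checkPDFdir("submissions/")   # hypothetical folder of manuscript PDFs
print(res)
```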

For more information about the award and the other great candidates, see this page.


New Preprint: Data Sharing & Statistical Inconsistencies

We just published the preprint of our new study “Journal Data Sharing Policies and Statistical Reporting Inconsistencies in Psychology” at https://osf.io/preprints/psyarxiv/sgbta.

In this paper, we ran three independent studies to investigate whether data sharing is related to fewer statistical inconsistencies in a paper. Overall, we found no relationship between data sharing and reporting inconsistencies. However, we did find that journal policies on data sharing are extremely effective in promoting data sharing (see the figure below).

Figure: effectiveness of journal open data policies in promoting data sharing.

We argue that open data is essential in improving the quality of psychological science, and we discuss ways to detect and reduce reporting inconsistencies in the literature.

Awarded a Campbell Methods Grant

I am honored to announce that Joshua R. Polanin and I were awarded a $20,000 methods grant from the Campbell Collaboration for the project “Verifying the Accuracy of Statistical Significance Testing in Campbell Collaboration Systematic Reviews Through the Use of the R Package statcheck”.

The grant is part of the Campbell Collaboration’s program to support innovative methods development in order to improve the quality of systematic reviews. It is great that we (and statcheck!) can be part of this effort.

For more information about the grant and the three other recipients, see their website here.