Young eScientist Award

In December 2020, Willem Sleegers and I were awarded the Young eScientist Award from the Netherlands eScience Center for our proposal to improve statcheck’s searching algorithm. Today marks the start of our collaboration with the eScience Center and we are very excited to get started!

In this project, we plan to extend statcheck’s search algorithm with natural language processing techniques, so that it can recognize more statistics than just those reported in perfect APA style (a current restriction). We hope that this extension will expand statcheck’s functionality beyond psychology, so that statistical errors in, e.g., biomedical and economics papers can also be detected and corrected.
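As a purely illustrative sketch of what that means in practice: statcheck currently relies on patterns that assume strict APA formatting, so reports with small deviations slip through. The loosened regular expression below (not statcheck’s actual implementation, which is written in R) tolerates extra spaces, a semicolon instead of a comma, and p-values with a leading zero; the NLP extension should handle this kind of variation far more generally.

```python
# Illustrative only: a loosened pattern that also matches t-test reports with
# small deviations from strict APA style. Not statcheck's actual code.
import re

T_TEST = re.compile(
    r"t\s*\(\s*(?P<df>\d+(?:\.\d+)?)\s*\)\s*=\s*(?P<value>-?\d+(?:\.\d+)?)"
    r"\s*[,;]\s*p\s*(?P<comparison>[=<>])\s*(?P<p>0?\.\d+)",
    re.IGNORECASE,
)

text = "The effect was significant, t (28)=2.20; p = 0.036, in both samples."
for match in T_TEST.finditer(text):
    print(match.groupdict())
# {'df': '28', 'value': '2.20', 'comparison': '=', 'p': '0.036'}
```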

More information about the award can be found here.


NWO Veni Grant for the 4-Step Robustness Check

I’m thrilled to announce that I won a €250,000 NWO Veni Grant for my 4-Step Robustness Check! For the next three years, I’ll be working on methods to assess and improve the robustness of psychological science.

To check the robustness of a study, you could replicate it in a new sample. However, in my 4-Step Robustness Check, you first verify whether the reported numbers in the original study are correct. If they are not, they are not interpretable and you cannot compare them to the results of your replication.

Specifically, I advise researchers to do the following:

  1. Check if there are visible errors in the reported numbers, for example by running a paper through my spellchecker for statistics: statcheck (see the sketch below this list)
  2. Reanalyze the data following the original strategy to see if this leads to the same numbers
  3. Check if the result is robust to alternative analytical choices
  4. Perform a replication study in a new sample

Figure: The 4-Step Robustness Check can be used to efficiently assess the robustness of results.
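To make step 1 concrete, here is a minimal sketch of the kind of consistency check involved: recompute the p-value from a reported test statistic and its degrees of freedom, and compare it to the reported p-value while allowing for rounding. This mirrors the idea behind statcheck, but the function and its rounding rule are simplified illustrations of my own, not statcheck’s actual code.

```python
# Minimal sketch of step 1: is a reported p-value consistent with its t and df?
from scipy import stats

def p_value_consistent(t, df, reported_p, two_tailed=True, decimals=3):
    """Return True if the reported p-value matches the one recomputed from
    t and df, assuming the reported value was rounded to `decimals` places."""
    recomputed = stats.t.sf(abs(t), df)
    if two_tailed:
        recomputed *= 2
    return abs(recomputed - reported_p) <= 0.5 * 10 ** (-decimals)

# Example: t(28) = 2.20 corresponds to a two-tailed p of about .036
print(p_value_consistent(2.20, 28, 0.036))  # True: internally consistent
print(p_value_consistent(2.20, 28, 0.010))  # False: p does not match t and df
```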

This 4-step check provides an efficient framework to assess whether a study’s findings are robust. Note that the first steps take far less time than a full replication and may already be enough to conclude that a result is not robust.

The proposed framework can also be used as an efficient checklist for researchers to improve the robustness of their own results:

  1. Check the internal consistency of your reported results
  2. Share your data and analysis scripts to facilitate reanalysis
  3. Conduct and report your own sensitivity analyses
  4. Write detailed methods sections and share materials to facilitate replication

Ultimately, I aim to create interactive, pragmatic, and evidence-based methods to improve and assess robustness, applicable to psychology and other fields.

I would like to wholeheartedly thank my colleagues, reviewers, and committee members for their time, feedback, and valuable insights. I’m looking forward to the next three years!

Seed Funding for COVID-19 Project

I am happy to announce that Robbie van Aert, Jelte Wicherts, and I received seed funding from the Herbert Simon Research Institute for our project to screen COVID-19 preprints for statistical inconsistencies.

Inconsistencies can distort conclusions, but even small inconsistencies negatively affect the reproducibility of a paper: it becomes unclear where a number came from. Statistical reproducibility is a basic requirement for any scientific paper.

We plan to check a random sample of COVID-19 preprints from medRxiv and bioRxiv for several types of statistical inconsistencies. For example, does a percentage match the accompanying fraction? Do the reported TP/TN/FP/FN rates match the reported sensitivity of a test?
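As a rough illustration of these two checks, a sketch is given below; the helper functions are hypothetical simplifications for this post, not our actual coding protocol.

```python
# Hypothetical helpers illustrating the two consistency checks mentioned above.

def percentage_matches(numerator, denominator, reported_pct, decimals=1):
    """Does the reported percentage match the underlying fraction, up to rounding?"""
    return abs(100 * numerator / denominator - reported_pct) <= 0.5 * 10 ** (-decimals)

def sensitivity_matches(tp, fn, reported_sensitivity, decimals=2):
    """Sensitivity = TP / (TP + FN); does it match the reported value, up to rounding?"""
    return abs(tp / (tp + fn) - reported_sensitivity) <= 0.5 * 10 ** (-decimals)

print(percentage_matches(45, 120, 37.5))   # True: 45/120 = 37.5%
print(sensitivity_matches(80, 20, 0.80))   # True: 80/(80+20) = 0.80
print(sensitivity_matches(80, 25, 0.80))   # False: 80/105 is about 0.76
```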

We have 3 main objectives:

  1. Post short reports with detected statistical inconsistencies underneath the preprint
  2. Assess the prevalence of statistical inconsistencies in COVID-19 preprints
  3. Compare the inconsistency rate in COVID-19 preprints with the inconsistency rate in similar preprints on other topics

We hypothesize that high time pressure may have led to a higher prevalence of statistical inconsistencies in COVID-19 preprints than in preprints on less time-sensitive topics.

We thank our colleagues at the Meta-Research Center for their feedback and help in developing the coding protocol.

See the full proposal here.

Awarded a Campbell Methods Grant

I am honored to announce that Joshua R. Polanin and I were awarded a $20,000 methods grant from the Campbell Collaboration for the project “Verifying the Accuracy of Statistical Significance Testing in Campbell Collaboration Systematic Reviews Through the Use of the R Package statcheck”.

The grant is part of the Campbell Collaboration’s program to support innovative methods development in order to improve the quality of systematic reviews. It is great that we (and statcheck!) can be a part of this effort.

For more information about the grant and the three other recipients, see their website here.