I am a PhD student at Tilburg University. My research focuses on meta-science, including topics such as replication, publication bias, statistical errors, and questionable research practices.
I am part of the Meta-Research Center at Tilburg University: http://metaresearch.nl.
I am honored to announce that Joshua R. Polanin and I were awarded a $20,000 methods grant from the Campbell Collaboration for the project “Verifying the Accuracy of Statistical Significance Testing in Campbell Collaboration Systematic Reviews Through the Use of the R Package statcheck”.
The grant is part of the Campbell Collaboration’s program to support innovative methods development in order to improve the quality of systematic reviews. It is great that we (and statcheck!) can be part of this effort.
For more information about the grant and the three other recipients, see their website here.
I’m very happy to present statcheck’s very own logo! With over 6000 downloads, over 6000 visits to the web app, and a running pilot in Psychological Science, I thought statcheck deserved one. Also, this might settle the upper case/lower case confusion once and for all: it’s not Statcheck, StatCheck, statCheck, or even STATCHECK, but:
The long-read in the Guardian today by Stephen Buranyi featured our work at the Meta-Research Center. Specifically, it focuses on the work of Chris Hartgerink and Marcel van Assen on detecting fabricated data, and how the development and use of statcheck played a role in their research.
Read the full article here.
I’m honored and proud to announce that Sacha Epskamp and I won the 2016 Leamer-Rosenthal Prize for Open Social Science! This award is an initiative of the Berkeley Initiative for Transparency in the Social Sciences (BITSS) and comes with a monetary prize.
We are invited to attend the 2016 BITSS annual meeting to receive our prize, along with seven other researchers and educators.
For more information about our nomination and the prize itself see: http://www.bitss.org/2016/12/02/sneak-peek-announcing-two-of-nine-winners-of-the-2016-leamer-rosenthal-prizes-for-open-social-science-michele-nuijten-and-sacha-epskamp/
Monya Baker wrote a nice article on statcheck in Nature, discussing the pros and cons of its use.
You can find the article here.
I’m very excited to announce that the statcheck web app created by Sean Rife and sponsored by Rackspace is online and ready for use! You can find the app at http://statcheck.io.
Below you can find a short interview (Dutch with English subs) in which I explain what the app does.
The new statcheck 1.2.2* is now on CRAN!
- Improved the regular expressions so that statcheck no longer mistakes unusual statistics with subscripts for chi-square results
- You can now choose whether “p = .000” should be counted as incorrect (in the previous version this was the default)
- The statcheck plot function now renders a plot in APA style (thanks to John Sakaluk for writing this code!)
- “checkPDF()” and “checkHTML()” now open a pop-up window to choose a file when no file is specified
For the full list of adaptations, see the History page on GitHub.
Besides the updated package, I also created a detailed manual with instructions for installing and using statcheck, including many examples and an explanation of the output. You can find the manual on RPubs here.
* For the people who actually know what this numbering stands for: you may have noticed that the previous version on CRAN was version 1.0.2, so this seems like a weird step. It is. It’s because at first I had no idea what these numbers stood for (MAJOR.MINOR.PATCH), so I was just adding numbers at random. Actually, the previous version should have been 1.1.x, which means that I’m now at 1.2.x. The last two PATCHES were because I messed up the R CMD check and had to fix some last-minute things 🙂
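For readers who want to try the package themselves, here is a minimal usage sketch. It assumes statcheck is installed from CRAN; the exact output columns may differ between versions, so treat this as an illustration rather than a reference:

```r
# install.packages("statcheck")  # once, from CRAN
library(statcheck)

# Check APA-style results reported in plain text
txt <- "The effect was significant, t(28) = 2.20, p = .04."
res <- statcheck(txt)

# res is a data frame containing, among other things, the extracted
# test statistic, the reported p-value, and the recomputed p-value
print(res)

# Check one or more PDF files; called without arguments,
# a pop-up window lets you pick a file (new in 1.2.2)
# checkPDF()
```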
Thanks to Felix Schönbrodt’s code to track CRAN R package downloads, I was able to see how often my packages statcheck and BayesMed were downloaded.
It turns out: quite a lot!
BayesMed: 9665 downloads since January 2014
statcheck: 4930 downloads since November 2014
Furthermore, it is quite clear when most academics are on holiday:
BayesMed is a package to perform a default Bayesian hypothesis test for mediation, correlation, and partial correlation. For more information, click here.
statcheck extracts statistical results from articles and recomputes the p-values. For more information, click here.
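The recomputation itself is conceptually simple: given a test statistic and its degrees of freedom, the p-value follows from the reference distribution. A base-R sketch of the underlying idea for a two-sided t-test (this is not statcheck’s internal code, just an illustration):

```r
# Suppose a paper reports: t(28) = 2.20, p < .05
t_value <- 2.20
df <- 28

# Recompute the two-sided p-value from the t distribution
p_recomputed <- 2 * pt(abs(t_value), df = df, lower.tail = FALSE)
round(p_recomputed, 3)  # approximately 0.036, consistent with p < .05
```

statcheck does this for every result it can extract from a paper and flags cases where the reported and recomputed p-values are inconsistent.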
There are too many statistical inconsistencies in published papers, and unfortunately they show a systematic bias towards reporting statistical significance.
Statistical reporting errors are not the only problem science currently faces, but they do seem relatively easy to solve. I believe journal editors can play an important role in changing the system, slowly but steadily decreasing statistical errors and improving scientific practice.
Nuijten, M. B. (2016). Preventing statistical errors in scientific journals. European Science Editing, 42(1), 8-10.
You can find the post-print here.
I wrote a guest post for Retraction Watch about the development of the R package “statcheck” that Sacha and I created, which automatically extracts statistical results and recomputes p-values.
Read it here: http://retractionwatch.com/2015/11/17/making-it-easier-and-more-automated-to-find-errors-a-guest-post-from-the-co-developer-of-statcheck/