Home

I am an Assistant Professor at Tilburg University. My research focuses on meta-science, including topics such as replication, publication bias, statistical errors, and questionable research practices.

I am part of the Meta-Research Center at Tilburg University: http://metaresearch.nl.

Contact
Email: m.b.nuijten@uvt.nl
Work phone: (+31) (0) 13 466 2053


Dr. Nuijten

Wednesday May 30, 2018, I successfully defended my PhD thesis, which means that I can now finally call myself Dr. Nuijten!

I thank my promotors Jelte Wicherts and Marcel van Assen for all their advice over the last 5 years, and my committee – Chris Chambers, Eric-Jan Wagenmakers, Rolf Zwaan, and Marjan Bakker – for their interesting (and fun!) questions.

My full thesis “Research on research: A meta-scientific study of problems and solutions in psychological science” can be found here.

New Preprint: Effect Sizes, Power, and Biases in Intelligence Research

Our new meta-meta-analysis on intelligence research is now online as a preprint at https://psyarxiv.com/ytsvw.

We analyzed 131 meta-analyses in intelligence research to investigate effect sizes, power, and patterns of bias. We find a typical effect of r = .26 and a median sample size of 60.

The median power seems low (see figure below), and we find evidence for small-study effects, possibly indicating overestimated effects. We don’t find evidence for a US effect, a decline effect, an early-extremes effect, or citation bias.

[Figure: median power per type of effect and overall, random-effects estimates]
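
To get a feel for these numbers, here is a rough, illustrative power calculation (not the preprint’s exact method): the approximate power of a two-sided test of the typical effect r = .26 at the median sample size N = 60, using the Fisher z transformation.

```python
# Rough illustration: approximate power to detect the typical effect
# (r = .26) at the median sample size (N = 60), two-sided alpha = .05,
# via the Fisher z transformation. Not the preprint's exact method.
from math import atanh, sqrt
from scipy.stats import norm

r, n, alpha = 0.26, 60, 0.05

z_effect = atanh(r) * sqrt(n - 3)  # expected z statistic under H1
z_crit = norm.ppf(1 - alpha / 2)   # two-sided critical value (~1.96)

# Power: probability that |Z| exceeds the critical value under H1
power = norm.sf(z_crit - z_effect) + norm.cdf(-z_crit - z_effect)
print(f"approximate power: {power:.2f}")  # ~0.52
```

In other words, a typical study in this literature would have roughly a coin flip’s chance of detecting the typical effect, consistent with the low median power we report.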

Comments are very welcome and can be posted on the PubPeer page https://pubpeer.com/publications/9F209A983618EFF9EBED07FDC7A7AC.

 

Talking Text & Data Mining at the European Commission

“The right to read is the right to mine”. That was the motto of yesterday’s meeting at the European Commission, where we discussed how new European copyright laws would affect text and data mining (TDM) research.

The new proposal would seriously impede the use of TDM by businesses: effectively, they would not have the right to mine content they already have legal access to, which is of course very strange.

The proposal does include an exemption for non-commercial research organizations – which includes universities, and with that, my work – but this is still not sufficient. For one, it would prevent scientists from commercializing any breakthroughs based on TDM research. On top of that, an increasing number of scientists seek collaboration with businesses (for example, to increase the chances of getting a Horizon 2020 grant).

For updates on this legislation and more info on the TDM restrictions, see the website of the European Alliance for Research Excellence (EARE), which includes an open letter.

Nature Comment: Share Analysis Plans and Results

Nature published a series of comments all focused on ways to fix our statistics. In my comment, I argue that the main problem is the flexibility in data analysis, combined with incentives to find significant results. A possible solution would be to preregister analysis plans, and to share data.
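
To illustrate why this flexibility matters, consider a toy simulation (my own illustration, not an analysis from the comment): if a researcher measures five outcomes with no true effect and reports a finding whenever any of them is significant, the false-positive rate more than quadruples.

```python
# Toy simulation (my illustration, not from the Nature comment):
# a hypothetical researcher compares two groups on 5 independent
# outcomes with no true effect, and claims a "finding" whenever
# any outcome reaches p < .05.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(42)
n_studies, n_outcomes, n_per_group = 10_000, 5, 30

false_positives = 0
for _ in range(n_studies):
    a = rng.standard_normal((n_per_group, n_outcomes))  # group 1
    b = rng.standard_normal((n_per_group, n_outcomes))  # group 2
    pvals = ttest_ind(a, b).pvalue  # one t test per outcome
    if (pvals < 0.05).any():
        false_positives += 1

print(f"false-positive rate: {false_positives / n_studies:.2f}")
# ~0.23 rather than the nominal 0.05 (analytically: 1 - 0.95**5)
```

Preregistering which analysis will be run removes exactly this degree of freedom.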

Read the entire piece here; it covers the following set of solutions:

  • Jeff Leek: Adjust for human cognition
  • Blake McShane, Andrew Gelman, David Gal, Christian Robert, and Jennifer Tackett: Abandon statistical significance
  • David Colquhoun: State false-positive risk, too
  • Michèle Nuijten: Share analysis plans and results
  • Steven Goodman: Change norms from within

 

[Nature illustration by David Parkins]

New Preprint: statcheck’s Validity is High

In our new preprint we investigated the validity of statcheck. Our main conclusions were:

  • statcheck’s sensitivity, specificity, and overall accuracy are very high. The specific numbers depended on several choices & assumptions, but ranged from:
    • sensitivity: 85.3% – 100%
    • specificity: 96.0% – 100%
    • accuracy: 96.2% – 99.9%
  • The prevalence of statistical corrections (e.g., Bonferroni or Greenhouse-Geisser) seems to be higher than we initially estimated
  • But: the presence of these corrections doesn’t explain the high prevalence of reporting inconsistencies in psychology

We conclude that statcheck’s validity is high enough to recommend it as a tool in peer review, self-checks, or meta-research.
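
For readers unfamiliar with the tool: statcheck is an R package that extracts APA-style reported statistics and recomputes their p-values, flagging results where the reported and recomputed values don’t match. Below is a minimal Python sketch of that core check; statcheck’s actual parsing and rounding logic is considerably more careful, so treat this only as an illustration of the principle.

```python
# Minimal sketch of statcheck's core idea: recompute the p-value from
# a reported t statistic and flag inconsistent reports. The real
# statcheck (an R package) handles many test types with much more
# careful parsing and rounding; this only illustrates the principle.
import re
from scipy.stats import t as t_dist

def consistent(apa: str, tol: float = 0.005) -> bool:
    """True if the reported p matches the recomputed two-sided p
    within a simple rounding tolerance."""
    m = re.match(r"t\((\d+)\)\s*=\s*(-?[\d.]+),\s*p\s*=\s*([\d.]+)", apa)
    if m is None:
        raise ValueError(f"could not parse: {apa!r}")
    df, t_val, p_rep = int(m[1]), float(m[2]), float(m[3])
    p_recomputed = 2 * t_dist.sf(abs(t_val), df)
    return abs(p_recomputed - p_rep) <= tol

print(consistent("t(28) = 2.20, p = .04"))  # True: recomputed p ≈ .036
print(consistent("t(28) = 2.20, p = .01"))  # False: flagged as inconsistent
```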

statcheck Runner-Up for Sentinel Award

Publons announced the winner of the Sentinel Award for outstanding advocacy, innovation, or contribution to scholarly peer review, and I am proud to say that statcheck was crowned runner-up!

I am honored that the judges considered statcheck a useful contribution to the peer review system. Ultimately, one of the things I hope to achieve is that all psychology journals will consider it standard practice to quickly “statcheck” a paper for statistical inconsistencies before publication.

A very warm congratulations to the winner of the award: Irene Hames. Irene has spent most of her career improving the quality of peer review, and it is great that her work is recognized in this way! Congratulations also to the rest of the Sentinel Award nominees: Retraction Watch, the American Geophysical Union, ORCiD, F1000Research, the Committee on Publication Ethics (COPE), and Kyle Martin and Gareth Fraser.

For more information about the award, the winner, and the finalists, see this page.