I wrote an invited review for Developmental Medicine & Child Neurology about “Practical tools and strategies for researchers to increase replicability”.
Problems with replicability have been widely discussed in recent years, especially in psychology. Many promising solutions have now been proposed, but my sense is that researchers are sometimes a bit overwhelmed by all the possibilities.
My goal in this review was to make a list of some of the current recommendations that can be easily implemented. Not every solution is feasible for every project, so my advice is: copy best practices from other fields, see what works on a case-by-case basis, and improve your research step by step.
We analyzed 131 meta-analyses in intelligence research to investigate effect sizes, power, and patterns of bias. We find a typical effect of r = .26 and a median sample size of 60.
The median power seems low (see figure below), and we find evidence for small-study effects, possibly indicating overestimated effects. We don’t find evidence for a US effect, a decline effect, an early-extremes effect, or citation bias.
Comments are very welcome and can be posted on the PubPeer page https://pubpeer.com/publications/9F209A983618EFF9EBED07FDC7A7AC.
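As a rough back-of-the-envelope illustration (not a calculation from the paper itself), the power of a single typical study can be approximated with the Fisher z-transform, plugging in the summary values above (r = .26, n = 60) and an assumed two-sided α = .05:

```python
from statistics import NormalDist
import math

norm = NormalDist()            # standard normal distribution

r = 0.26                       # typical effect from the 131 meta-analyses
n = 60                         # median sample size
z_crit = norm.inv_cdf(0.975)   # critical z for two-sided alpha = .05

# Fisher z-transform: atanh(r) is approximately normal with SE = 1/sqrt(n - 3)
ncp = math.atanh(r) * math.sqrt(n - 3)

# Probability of a significant result in either tail
power = (1.0 - norm.cdf(z_crit - ncp)) + norm.cdf(-z_crit - ncp)
print(f"Approximate power: {power:.2f}")
```

Under these assumptions the approximate power comes out at roughly one half, which is consistent with the “median power seems low” conclusion above.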
My latest paper “The replication paradox: Combining studies can decrease accuracy of effect size estimates” is now published in Review of General Psychology. You can find a postprint of the paper here. The full reference is:
Nuijten, M. B., Van Assen, M. A. L. M., Veldkamp, C. L. S., & Wicherts, J. M. (2015). The replication paradox: Combining studies can decrease accuracy of effect size estimates. Review of General Psychology, 19(2), 172-182. http://dx.doi.org/10.1037/gpr0000034
In the latest edition of “De Psychonoom”, the magazine of the Dutch Association for Psychonomy, I talk about the ‘replication paradox’ and explain why replications do not necessarily decrease bias in effect size estimates. Read the interview here (in Dutch).
Will integrating original studies and published replications always improve the reliability of your results? No! Replication studies suffer from the same publication bias as original studies… Read the full blog here.
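A minimal simulation can show the mechanism: if only significant studies get published, the average published effect overestimates the true effect, and pooling original and replication studies from the same biased literature inherits that overestimation. The numbers below (true r = .10, n = 60 per study) are illustrative assumptions, not values from the paper:

```python
import math
import random

random.seed(1)

true_r = 0.10                  # assumed small true effect (illustrative)
n = 60                         # per-study sample size (illustrative)
se = 1 / math.sqrt(n - 3)      # SE of the Fisher z-transformed correlation
z_crit = 1.96                  # two-sided alpha = .05

published = []
for _ in range(100_000):
    # Simulate one study's observed effect on the Fisher z scale
    z_obs = random.gauss(math.atanh(true_r), se)
    # Publication filter: only "significant" results appear in the literature
    if abs(z_obs) / se > z_crit:
        published.append(math.tanh(z_obs))   # back-transform to r

mean_published = sum(published) / len(published)
print(f"true r = {true_r}, mean published r = {mean_published:.2f}")
```

Because replications pass through the same significance filter as originals, averaging them with the original studies does not remove this inflation.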