Science & Climate | April 19, 2026

World's Largest Research Reliability Study Finds Only About Half of Published Findings Can Be Replicated

Only about half of published scientific results can be replicated, according to the SCORE project, published in Nature.



**TL;DR**: A massive collaboration found that only about half of published scientific results hold up when other teams try to repeat them. The SCORE project, published in Nature, examined 3,900 papers from 2009-2018 across 62 journals to test reproducibility, robustness and replicability.

The Systematizing Confidence in Open Research and Evidence (SCORE) effort brought together 865 researchers from institutions including the Center for Open Science, Karolinska Institutet in Sweden, the University of Melbourne in Australia, and Pennsylvania State University in the United States. Coordinated by the Center for Open Science, the team sampled claims from a broad set of fields such as political science, education, finance and health. They then applied three complementary checks: whether another group could get the same result from the original data (reproducibility), whether the original data yielded the same outcome under different analytical methods (robustness), and whether a fresh experiment could reproduce the finding (replicability). The results appeared as a trio of papers in the journal Nature on 1 April.

Only about 50% of the claims passed the replicability test, while reproducibility and robustness checks showed similar, though not identical, patterns. Gustav Nilsonne of Karolinska Institutet called SCORE the world's largest research project examining the reliability of scientific results and a model for large-scale collaboration. Tim Errington of the Center for Open Science noted that verifying discoveries demands substantial effort before they can serve as foundations for further work. Fiona Fidler of the University of Melbourne added that the work raised new questions about how to evaluate research in practice.

These results suggest that the scientific community should expect a substantial proportion of published findings to need independent confirmation. Funders, journals and institutions may increasingly require replication attempts as part of the review process. The data and materials from SCORE are openly available, enabling other scientists to build on the findings. Watch for expanded open‑data policies, larger replication consortia, and updated guidelines that aim to improve confidence in research outputs.
