Replication crisis
Latest revision as of 01:37, 20 February 2025
The replication crisis refers to a growing concern within the scientific community regarding the reliability and reproducibility of published research findings. It involves a collective recognition that many scientific studies, particularly in fields such as psychology, medicine, and social sciences, are difficult to replicate or reproduce independently, casting doubt on the validity of the original findings.
Background[edit]
The term "replication crisis" began gaining attention in the early 2010s, but the issue has been present for a longer period. The concern arose when researchers, often attempting to build upon existing work or verify its conclusions, found that they were unable to replicate the results of a substantial number of studies.
Causes[edit]
Several factors contribute to the replication crisis:
- Publication Bias: Journals often favor publishing novel or positive results, which discourages researchers from submitting, and journals from accepting, replication studies or studies with negative or null results.
- P-hacking: The manipulation of data or statistical analyses to achieve a statistically significant p-value, often at the expense of the integrity of the data.
- Small Sample Sizes: Studies with small sample sizes may produce results that are not representative of the population and are more likely to be a result of random variation.
- Poor Methodology: Insufficiently rigorous research methods can produce unreliable results.
- Researcher Bias: Researchers may unconsciously or consciously manipulate experiments or data in ways that favor their hypotheses or desired outcomes.
- Economic and Career Incentives: The “publish or perish” culture in academia places pressure on researchers to produce a large number of publications, which can lead to rushed or compromised research practices.
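The p-hacking problem above can be made concrete with a small simulation. The sketch below (an illustration, not from the article; the sample sizes and checking interval are arbitrary choices) models "p-hacking by early stopping": a researcher collects pure noise, runs a significance test after every batch of observations, and stops as soon as p < 0.05. Peeking repeatedly inflates the false-positive rate well above the nominal 5%:

```python
import random
from math import sqrt
from statistics import NormalDist

def p_value(xs):
    """Two-sided z-test of H0: true mean = 0 for unit-variance data."""
    z = (sum(xs) / len(xs)) * sqrt(len(xs))
    return 2 * (1 - NormalDist().cdf(abs(z)))

def peeking_study(max_n=100, check_every=10, alpha=0.05):
    """Simulate a study on pure noise where the researcher tests after
    every batch of observations and stops as soon as p < alpha."""
    xs = []
    for _ in range(max_n):
        xs.append(random.gauss(0.0, 1.0))
        if len(xs) % check_every == 0 and p_value(xs) < alpha:
            return True  # a false positive gets "published"
    return False

random.seed(0)
n_sims = 2000
rate = sum(peeking_study() for _ in range(n_sims)) / n_sims
print(f"false-positive rate with optional stopping: {rate:.3f}")
print("nominal rate without peeking: 0.050")
```

Running this yields a false-positive rate several times the nominal 5%, even though every individual test is valid; the inflation comes entirely from deciding when to stop based on the data.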
Consequences[edit]
The replication crisis has significant implications for science and society:
- Loss of Confidence: It undermines the public’s confidence in scientific research.
- Wasted Resources: Time and resources are wasted on studies that are based on non-replicable research.
- Policy Misdirection: Policies or clinical practices based on non-replicable research can be ineffective or harmful.
Addressing the Crisis[edit]
Various measures have been proposed and are being implemented to address the replication crisis:
- Open Science: Encouraging the sharing of data, methods, and materials to increase transparency.
- Preregistration: Requiring researchers to preregister their studies, including planned analyses, to curb p-hacking and data dredging.
- Replication Studies: Encouraging the publication of replication studies and providing funding for such studies.
- Changing Publication Incentives: Shifting away from a focus on novel, positive results to valuing methodological rigor and reproducibility.
- Education and Training: Better education and training in research methods and statistics.
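Better statistical training often starts with power analysis, which also quantifies the "Small Sample Sizes" cause discussed above. The sketch below (an illustration, not from the article) uses the standard normal approximation for a two-sample, two-sided comparison of means at α = 0.05 and 80% power; the exact t-test requires slightly larger samples:

```python
from math import ceil
from statistics import NormalDist

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """Approximate per-group sample size for a two-sample comparison
    of means (normal approximation to the two-sided t-test)."""
    nd = NormalDist()
    z_alpha = nd.inv_cdf(1 - alpha / 2)  # two-sided significance threshold
    z_beta = nd.inv_cdf(power)           # desired statistical power
    return ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

for d in (0.2, 0.5, 0.8):  # small, medium, large effects (Cohen's d)
    print(f"d = {d}: ~{n_per_group(d)} participants per group")
```

Detecting a small effect (d = 0.2) requires hundreds of participants per group, which is one reason underpowered studies so often report findings that fail to replicate.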
See also[edit]
References[edit]
<references>
- Open Science Collaboration. "Estimating the reproducibility of psychological science." Science 349, no. 6251 (2015): aac4716.
- Ioannidis, John P. A. "Why most published research findings are false." PLoS Medicine 2, no. 8 (2005): e124.
- Baker, Monya. "1,500 scientists lift the lid on reproducibility." Nature 533 (2016): 452–454.
- Munafò, Marcus R., Brian A. Nosek, Dorothy V. M. Bishop, Katherine S. Button, Christopher D. Chambers, Nathalie Percie du Sert, Uri Simonsohn, et al. "A manifesto for reproducible science." Nature Human Behaviour 1, no. 1 (2017): 0021.
</references>


