That the results of most scientific publications are probably wrong is the conclusion of a recent study by John Ioannidis, a Greek epidemiologist.
Dr Ioannidis, who works at the University of Ioannina, in northwestern Greece, makes his claim in
PLoS Medicine, an online journal published by the
Public Library of Science.
His thesis that many scientific papers come to false conclusions is not
new. Science is a Darwinian process that proceeds as much by refutation
as by publication. But until recently, no one had tried to quantify the matter.
Dr Ioannidis began his study by reviewing 49 research articles
printed in widely read medical journals between 1990 and 2003. Each of
these articles had been cited by other scientists in their own papers
1,000 times or more. However, 14 of them—almost a third—were later
refuted by other work. Having established that the problem is real, he then designed a mathematical model to account for and quantify the sources of error. Again, these sources are well known in the field.
One is an unsophisticated reliance on “statistical significance”. For a result to qualify as statistically significant, the probability that it arose by pure chance must be less than one in 20 (p < 0.05). But,
as Dr Ioannidis points out, adhering to this standard means that simply
examining 20 different hypotheses at random is likely to give you one
statistically significant result. In fields where thousands of
possibilities have to be examined, such as the search for genes that
contribute to a particular disease, many seemingly meaningful results
are bound to be wrong just by chance.
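The arithmetic behind this is simple. A minimal sketch (assuming independent tests and the conventional 5% threshold, both simplifications for illustration):

    # Probability of at least one false positive among m independent
    # hypotheses, each tested at significance level alpha.
    alpha = 0.05   # the conventional "1 in 20" threshold
    m = 20         # number of hypotheses examined

    p_false_positive = 1 - (1 - alpha) ** m
    print(f"P(at least one spurious 'finding' in {m} tests) = {p_false_positive:.2f}")
    # prints 0.64: testing 20 true-null hypotheses produces a
    # "significant" result roughly two times out of three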
In this framework, a research finding is less likely to be true when
the studies conducted in a field are smaller; when effect sizes are
smaller; when there is a greater number and lesser preselection of
tested relationships; when there is greater flexibility in designs,
definitions, outcomes, and analytical modes; when there is greater
financial and other interest and prejudice; and when more teams are
involved in a scientific field, chasing statistical significance.
When Dr Ioannidis ran the numbers through a simulation, his model
predicted that even a large, well-designed study with little
researcher bias has only an 85% chance of being right. An underpowered,
poorly performed drug trial with researcher bias has but a 17% chance
of producing true conclusions. Overall, the model predicts that more
than 50% of all published research is probably wrong.
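For readers who want to see the machinery, here is a minimal sketch of the underlying calculation: the positive predictive value of a claimed finding as a function of a study's error rates, the prior odds that the tested relationship is true, and a bias term. The formula follows Ioannidis's paper; the parameter values below are our reconstruction, chosen to reproduce the 85% and 17% figures quoted above, not values quoted verbatim from his scenarios:

    def ppv(alpha, beta, R, u=0.0):
        """Probability that a claimed ("positive") research finding is true.

        alpha: type I error rate (significance level)
        beta:  type II error rate (1 - statistical power)
        R:     prior odds that the tested relationship is true
        u:     bias, the fraction of would-be "negative" analyses
               that end up reported as positive findings
        """
        true_positives = (1 - beta) * R + u * beta * R
        all_positives = R + alpha - beta * R + u - u * alpha + u * beta * R
        return true_positives / all_positives

    # Well-powered, well-designed study with little bias: PPV ~ 0.85
    print(ppv(alpha=0.05, beta=0.20, R=1.0, u=0.10))
    # Underpowered, heavily biased trial of an unlikely hypothesis: PPV ~ 0.17
    print(ppv(alpha=0.05, beta=0.80, R=0.20, u=0.80))

The point of the exercise is visible in the code: driving down power, prior odds or honesty of reporting drags the predictive value of a "positive" result far below the nominal 95% that p < 0.05 seems to promise.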
It should be noted that Dr Ioannidis's study suffers from its own
particular bias. Important as medical science is, it is not the be-all
and end-all of research. Other sciences, such as physics and chemistry,
with more certain theoretical foundations and well-defined methods and
endpoints, probably can do better than medicine. Still, he makes a good
point—and one that lay readers of scientific results would do well
to bear in mind.
With respect to analytical chemistry, researchers have a better chance of producing accurate results. Although the researcher has no means of knowing the truth for an unknown sample, he can often cross-check his methodology, either by analysing known samples (certified reference materials) or by comparing his results with those obtained by different methods, including definitive methods. It is EVISA's aim to encourage all researchers working in the field of speciation analysis to make consistent use of the available methodology, to assure the quality of their results, and to avoid poor experimental design.
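As a concrete illustration of such a cross-check (a minimal sketch; the analyte, values, and uncertainties are invented for illustration, and the two-sigma criterion is one common convention, not EVISA's prescription), a measured value can be compared against a certified value through a z-score that weighs the difference against the combined uncertainties:

    import math

    def z_score(measured, u_measured, certified, u_certified):
        """Agreement between a measured and a certified value;
        |z| <= 2 is commonly read as satisfactory agreement."""
        return (measured - certified) / math.sqrt(u_measured**2 + u_certified**2)

    # Hypothetical example: methylmercury in a fish-tissue reference material
    z = z_score(measured=4.35, u_measured=0.12, certified=4.47, u_certified=0.10)
    print(f"z = {z:.2f} -> {'agreement OK' if abs(z) <= 2 else 'investigate bias'}")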
Related Studies
John P.A. Ioannidis, Why Most Published Research Findings Are False, PLoS Med., 2/8 (2005) e124. DOI: 10.1371/journal.pmed.0020124
The PLoS Medicine Editors, Minimizing Mistakes and Embracing Uncertainty, PLoS Med., 2/8 (2005) e272. DOI: 10.1371/journal.pmed.0020272
S. Goodman, S. Greenland, Why Most Published Research Findings Are False: Problems in the Analysis, PLoS Med., 4/4 (2007) e168. DOI: 10.1371/journal.pmed.0040168
J.P.A. Ioannidis, Why Most Published Research Findings Are False: Author's Reply to Goodman and Greenland, PLoS Med., 4/6 (2007) e215. DOI: 10.1371/journal.pmed.0040215
Related Publications
Kevin A. Francesconi, Michael Sperling, Speciation analysis with HPLC-mass spectrometry: time to take stock, Analyst (London), 130/7 (2005) 998-1001. DOI: 10.1039/b504485p
M. Valcárcel, A. Rios, Required and delivered analytical information: the need for consistency, Trends Anal. Chem. (Pers. Ed.), 19/10 (2000) 593-598. DOI: 10.1016/S0165-9936(00)000344-3
Peter T. Kissinger, Jean-Michel Kauffmann, Quality manuscripts in analytical chemistry, Talanta, 57/3 (2002) 601-603. DOI: 10.1016/S0039-9140(02)00051-6
Related EVISA Resources
Link page: All about quality of measurements
Brief summary: Error sources in speciation analysis
Brief summary: Speciation Analysis - Striving for Quality
Related News
Scientific American, February 27, 2007: The Science of Getting It Wrong: How to Deal with False Research Findings