Statistical Errors Invalidate Findings


There are three kinds of lies:
lies, damned lies, and statistics.

– Mark Twain


by the Editors


A study of articles published in top science and medical journals shows that a large proportion contain statistical errors. These arise primarily from transcription or typesetting errors, or from the incorrect rounding of figures.

A study published in BMC Medical Research Methodology [1] claims that a high proportion of papers published in leading scientific journals contain statistical errors.

Not all of these errors led to erroneous conclusions, but the authors found that some of them may have caused non-significant findings to be misrepresented as significant.

Again we find evidence that what is published in scientific and medical journals may not be as reliable as it should be.

Emili Garcia-Berthou and Carles Alcaraz, researchers at the University of Girona in Spain, found that 38% of a sample of 32 papers in Nature, and four out of 12 papers sampled from the British Medical Journal (BMJ), contained one or more statistical errors. They examined the numbers within each paper to see whether the data presented actually supported the statistical conclusions the authors drew, and whether there was anything fishy about the numbers themselves.

Apart from simple typographical errors, the problem appears to arise through careless rounding of results. In unedited sets of data, the last digits of the numbers recorded will tend to take the values 0-9 at random, since these digits represent the smallest quantities and are thus the hardest to measure. If those numbers are rounded carelessly, however, 4s and 9s (which tend to get rounded up to the nearest half or whole number) will be rarer than they should be. The two researchers duly discovered that 4s and 9s were, indeed, rarer than chance would predict in many of the papers under scrutiny.
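The effect described above is easy to see in a simulation. The sketch below (not part of the study itself; the rounding probability and sample size are illustrative assumptions) generates terminal digits that are uniform, as they would be in raw data, then applies a hypothetical "careless rounding" that sometimes bumps a terminal 4 up to 5 and a terminal 9 up to 0, mimicking rounding up to the nearest half or whole number. Counting the digits afterwards shows the telltale deficit of 4s and 9s.

```python
import random
from collections import Counter

random.seed(42)

# Raw measurements: terminal digits should be roughly uniform over 0-9.
raw_digits = [random.randint(0, 9) for _ in range(10_000)]

def careless_round(d, p=0.5):
    """Hypothetical careless rounding: with probability p, a terminal 4
    is bumped up to 5 (nearest half) and a terminal 9 up to 0 (nearest
    whole number). All other digits are left unchanged."""
    if d == 4 and random.random() < p:
        return 5
    if d == 9 and random.random() < p:
        return 0
    return d

reported_digits = [careless_round(d) for d in raw_digits]

raw_counts = Counter(raw_digits)
rep_counts = Counter(reported_digits)

print("digit   raw  reported")
for d in range(10):
    print(f"{d:>5} {raw_counts[d]:>5} {rep_counts[d]:>9}")
```

Running this shows 4s and 9s well below their expected share of roughly 1,000 each, with matching surpluses of 5s and 0s, which is exactly the pattern the researchers looked for in the published papers.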

It was difficult to establish to what extent false conclusions were reported because of this type of error. The researchers estimated that in 4% of papers the conclusions were invalidated to some extent. In one case there was no doubt: a number supposed to be statistically significant was explicitly misstated, and a false inference was drawn in the paper’s conclusion. There is no suggestion that any of the errors observed was deliberate fraud. But there do seem to have been rather a lot of them.

As with everything in science, readers of scientific and medical journals rely on the process of peer review to ensure the reliability of what they are reading. Yet the review and fact-checking process is itself imperfect. Journals such as the BMJ and Nature can devote only a limited amount of time to checking and recalculating numbers; otherwise the review process would grind to a halt. In the light of these findings, however, leading journals will be raising their reviewers’ awareness of statistical accuracy.

A suggested remedy is for researchers to publish raw data as well as statistical analysis and conclusions. That way, anyone who wishes to do so can check the calculations. For some years, Nature has offered supplementary information online to accompany its papers, although this is probably of interest only to workers within the field.

The real answer lies with the researchers themselves. Far too many scientists have only a shaky grasp of the statistical techniques they use. They look up the formulae, do the calculations and trust that it will be all right on the night.

Workers in mainstream science would do well to emulate the standards of statistical rigour which have been achieved for many years in parapsychology (see A Field Guide to Skepticism by Dean I. Radin).


1. Emili Garcia-Berthou and Carles Alcaraz, “Statistical Errors”, BMC Medical Research Methodology 2004, 4:13.


© 2014 The Association for Skeptical Investigation. All rights reserved.