
Could Experimenter Effects Occur
in the Physical and Biological Sciences?

by Rupert Sheldrake

 


Appeared originally in Skeptical Inquirer
May/June 1998


 
Probably most skeptics would agree with Michael Mussachia (Skeptical Inquirer Nov/Dec 1995) that “our beliefs, desires and expectations can influence, often subconsciously, how we observe and interpret things”.

In psychology and clinical medicine these principles are widely recognized, which is why experiments in these subjects are often carried out under blind or double-blind conditions.

In a double-blind clinical trial, for example, some patients are given tablets of a drug and others are given similar-looking but pharmacologically inert placebo tablets. Neither the clinicians nor the patients know who gets what.

In such experiments, the largest placebo effects usually occur in trials in which both patients and physicians believe a powerful new treatment is being tested (Roberts et al., 1993). The inert tablets tend to work like the treatment being studied, and can even induce its characteristic side-effects (White et al., 1985). Likewise, experimenter expectancy effects are well known in experimental psychology, and they also show up in experiments on animal behaviour (Rosenthal, 1976).

How widespread are experimenter expectancy effects in other branches of science? No one seems to know. I have attempted to quantify the attention paid to experimenter effects in different fields of science by means of two surveys.

The first survey was of experimental papers recently published in leading scientific journals, including Nature and The Proceedings of the National Academy of Sciences (Sheldrake, 1998). In the physical sciences, no blind experiments were found among the 237 papers reviewed. In the biological sciences, there were 7 blind experiments out of 914 (0.8%); in the medical sciences, 6 out of 102 (5.9%); in psychology and animal behavior, 7 out of 143 (4.9%). By far the highest proportion (but the smallest sample) was in parapsychology, 23 out of 27 (85.2%).
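The percentages quoted above follow directly from the reported counts; as a quick check, here is a minimal sketch that recomputes them (the counts are simply those stated in the survey as summarized here):

```python
# Recompute the percentage of blind experiments in each field,
# using the counts quoted from the survey (Sheldrake, 1998).
surveys = {
    "physical sciences": (0, 237),
    "biological sciences": (7, 914),
    "medical sciences": (6, 102),
    "psychology and animal behavior": (7, 143),
    "parapsychology": (23, 27),
}

for field, (blind, total) in surveys.items():
    pct = 100 * blind / total
    print(f"{field}: {blind}/{total} blind = {pct:.1f}%")
```

Rounded to one decimal place, these reproduce the figures in the text: 0.8%, 5.9%, 4.9%, and 85.2%.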

The second survey was of science departments at 11 British Universities (including Oxford, Cambridge, London, and Edinburgh). It confirmed that blind procedures are rare in most branches of the physical and biological sciences. They are neither used nor taught in 22 out of 23 physics and chemistry departments, or in 14 out of 16 biochemistry and molecular biology departments (Sheldrake, 1998). By contrast, blind methodologies are practised and taught in 4 out of 8 genetics departments, and in 6 out of 8 physiology departments. Even so, in most of these departments they are used occasionally rather than routinely, and are mentioned only briefly in lectures.

Only in exceptional cases are blind techniques used routinely. This survey revealed three examples. All three involved commercial contracts, according to which the university scientists were required to analyze or evaluate coded samples without knowing their identity.

When academic scientists were interviewed for this survey, some did not know what was meant by the phrase “blind methodology”. Most were aware of blind techniques, but thought that they were necessary only in clinical research or psychology. They believed that their principal purpose was to avoid biases introduced by human subjects, rather than by experimenters. The commonest view expressed by physical and biological scientists was that blind methodologies are unnecessary in their fields because “nature itself is blind”, as one professor put it. Some admitted the theoretical possibility of bias by experimenters, but thought it of no importance in practice. And one chemist added, “Science is difficult enough as it is without making it even harder by not knowing what you are working on.”

The assumption by most “hard” scientists that blind techniques are unnecessary in their own field is so fundamental that it deserves to be tested empirically (Sheldrake, 1994). Not just in psychology and medicine but in all branches of experimental science we can ask: Can the expectations of experimenters introduce a bias, conscious or unconscious, into the way they carry out their procedures, make observations or select data?

I suggest the following empirical investigation. Take a typical experiment involving a test sample and a control, for example the comparison of an inhibited enzyme with an uninhibited control in a biochemical experiment. Then carry out the experiment under both open and blind conditions, with the samples in the blind condition labelled simply A and B. In student practical classes, for instance, half the class could do the experiment blind, while the other half would, as usual, know which sample was which.

If such tests show no significant differences, then for the first time there will be evidence that blind techniques are unnecessary. On the other hand, significant differences between results under blind and open conditions would reveal the existence of experimenter effects. Further research would then be needed to find out how the experimenters’ expectations were influencing the data.
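One straightforward way to test whether the open and blind results differ significantly is a two-sample permutation test on the difference of means. The sketch below is purely illustrative: the function is a generic implementation of that test, and the enzyme-inhibition figures are invented for the example, not data from any actual experiment.

```python
import random
import statistics

def permutation_test(a, b, n_iter=10_000, seed=0):
    """Approximate two-sided permutation test on the difference of means.

    Returns an estimated p-value for the null hypothesis that the two
    samples were drawn from the same distribution.
    """
    rng = random.Random(seed)
    observed = abs(statistics.mean(a) - statistics.mean(b))
    pooled = list(a) + list(b)
    count = 0
    for _ in range(n_iter):
        rng.shuffle(pooled)  # randomly reassign results to the two groups
        diff = abs(statistics.mean(pooled[:len(a)]) -
                   statistics.mean(pooled[len(a):]))
        if diff >= observed:
            count += 1
    return count / n_iter

# Hypothetical percentage-inhibition measurements, invented for illustration:
open_results = [62.1, 60.8, 63.5, 61.9, 62.7]    # experimenters knew the labels
blind_results = [58.4, 59.1, 57.8, 60.2, 58.9]   # samples coded A and B

p = permutation_test(open_results, blind_results)
print(f"estimated p-value: {p:.4f}")
```

A small p-value in such a comparison would not by itself explain the mechanism; as the text notes, it would simply indicate an experimenter effect worth investigating further.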

The more independent investigations, the better. It cannot be healthy for the supposed objectivity of regular science to rest on untested assumptions. Perhaps it will turn out, after all, that “hard” scientists need not bother with blind techniques. They may indeed be exceptions to the principle that “our beliefs, desires and expectations can influence, often subconsciously, how we observe and interpret things.” On the other hand they may be like everybody else, including experimenters in psychology, parapsychology and medicine. Who knows?

 
References:

Mussachia, M. 1995. Objectivity and repeatability in science. Skeptical Inquirer 19 (6): 33-35, 56.

Roberts, A.H., Kewman, D.G., Mercier, L. & Hovell, H. 1993. The power of nonspecific effects in healing: implications for psychosocial and biological treatments. Clinical Psychology Review 13: 375.

Rosenthal, R. 1976. Experimenter Effects in Behavioral Research. New York: John Wiley.

Sheldrake, R. 1994. Seven Experiments that Could Change the World, Chapter 7. London: Fourth Estate.

Sheldrake, R. 1998. Experimenter effects in scientific research: How widely are they neglected? Journal of Scientific Exploration 12: 1-6.

White, L., Tursky, B. & Schwartz, G. (eds) 1985. Placebo: Theory, Research and Mechanisms. New York: Guilford Press.

 
Rupert Sheldrake, Ph.D., taught biochemistry at Cambridge University and was a Research Fellow of the Royal Society. He is the author of Seven Experiments that Could Change the World (Riverhead, New York, 2002).

 
 
 
