We hear this almost every day: “A new study has shown that treatment XY is effective”. Usually we don’t question the claim, and we assume that treatment XY is indeed effective. What we rarely consider is that the study in question might prove nothing at all. Because being misled in this way leads to poor decisions and can therefore be harmful, it helps to understand some common pitfalls in clinical research. While these biases can of course be found in all fields of medicine, my thirty years of experience studying so-called alternative therapies have taught me that the vast majority of studies in this field are unfortunately affected by one or another of these subterfuges.
Bias no. 1: Doing without a control group
If someone wanted to cheat you with the results of a clinical trial, the easiest option would be to conduct a study without a control group. In a clinical trial, the control group consists of patients who do not receive the treatment being tested. They serve as a standard against which the results of the product or treatment can be compared. Depending on the specific research question, they may, for example, receive no treatment at all or receive a placebo.
One of the most cited studies on homeopathy reports that 71% of patients experienced a clinical benefit after homeopathic treatment. Its authors proudly concluded that “homeopathic intervention offered positive health changes to a substantial proportion of a large cohort of patients with a wide range of chronic conditions.” I know many people who were impressed by this finding and thought that homeopathy must be effective.
What they didn’t realize was that such an outcome could be due to a host of factors, e.g. the placebo effect, the natural history of the disease (many conditions improve, even if we don’t treat them at all), regression to the mean (extremes tend to move closer to the mean) or treatments that patients have self-administered while taking homeopathic remedies. It is therefore quite misleading to make causal inferences from clinical trials without a control group.
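To see how far these factors alone can go, here is a minimal simulation sketch (my own, in Python; the numbers are invented for illustration and have nothing to do with the study above). Patients who happen to be enrolled on a bad day and then receive a completely inert remedy still “improve” in large numbers, purely through natural fluctuation and regression to the mean:

```python
import random

random.seed(1)

n_patients = 1000
improved = 0
for _ in range(n_patients):
    usual_level = random.gauss(50, 10)                     # patient's usual symptom score
    at_enrolment = usual_level + 5 + random.gauss(0, 10)   # enrolled on a bad day
    at_follow_up = usual_level + random.gauss(0, 10)       # inert remedy: no effect at all
    if at_follow_up < at_enrolment:
        improved += 1

print(f"{100 * improved / n_patients:.0f}% of patients 'improved' on an inert remedy")
```

Without a control group observed over the same period, there is no way to tell this spontaneous “improvement” apart from a genuine treatment effect.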
Bias no. 2: Forgetting the “double blind”
A less obvious trick for generating false-positive results in a clinical trial is to omit “blinding”, i.e. the measures that prevent participants from knowing whether they were assigned to the treatment or the control group. The purpose of blinding patients, therapists and assessors is to ensure that their expectations cannot affect the outcome. Patients who hope for a cure regularly get better even if the therapy they receive is useless, and researchers may view the results through rose-colored glasses. For example, a recent study conducted in India reported that acupuncture improved the cognitive function of students. But since the participants knew they were being treated, it is more likely that it was not the acupuncture but the students’ expectations that influenced the results.
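A crude way to see why this matters is to simulate an unblinded assessment (a sketch of my own, with invented numbers): the therapy does nothing at all, but an assessor who expects it to work shaves a few points off the treated group’s symptom scores, and an apparent benefit appears out of nowhere.

```python
import random
from statistics import mean

random.seed(2)

def apparent_benefit(assessor_bias):
    # Both groups have identical, untreated symptom scores; the only
    # difference is how generously the assessor rates the treated group.
    treated = [random.gauss(50, 10) - assessor_bias for _ in range(100)]
    control = [random.gauss(50, 10) for _ in range(100)]
    return mean(control) - mean(treated)

print(f"blinded assessor:   apparent benefit of {apparent_benefit(0.0):.1f} points")
print(f"unblinded assessor: apparent benefit of {apparent_benefit(3.0):.1f} points")
```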
Bias no. 3: Not distributing participants randomly between the groups studied
Another important source of bias is the lack of randomization, i.e. failing to divide patients into treatment and control groups by a random procedure rather than by choice. It can easily make a worthless therapy appear effective in a clinical trial. If we allow patients or trial leaders to choose who receives the drug and who receives the control treatment, the two groups are likely to differ on several variables (e.g. disease severity), and this can easily affect the result. If, for example, researchers freely divide the patients in a clinical trial between the treatment group and the control group, they may, intentionally or not, select for the former those who are more likely to respond and assign those who are less likely to respond to the latter. Consider, for example, a study that suggested homeopathic medicine was a useful part of integrated symptomatic therapy for the common cold. As it was not randomized, the result is probably due not to homeopathy but to bias. Only randomization can ensure that comparable groups of patients are compared, and failure to randomize is likely to produce misleading results.
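Here is a sketch (my own, with invented numbers) of how non-random allocation manufactures an effect for an inert drug: the outcome depends only on disease severity, so steering the milder cases into the drug group makes the drug look beneficial, while random allocation makes the illusion disappear.

```python
import random
from statistics import mean

random.seed(3)

severities = [random.gauss(50, 10) for _ in range(200)]   # higher = sicker

def outcome(severity):
    return severity + random.gauss(0, 5)   # the drug itself does nothing

# Biased allocation: the 100 mildest patients happen to get the drug.
ordered = sorted(severities)
drug_group, control_group = ordered[:100], ordered[100:]
biased = mean(outcome(s) for s in control_group) - mean(outcome(s) for s in drug_group)

# Random allocation: chance decides who gets the drug.
random.shuffle(severities)
drug_group, control_group = severities[:100], severities[100:]
randomized = mean(outcome(s) for s in control_group) - mean(outcome(s) for s in drug_group)

print(f"apparent benefit with biased allocation: {biased:.1f} points")
print(f"apparent benefit with random allocation: {randomized:.1f} points")
```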
Bias no. 4: Acting as if the placebo effect did not exist
In alternative medicine, one of the most popular tricks to create a false-positive result (a result that suggests an ineffective therapy is effective) is the “A+B versus B” study design. Imagine that you have an amount of money “B” and your friend has the same amount plus another amount “A”. Who has more money? Your friend, of course: A+B will always be more than B alone. For the same reason, “pragmatic” trials that test a therapy plus standard treatment against standard treatment alone will always yield positive results. For example, acupuncture plus the usual treatment is more than the usual treatment alone and will therefore produce a better result. This would be true even if acupuncture were a pure placebo, because a placebo is more than nothing, and the placebo effect will influence the outcome.
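The arithmetic can be made explicit with a small simulation (my own sketch, invented numbers): both groups improve under standard care, the add-on is inert and contributes only a modest placebo effect, yet the “A+B” group reliably comes out ahead.

```python
import random
from statistics import mean

random.seed(4)

def improvement_under_usual_care():
    return random.gauss(10, 5)          # points of improvement with standard care (B)

placebo_effect = 2.0                    # the add-on (A) is inert; only expectation helps

b_alone  = [improvement_under_usual_care() for _ in range(150)]
a_plus_b = [improvement_under_usual_care() + placebo_effect for _ in range(150)]

print(f"usual care alone:          {mean(b_alone):.1f} points of improvement")
print(f"usual care + inert add-on: {mean(a_plus_b):.1f} points of improvement")
```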
Bias no. 5: Including too few patients in the study
Some clinical studies do not check whether the treatment is superior to a placebo (experts speak of superiority trials); instead, they evaluate whether it is equivalent to a therapy whose effectiveness is generally recognized (experts speak of equivalence trials). The idea is that, if both treatments produce similarly positive results, they must both be effective. These trials offer a whole range of possibilities to mislead the public. Researchers may include too few patients, so that the statistics are unable to detect a difference between the treatment and control groups which, in fact, exists. A good example is a study claiming to show that ibuprofen and the homeopathic remedy Belladonna 6C are both effective and provide adequate analgesia, with no statistically significant difference between them. As the trial included too few patients, this result is most likely due to its lack of statistical power to detect the difference. The control group may also receive an effective but underdosed treatment. The results of such a study would suggest that the ineffective treatment performs as well as the established therapy, giving the impression that the tested product is effective even when it is not.
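A rough simulation (my own sketch, with invented numbers) shows how easily a small trial misses a real difference: here the real analgesic genuinely relieves 5 points more pain than the inert remedy, yet with only 15 patients per arm most trials find “no statistically significant difference”, which is then misread as equivalence.

```python
import random
from statistics import mean, stdev

random.seed(5)

def small_trial_detects_difference(n=15, true_difference=5.0, noise=12.0):
    real_drug = [random.gauss(30 - true_difference, noise) for _ in range(n)]
    inert     = [random.gauss(30, noise) for _ in range(n)]
    diff = mean(inert) - mean(real_drug)
    se = (stdev(real_drug) ** 2 / n + stdev(inert) ** 2 / n) ** 0.5
    return abs(diff / se) > 2.0         # roughly a 5% significance threshold

detected = sum(small_trial_detects_difference() for _ in range(2000))
print(f"the real difference reached significance in only {100 * detected / 2000:.0f}% of small trials")
```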
Bias no. 6: Retaining only the results that suit the authors
Another way to trick the public is to cherry-pick the results. Most clinical trials use multiple outcome measures to quantify effects. For example, a study of acupuncture for pain control might quantify pain in half a dozen different ways: how long it takes until the pain is gone, how much medication patients took in addition to acupuncture, days off work due to pain, the partner’s impression of the patient’s health status, the patient’s quality of life, the frequency of sleep disturbed by pain, and so on. If the researchers assess all of these variables, they are likely to find that one or two of them have moved in the direction they hoped. Most often this will be a statistical fluke due to chance. To deceive the public, the researchers need only focus their publication on the results which, by chance, turned out as they had hoped. This would fool many consumers into believing that even an ineffective therapy is effective.
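The effect of this multiplicity is easy to demonstrate (a sketch of my own, with invented numbers): an inert therapy evaluated on six independent outcomes will, by chance alone, produce at least one “statistically significant” result in roughly a quarter of trials, and that is the result that gets highlighted.

```python
import random
from statistics import mean, stdev

random.seed(6)

def outcome_looks_significant(n=50):
    treated = [random.gauss(0, 1) for _ in range(n)]
    control = [random.gauss(0, 1) for _ in range(n)]
    diff = mean(treated) - mean(control)
    se = (stdev(treated) ** 2 / n + stdev(control) ** 2 / n) ** 0.5
    return abs(diff / se) > 2.0         # roughly p < 0.05

n_trials = 2000
lucky_trials = sum(
    any(outcome_looks_significant() for _ in range(6)) for _ in range(n_trials)
)
print(f"{100 * lucky_trials / n_trials:.0f}% of trials of an inert therapy "
      "had at least one 'significant' outcome to report")
```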
And if all that isn’t enough…
And finally, if all else fails, there is always the possibility of outright fraud. Researchers are human and not immune to temptation. They often have conflicts of interest, or may find that positive results are easier to publish than negative ones. Thus, faced with the disappointing results of a study, they may decide to “embellish” them or even invent new ones more pleasing to their convictions, their peers or their sponsors.
The next time you hear “a new study has shown that XY therapy works,” it might be worth asking a few questions and considering the many ways researchers can fool you with seemingly rigorous clinical studies. In case you find this all too complicated or tedious, just remember this: if it sounds too good to be true, it probably is.