Deciphering Drug Study Tricks-of-the-Trade

To a significant extent, the debate about the benefits of Medication-Assisted Treatment (MAT), and the arguments suggesting we should shift most of our drug-treatment efforts to MAT, revolve around interpretations of various scientific studies.


Some MAT proponents claim that the science has demonstrated it is the only effective method of addressing the opioid crisis, suggesting that other approaches should be phased out or required to shift to MAT. Several arguments have been raised against such an “all-or-nothing” re-orientation of treatment priorities, including questions about what the science is telling us.

In order to accurately understand what a specific study means, it is important to understand how studies can be manipulated or compromised. It is dangerous to blindly assume that all studies are created equal and should be given equal weight. In the case of studies on high-profile or vital subjects such as opioid addiction, it is particularly important that we analyze their results, because it is unfortunately not uncommon for media reports of some studies to over-simplify or misrepresent their results or conclusions.

The 2010 book “The Emperor’s New Drugs” by Irving Kirsch, Ph.D., describes his trail of investigation into the efficacy of antidepressants and the studies used to support their use. His investigation began with an interest in the role the placebo effect might play in treating depression, but it led to some startling discoveries that helped to kick off a national debate.

We are not focusing in this article on antidepressants, but instead on the types of problems Kirsch found with antidepressant studies and their representation to the public. In some cases those issues seem to be systemic. Therefore, understanding what might be referred to as the “tricks-of-the-trade” when it comes to gaining approval for and subsequently marketing drugs may be useful as we try to unpack and analyze MAT-related studies.

Some of these tactics are less likely to apply to MAT drug trials than to antidepressant studies, but they are included here for completeness. The compilation of this list should not be taken to imply that these tactics apply to any specific MAT study. The intention is to provide a better understanding of the kinds of things that can occur, so we can better judge the value of the studies used to market new drugs. Finally, this list is not intended to replace Kirsch’s book, which is a valuable and insightful analysis, but to suggest that some of his discoveries can be applied to the analysis of MAT studies.

Breaking Blind

A hallmark of modern scientific research is the use of “double-blind” studies, i.e. those in which neither the participants nor the investigators know who receives the medication and who receives the placebo. If the medication under study has side effects, as antidepressants do, then despite study controls it can become obvious to both participants and investigators who has received the medication. Breaking blind can significantly impact a study’s outcome.

A common way to avoid this problem is to use “active placebos,” i.e. placebos that have no therapeutic effect but do produce recognizable side effects.

Unpublished Studies

An apparently common tactic used by drug companies working to win regulatory approval for a medication is to publish only those studies which show drug benefits and leave negative studies unpublished. Kirsch notes that “a report by authorities at the Medical Products Agency (MPA) in Sweden suggests that as many as 40 per cent of clinical trials of antidepressants are not published.”

It is possible, using the Freedom of Information Act, to gain access to the full set of data submitted to the FDA which can be significantly larger than the published studies on which clinical decisions are often made.
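The distorting effect of selective publication can be illustrated with a small simulation. The numbers below are invented for illustration only: an assumed true drug-placebo difference of 0.1 standard deviations, 50 patients per arm, and 1,000 hypothetical trials. When only trials that happen to clear the significance threshold are “published,” the published literature substantially inflates the apparent effect:

```python
import math
import random

random.seed(1)

TRUE_EFFECT = 0.1   # assumed true drug-placebo difference, in SD units
N_PER_ARM = 50      # assumed patients per trial arm
N_TRIALS = 1000     # hypothetical number of trials run

def simulate_trial():
    """Return the estimated effect and two-sided p-value of one trial."""
    drug = [random.gauss(TRUE_EFFECT, 1) for _ in range(N_PER_ARM)]
    placebo = [random.gauss(0, 1) for _ in range(N_PER_ARM)]
    effect = sum(drug) / N_PER_ARM - sum(placebo) / N_PER_ARM
    se = math.sqrt(2 / N_PER_ARM)  # known-variance approximation
    z = effect / se
    # two-sided p-value from the standard normal CDF
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return effect, p

results = [simulate_trial() for _ in range(N_TRIALS)]
all_effects = [e for e, p in results]
# "publish" only the trials that are significant and favor the drug
published = [e for e, p in results if p < 0.05 and e > 0]

print(f"true effect:            {TRUE_EFFECT}")
print(f"mean of all trials:     {sum(all_effects) / len(all_effects):.3f}")
print(f"mean of published only: {sum(published) / len(published):.3f}")
```

In this toy setup, a trial of this size can only reach significance when its estimated effect is several times larger than the assumed true effect, so the “published” average is badly biased upward even though every individual trial was conducted honestly.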

Statistical versus Clinical Significance

Studies that claim to have produced “significant” improvements or outcomes may be referring to “statistical significance” or “clinical significance.” According to Kirsch, statistical significance “refers to whether an effect — the difference between a drug and a placebo, for example — is real, or whether it has just occurred by chance. It tells you how likely you are to get the same results if you do the same study over again. But it does not tell you how large or important the effect is.”

In contrast, clinical significance “refers to the size of the effect. It addresses whether it is likely to make a meaningful difference in anyone’s life.”

This distinction is important, for example, when investigating antidepressants because, as Kirsch points out, some studies show a statistically significant difference between a medication and a placebo but do not show a clinically significant difference.
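The distinction can be made concrete with a small simulation. All numbers here are invented for illustration: with a large enough sample, even a tiny assumed effect of 0.07 standard deviations (well below common benchmarks for a clinically meaningful difference) comes out statistically significant:

```python
import math
import random

random.seed(0)

N = 10_000          # assumed patients per arm; large samples make tiny effects "significant"
TINY_EFFECT = 0.07  # assumed true improvement, in SD units (illustrative only)

drug = [random.gauss(TINY_EFFECT, 1) for _ in range(N)]
placebo = [random.gauss(0, 1) for _ in range(N)]

effect = sum(drug) / N - sum(placebo) / N  # standardized effect-size estimate
se = math.sqrt(2 / N)                      # known-variance approximation
z = effect / se
# two-sided p-value from the standard normal CDF
p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

print(f"effect size: {effect:.3f}  (small by any clinical benchmark)")
print(f"p-value:     {p:.6f}  (statistically significant)")
```

The p-value answers only “is the difference likely to be real?”; it says nothing about whether a difference this small would make a meaningful difference in anyone’s life.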

Variable-dose versus Dose-Response Trials

Kirsch explains that some trials “allow physicians to adjust the dose of the drug for each individual patient, just as they would in normal clinical practice.” In these studies, physicians may adjust doses based on their determination of the effectiveness of the dose and/or the appearance of side effects.

In contrast, in dose-response trials patients are “randomly assigned to receive low, moderate or high doses of the drug — or no drug at all in the placebo condition.”

The use of one type over another is not necessarily a tactic for obfuscating results, but knowing which design was used may be important to assessing the value of a particular study.

Pharmaceutical Company Handling of Study Results

Kirsch identifies four general strategies used by drug companies to slant results in their favor, specifically they:

  • “Withheld negative studies from publication
  • “Published positive studies multiple times
  • “Published only some of the results from multi-site studies
  • “Published data that was different from what they submitted to the FDA.”

Kirsch explains that a particular tactic used to proliferate positive studies even has a name. Whereas negative tests are typically not published, “the positive trials were published many times, a practice known as ‘salami slicing’, and this was often done in ways that would make it difficult for reviewers to know that the studies were based on the same data.”

Another tactic was to publish “only some of the data from a clinical trial, a manoeuvre that researchers call cherry-picking the data.”

FDA Approval Criteria

Kirsch points out that the “criterion used by drug regulators requires two ‘adequate and well-controlled’ clinical trials showing that a drug is better than a placebo.” However, this benchmark is stacked heavily in favor of drug companies due to some “catches” built into the process. Again, from Kirsch: “the first catch is that there is no limit to the number of studies that can be run in order to find the two showing a statistically significant effect. Negative trials just don’t count.”

The second “catch” is that the difference between the drug and placebo effect only needs to be statistically significant, “its clinical significance … is not considered.”
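The first “catch” can be quantified with a short calculation. Simplifying, suppose a truly ineffective drug still has roughly a 5 per cent chance of “winning” any single trial by luck at the conventional significance threshold, and suppose (an arbitrary illustrative number) that a sponsor runs ten trials. The binomial distribution gives the chance of obtaining the required two positive trials purely by chance:

```python
from math import comb

ALPHA = 0.05   # assumed chance an ineffective drug "wins" a single trial by luck
N_TRIALS = 10  # hypothetical number of trials the sponsor runs

# P(at least 2 positive trials out of N) for an ineffective drug:
# 1 - P(0 positives) - P(exactly 1 positive)
p0 = comb(N_TRIALS, 0) * ALPHA**0 * (1 - ALPHA)**N_TRIALS
p1 = comb(N_TRIALS, 1) * ALPHA**1 * (1 - ALPHA)**(N_TRIALS - 1)
p_two_or_more = 1 - p0 - p1

print(f"P(>=2 chance 'successes' in {N_TRIALS} trials): {p_two_or_more:.3f}")
```

Under these illustrative assumptions, roughly one ineffective drug in twelve would clear the two-positive-trials bar, and the odds rise with every additional trial run, since the negative trials “just don’t count.”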

“Assay Sensitivity” Issues

This tactic involves the inclusion of an already-approved drug in addition to placebo as part of the study of a new drug in the same category. If the study shows the new drug performs better than the placebo, then the study is deemed positive confirmation for the new drug.

On the other hand, if the new drug doesn’t outperform the placebo, and the older drug doesn’t show well either, the company may deem that the test lacked sufficient “assay sensitivity.”

According to Kirsch “An assay is an analysis or assessment. So if a trial lacks assay sensitivity, it means that it is not sufficiently sensitive to analyse the effectiveness of the drug, and that therefore the study should not be counted as evidence against the new drug.”

In terms of new MAT drugs, comparisons may be made against methadone.

Sponsor Bias of Drug Studies

Intuitively, it makes sense that drug company sponsored studies would likely favor the sponsoring company’s drugs. But that conclusion does not need to be left to intuition. Kirsch cites a study by a team at the Beth Israel Medical Center in New York that “examined the outcome of clinical trials as a function of who had sponsored them.”

The study found that approximately “75 per cent of drug-company studies showed favorable results for their own drugs, but only 25 per cent of them showed favorable results for the product of a competing company.” In contrast, studies not sponsored by drug companies showed a success rate of “approximately 50 per cent.”

Short Trial Length

Kirsch mentions that short-term studies tend to be the norm for a variety of reasons. It will be important, when examining the usefulness of MAT medications, to also see the results of long-term studies in order to fully identify and understand their consequences. For example, one study referenced on this site covers longer-term effects of methadone. See Ten Years of Abstinence in Former Opiate Addicts.

Placebo ‘Run-In’ Phase

Kirsch describes the “run-in” phase as a period in which, “after people are assessed for inclusion in the trial, they are all given a placebo for a week or two. After this run-in period, the patients are reassessed, and anyone who has improved is excluded from the trial.”

Obviously, excluding people who respond to placebos is likely to bias the study in favor of the drug company, since the criterion used to judge a trial’s success is whether the drug fares better than the placebo.


Reproducibility

Reproducibility itself is not an issue raised by Kirsch but is worth mentioning. As covered elsewhere on this website, when it comes to studies examining behavior, scientific outcomes are prone to suffer from a problem with reproducibility. A study of psychological studies published in Science in 2015, entitled “Estimating the reproducibility of psychological science,” found that less than 40% of psychological studies stood up when attempts were made to replicate them. A 2016 article in Nature confirmed this finding: 1,500 scientists lift the lid on reproducibility.


The above are tactics that can be used, and apparently have been used extensively, by drug companies to improve the perceived results of drug studies. There is no reason to assume they have been used by all companies for all drugs. But there is also no reason not to use an understanding of these tactics to help analyze and validate drug study results. In fact, there has been no widespread, systemic cleanup of drug company tactics; it is far safer to assume these methods are still in full use than to suffer the consequences of blindly accepting misleading study “findings.”

Additional questions have been raised about the science surrounding MAT including drug company post-test marketing tactics and even the success criteria used to define positive outcomes. This list does not touch on those MAT-specific issues but instead focuses on general tactics and faults that can lessen the value of individual studies.

When speaking about drug company marketing, history clearly cautions us, as buyers, to beware.