The New York Times reported last week that 59,000 people died from drug overdoses in 2016, in the latest sign that America’s prescription painkiller epidemic is only getting worse. Yet the more shocking news about the scourge of opioids came a few days earlier, in a note published in the New England Journal of Medicine by a team of researchers in Canada. That note shows how a tiny blurb that first appeared in the journal’s January 1980 issue helped reshape—and distort—conventional wisdom on pain management, tilting doctors in favor of giving out addictive drugs.

I was in college around this time and recall how compelling this evidence seemed to be: "Did you see that recent study . . . "
Back in 1979, Boston University Medical Center researchers Jane Porter and Hershel Jick found that just a handful of the patients who’d been treated with narcotics at a set of six hospitals went on to develop drug dependencies. Their single-paragraph summary of this result would be published as a letter to the editor in the NEJM under the heading, “Addiction Rare in Patients Treated With Narcotics.”
According to the recent correspondence in NEJM, this single paragraph was cited hundreds of times in the 1990s and 2000s to support the claim that prescription painkillers weren’t that addictive. It was during this period that doctors started treating pain much more aggressively than they had before and handing out potent drugs with little circumspection. (For a good history of the changing use of painkillers, see this piece in Vox.)
The original paragraph from Porter and Jick, just 101 words in all, read as follows:
Recently, we examined our current files to determine the incidence of narcotic addiction in 39,946 hospitalized medical patients who were monitored consecutively. Although there were 11,882 patients who received at least one narcotic preparation, there were only four cases of reasonably well documented addiction in patients who had no history of addiction. The addiction was considered major in only one instance. The drugs implicated were meperidine in two patients, Percodan in one, and hydromorphone in one. We conclude that despite widespread use of narcotic drugs in hospitals, the development of addiction is rare in medical patients with no history of addiction.

Most citations of this note seemed to overlook its narrow scope. The blurb gives a rate of drug addiction for patients with restricted access to the drugs in question and no stated definition of what it means to be “addicted.” Thirty-seven years ago, when Porter and Jick’s letter first appeared, opioids were carefully controlled; the patients they described may have taken painkillers while they were at the hospital, but they weren’t going home with them. That meant the addiction rate they found (four in 11,882) had little bearing on the more important question—now sadly resolved—of whether it’s safe to prescribe opioids to patients outside the hospital setting. Despite these limitations, the stature of this tiny research project seemed to only grow as time went on, like a scholarly fish tale. In 1990, Scientific American described Porter and Jick’s paragraph as “an extensive study.” By 2001, Time had promoted it to the status of “a landmark study.”
Engber goes on to discuss many other instances where researchers start with only a partial understanding of a study’s circumstances, provenance, or quality; assume the level of quality they wish it had; and then use it to buttress whatever point they are trying to make.
In academic publishing, references are meant to buttress arguments, establish facts, and dole out credit where it’s due. In practice, they often do the opposite, hiding more than they show. In a disturbing and delightful series of papers on this topic, Norwegian social anthropologist Ole Bjorn Rekdal has shown how easily and often citations are abused. When you try to trace the provenance of any given, referenced fact—on addiction rates, for example—you may well find yourself tangled in a nest of secondary sources, with each paper claiming to have pulled the fact from another. These daisy-chained citations make it very hard—and at times impossible—to locate original source material. They also lead to a game of research telephone, in which the context of a fact gets stripped away, and its meaning morphed as it gets transmitted from one citation to the next.

It is the same with quotations. Try to find an actual source for a quotation and verify that the person really said it; it is astonishing how many unsourced and inaccurate quotes people happily share.
“Most scientists probably have some awareness [of] quotation error and citation copying in the scientific literature,” Wetterer wrote, “but I believe few have much appreciation for how common or important these problems may be.” He goes on to summarize some broader surveys of the problem: One study compared more than 1,000 direct quotes in scholarly papers with their original sources and found that 44 percent contained at least one mistake; another looked at how mistakes like typos propagate through bibliographies and concluded that at least 70 percent of all scientific citations are copied from the bibliographies of other secondary sources.

And technology, which should be a solution, is, so far, merely an amplifier of cognitive pollution.
Other researchers have found error rates as high as 67 percent in the journals of specific fields. Rekdal notes that entire books are routinely cited as the source for specific facts, without the help of page numbers. “At times, I get the feeling that references have been placed in quantities and with a degree of precision reminiscent of last minute oregano flakes being sprinkled over a pizza on the way to the oven,” he writes.
Meanwhile, the same digital tools that might be used to clean up the literature also make it easier to rank scholars according to their “bibliometrics”—i.e., the number of times their papers have been cited in the literature. This, in turn, incentivizes researchers to use (and abuse) their bibliographies as a way of advancing their careers. I mean, why not stuff your paper full of vacuous pointers to your own work or that of your colleagues?

Transparency, consequences, and a marketplace of ideas. It sounds so easy, but we are a long way from the ideal.
The stuffing of journals with trash citations has clogged a vital channel of scientific communication, by overwhelming useful references with those that are trivial, inaccurate, or plagiarized. The recent flap over Porter and Jick’s paragraph from 1980 shows how this knowledge jamming can even, in some cases, be a matter of life or death.