Balancing Publication Bias:

It is often observed that scientific publications suffer from a "publication bias" against studies that generate negative results. A study purporting to show a link between a given chemical and a given health problem is more likely to be published than a study that finds no correlation. Similarly, a study purporting to show that a given drug helps a given medical condition is more likely to be published than a study that finds the drug is no more effective than a placebo. The result, some claim, is a subtle bias in the scientific literature. But this may be starting to change.

In today's WSJ, Sharon Begley reports (link for subscribers) on the rise of publications that specialize in publishing studies with "negative" findings.

guardians of scientific probity are fighting back. A handful of journals that publish only negative results are gaining traction, and new ones are on the drawing boards.

"You hear stories about negative studies getting stuck in a file drawer, but rigorous analyses also support the suspicion that journals are biased in favor of positive studies," says David Lehrer of the University of Helsinki, who is spearheading the new Journal of Spurious Correlations.

"Positive" means those showing that some intervention had an effect, that some gene is linked to a disease -- or, more broadly, that one thing is connected to another in a way that can't be explained by random chance. A 1999 analysis found that the percentage of positive studies in some fields routinely tops 90%. That is statistically implausible, suggesting that negative results are being deep-sixed. As a result, "what we read in the journals may bear only the slightest resemblance" to reality, concluded Lee Sigelman of George Washington University. . . .

. . . studies that dispute connections between a gene and a disease are among the most important negative results in biomedicine. They undercut the simplistic idea that genes inevitably cause some condition, and show instead that how a gene acts depends on the so-called genetic background -- all of your DNA -- which affects how individual genes are activated and quieted. But you seldom see such negative results in top journals.

Hence, Dr. Olsen's journal, which is full of studies disputing reported links between gene variations and disease. The Sod1 gene and inherited forms of Lou Gehrig's disease? Probably not. MTHFR and the age at which Huntington disease strikes? Uh-uh. PINK-1 and late-onset Parkinson's disease? No.

Hopefully, each of these reports kept researchers, including those at drug companies, from wasting time looking for ways to repair the consequences of the supposed genetic association. But it isn't clear that any would have been published without the new journal.
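
To get a rough sense of how strongly a positive-results filter can distort what appears in print, here is a toy simulation sketch. The prevalence, power, and significance numbers below are illustrative assumptions, not figures from the article: it generates a batch of hypothetical studies, "publishes" only the statistically significant ones, and reports what share of the published findings are false positives.

# Toy simulation of a literature that accepts only "positive" (significant) results.
# All numbers below are illustrative assumptions, not figures from the article.
import random

random.seed(0)
N_STUDIES = 10_000
P_TRUE_EFFECT = 0.10   # assume only 10% of tested hypotheses are actually true
POWER = 0.80           # chance a real effect produces a significant result
ALPHA = 0.05           # chance a null effect produces a (false) significant result

published_true = published_false = 0
for _ in range(N_STUDIES):
    real = random.random() < P_TRUE_EFFECT
    significant = random.random() < (POWER if real else ALPHA)
    if significant:                      # journals "accept" only positive studies
        if real:
            published_true += 1
        else:
            published_false += 1

total = published_true + published_false
print(f"Published studies: {total} out of {N_STUDIES} conducted")
print(f"Share of published findings that are false positives: {published_false / total:.0%}")

Under these assumed numbers, roughly a third of the published "positive" findings are spurious, even though every individual study was conducted honestly.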

MnZ (mail):
This problem has been well understood for centuries. The old excuse was: "It is too costly to publish every negative result." However, with the advent of lower cost publishing and the Internet, this argument no longer holds water.
9.15.2006 11:23am
jimbino (mail):
This is analogous to the problem with insurance reporting. You always read about the poor folks who suffered a disaster but were not insured; you never read about it when a non-disaster occurs and everyone has wasted all those insurance premiums. This bias accounts for the fact that people on average spend 5 times more on insurance premiums over their lifetimes than they ever collect in disaster pay-outs.
9.15.2006 11:52am
Mikeyes (mail):
This concept has been a long time in coming to fruition.

The problem of determining the efficacy of drugs is manifold, but one element is that the drug companies that fund the studies will often not allow the researchers to publish negative results. Hence, published negative results from these studies (which are the most common kind of study) are rare. As a practicing physician, I have to rely on networking and the opinions of those physicians who work for other companies. Not the best way to decide what medications work best (but not the worst either, as you get a fairly good assessment from pharmacology lists as to what works and what doesn't).

This is not a matter of no news being no news. There is a built-in bias from the researcher/drug company relationship that dominates clinical studies nowadays, and that is on top of the editorial bias against printing negative results (plus journals profit from the drug companies via journal advertising). A journal that publishes negative results is very welcome and may encourage academics and researchers to pay more attention to those results.
9.15.2006 12:08pm
sam24 (mail):
Negative results can have a profound effect, provided the subject itself has profound significance. The Michelson-Morley experiment is a good example. As academia came under ever-increasing pressure to publish as a measure of accomplishment, the volume predictably increased, with equally predictable shabby work. Who wants to publish and disprove something that has no significance in the first place? Until academia really returns to its intellectual roots, nothing of significance will change. The concept that Prof X has the academic right to be a fool is not a defensible position. He has the right to be a fool, but not as an academic right.
Perhaps the efforts that Professor Adler cites are a legitimate first step.

MD south of fly over country
9.15.2006 1:06pm
sam24 (mail):
[This problem has been well understood for centuries. The old excuse was: "It is too costly to publish every negative result." However, with the advent of lower cost publishing and the Internet, this argument no longer holds water]
MnZ
The water flows in both directions. The sheer volume increases, but to what effect? The computer can be likened to the chain saw. Time was, it took some effort to cut down a tree with an ax or crosscut saw. The result was that some thought was put into the decision as to whether to cut down this tree. Not so now. Witness what I have just done as an example.

MD south of fly over country
9.15.2006 1:22pm
A. Zarkov (mail):
Whoops, make that 1970 into 1960, as Carson was dead by 1964. But while I'm here, read what the NRDC still says about Carson today.

Carson, a renowned nature author and a former marine biologist with the U.S. Fish and Wildlife Service, was uniquely equipped to create so startling and inflammatory a book.

Silent Spring took Carson four years to complete. It meticulously described how DDT entered the food chain and accumulated in the fatty tissues of animals, including human beings, and caused cancer and genetic damage.

One of the landmark books of the 20th century, Silent Spring's message resonates loudly today, even several decades after its publication.

Her careful preparation, however, had paid off. Anticipating the reaction of the chemical industry, she had compiled Silent Spring as one would a lawyer's brief, with no fewer than 55 pages of notes and a list of experts who had read and approved the manuscript.

From Wikipedia on Silent Spring.


The book attracted hostile attention from scientists, commentators and the chemical industry. In general, her book did not receive positive reviews from the science field. One of Carson's claims was that DDT is a carcinogen. Subsequent studies have failed to prove a link between DDT and cancer.
9.15.2006 2:21pm
Jack S. (mail) (www):
The lay person is unlikely to read any of the journals where real scientists would report negative or positive information about the effect of X on Y.

The public is more likely to get wind of such negative reports through the MSM.

So maybe the problem is not with what scientists are publishing but rather with whether the MSM reports both sides and tries to provide some balance. Maybe because the MSM wants to sell magazines, newspapers, and television programs, journalists are biased toward what sells and away from what does not. Sensationalized stories about how Y died because of exposure to X sell much better to the American public than reports that X had nothing to do with Y's death.
9.15.2006 2:45pm
Tony2 (mail):
An even more effective approach is to require the "registration" of hypothesis-testing experiments before the experiment begins in order for the experiment to qualify for later publication. The registration process could be very simple, just a few paragraphs describing what you're going to do. If the results are boring, you file a paper in the "Journal of Negative Results". If the results are interesting, you get into Science or Nature. Either way, even if nothing at all gets published, you at least know that the experiment has been tried before and can contact the scientists involved.

This also helps avoid unnecessarily replicating work; if you know that your proposed experiment has been tried before and didn't generate positive results, you can move on to something else.
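
As a rough sketch of how little such a registry entry would need to contain (the field names here are hypothetical, not taken from any actual registry), it might be something like:

# Rough sketch of a minimal pre-registration record; all field names are hypothetical.
from dataclasses import dataclass, field
from datetime import date
from typing import List, Optional

@dataclass
class Registration:
    investigators: List[str]
    hypothesis: str           # what you expect to find
    design: str               # a few sentences on how you will test it
    outcome_measure: str      # what you will measure, fixed before data collection
    registered_on: date = field(default_factory=date.today)
    result: Optional[str] = None   # filled in later: "positive", "negative", or "abandoned"

# Example entry (hypothetical study):
entry = Registration(
    investigators=["A. Researcher"],
    hypothesis="Variant of gene X is associated with disease Y",
    design="Case-control comparison of variant frequency in 500 patients vs. 500 controls",
    outcome_measure="Odds ratio for the variant, with 95% confidence interval",
)
print(entry)

The only essential discipline is that the hypothesis and outcome measure are recorded before the data come in, so a later negative result can't simply vanish.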
9.15.2006 2:52pm
Bryan DB:
Isn't this just a problem with "proving a negative"? Given that these are (almost?) always statistical studies, how do you know that "not related" really means no relationship, as opposed to "not measurable by this technique"? At least with a positive correlation, you can see that the measurement technique gives a result that can be expressed as a correlation, outside the effects of false positives.
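
One way to see the "not measurable by this technique" worry concretely is a quick power calculation. The effect size and sample sizes below are made-up, purely for illustration: a real but modest effect studied with too small a sample comes back "not significant" most of the time.

# Toy power calculation: a real but modest effect is often missed by a small study.
# The effect size, variability, and sample sizes below are made-up for illustration.
from math import sqrt, erf

def phi(z):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1 + erf(z / sqrt(2)))

def power_two_sample(diff, sd, n_per_group, z_crit=1.96):
    """Approximate power of a two-sided, two-sample z-test at alpha = 0.05."""
    se = sd * sqrt(2 / n_per_group)
    return phi(diff / se - z_crit)

# A genuine difference of 0.2 standard deviations between two groups:
for n in (25, 100, 400, 1000):
    print(f"n = {n:4d} per group -> power about {power_two_sample(0.2, 1.0, n):.0%}")
# With n = 25 per group, this real effect is declared "not significant" roughly
# nine times out of ten -- a negative result that says more about the study's
# sensitivity than about the underlying relationship.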
9.15.2006 2:58pm
NRWO:
The frequency of publication bias (in favor of positive results) may be journal or discipline specific, See, e.g., http://jama.ama-assn.org/cgi/content/abstract/287/21/2825. The linked study indicates no bias in favor of positive results for clinical trials submitted to and published in JAMA.
9.15.2006 3:04pm
Dr. T (mail) (www):
A related problem is when studies with mostly negative results are spun to look positive. Yesterday's New England Journal of Medicine had an article on the use of coronary artery shunts inserted immediately after myocardial infarction (heart attack). Shunts could be plain or impregnated with a drug that inhibits blood clotting. In this big study, the post-shunt death rates and reinfarction rates were nearly identical (slightly lower for plain shunts) but post-shunt revascularization rates (an indicator that the shunt is getting blocked) were much higher for plain shunts. The authors somehow combined these results and concluded that the drug-impregnated (and far more expensive) shunts were beneficial. That's not the conclusion I drew from the study.
9.15.2006 6:33pm
JerryW (mail):
The problem of determining the efficacy of drugs is manifold, but one element is that the drug companies that fund the studies will often not allow the researchers to publish negative results.


In 16 years of directly working for Big Pharma sponsoring studies and 20 years of actually doing such studies while in private practice, never once have I asked or been asked not to publish any result. AAMOF, each protocol specifically states that the decision of whether or not to publish lies with the investigator.

It is just that most investigators find that publishing negative results is not very emotionally satisfying and certainly won't engender awe from your colleagues.
9.15.2006 6:41pm
Fub:
JerryW wrote:
In 16 years of directly working for Big Pharma sponsoring studies and 20 years of actually doing such studies while in private practice, never once have I asked or been asked not to publish any result.
Would that our own government were so scrupulous. This item from yesterday's news isn't about pharma, but it arguably could be construed as an unpublished negative result if one framed the hypothesis in the negative:

Updated: 12:08 a.m. PT Sept 14, 2006

WASHINGTON - The Federal Communications Commission ordered its staff to destroy all copies of a draft study that suggested greater concentration of media ownership would hurt local TV news coverage, a former lawyer at the agency says.
9.15.2006 8:24pm
cathyf:
It's not just a publication bias; I see this same bias in the practice of medicine. As a computer programmer, I'm used to diagnosing and fixing problems in software. There is a whole procedure you follow. You think of possibilities for what could be causing the symptoms you see. You think about how to test for different things. You rank things from most promising to least promising, where "promising" is a combination of how easy it is to rule out and how likely it is that it's the problem. Then you dive in and start looking. As you collect information, you constantly revise your ideas of how to proceed -- some things you can rule out completely, others suddenly get much more likely. You keep going until you find the problem, or you fail and give up.

Contrast this with the practice of medicine. A patient comes in and reports symptoms. You come up with the "most promising" thing to look for, and you order a medical test. If the test shows the problem, the lab contacts you immediately and you contact the patient immediately and get them back in to start treatment. On the other hand, if your first guess is wrong and the test is negative, somehow the results go nowhere. The patient is completely forgotten, and unless he/she goes through the maze of gatekeepers to make another appointment, the entire diagnostic process is over. So if you are a patient, you had better find the doctor who guesses right on the first try.

Also, the whole tradeoff between the cost and risk of the test itself and the probability of its being right is gone -- you have to go for the highest probability if you only have one shot. Imagine a symptom where there is a 20% chance that the problem is some disease that can be detected or ruled out with a $10 blood test, and an 80% chance that it is a disease that can only be detected or ruled out with a $1000 test with a 1% chance of significant morbidity/mortality. In a medical world where you pay attention to negative results and automatically keep looking for the answer when you get one, you would do the $10 test first and only go on to the $1000 test when the $10 test is negative. But in the "one chance" world you have to start with the $1000 test because you have to put your money on the 80% probability.
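
Taking those numbers at face value, the back-of-the-envelope comparison looks like this (a quick sketch using only the probabilities and prices given above):

# Back-of-the-envelope comparison of the two strategies, using only the
# probabilities and prices given in the comment above.
P_CHEAP_DISEASE = 0.20      # chance the $10 test's disease is the culprit
CHEAP_COST = 10
EXPENSIVE_COST = 1000
EXPENSIVE_MORBIDITY = 0.01  # risk of significant harm from the $1000 test

# Strategy A: cheap test first, expensive test only if the cheap one is negative.
cost_a = CHEAP_COST + (1 - P_CHEAP_DISEASE) * EXPENSIVE_COST
risk_a = (1 - P_CHEAP_DISEASE) * EXPENSIVE_MORBIDITY

# Strategy B ("one shot"): go straight to the expensive, higher-probability test,
# and the 20% of patients with the cheap-test disease are never diagnosed.
cost_b = EXPENSIVE_COST
risk_b = EXPENSIVE_MORBIDITY
missed_b = P_CHEAP_DISEASE

print(f"Sequential: expected cost ${cost_a:.0f}, morbidity risk {risk_a:.1%}, no diagnoses missed")
print(f"One shot:   expected cost ${cost_b:.0f}, morbidity risk {risk_b:.1%}, "
      f"{missed_b:.0%} of these patients never diagnosed")

The sequential strategy comes out cheaper and safer in expectation, and it never leaves the patient undiagnosed; the "one chance" strategy only looks attractive if the negative result from the cheap test would be thrown away.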
9.15.2006 9:13pm
Steve McKay (mail):
I don't really understand the point of your post, cathyf. "Maze of gatekeepers" hasn't been my experience, and I'd certainly hope the doctor doesn't bug me about scheduling a follow-up. I can find the phone just fine, and I don't need any more people making it ring when I'm trying to put my daughter to sleep.
9.16.2006 12:39am
Steven Jens (mail) (www):
I'm curious about early commenter Jimbino's statistic that "people on average spend 5 times more on insurance premiums over their lifetimes than they ever collect in disaster pay-outs." The reviews of the property-casualty industry that I've seen suggest loss ratios around 70%, rather than 20%. Progressive auto insurance (see page 23) shows recent loss ratios of 65.8% and 69.1%, and they aren't horribly mismanaged. Is this not very representative of insurance generally?
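
For what it's worth, the implied arithmetic is simple (a quick sketch using the figures just cited; a 5-to-1 premiums-to-payouts ratio corresponds to a 20% loss ratio):

# Converting loss ratios into "premiums paid per dollar of claims collected".
# A 5-to-1 premiums-to-payouts ratio would correspond to a 20% loss ratio.
for loss_ratio in (0.20, 0.658, 0.691, 0.70):
    print(f"loss ratio {loss_ratio:.1%} -> ${1 / loss_ratio:.2f} in premiums per $1 of claims")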
9.16.2006 8:54pm