Finding value in the knowledge generated from life science research

 

Research knowledge: Disentangling fact from fiction

 

Submitted by LAK

LAK was not surprised, but nonetheless disappointed and even shocked, by what Tom Siegfried had to say in his article in Science News (March 27, 2010; Vol. 177, No. 7). He points out the all-too-frequent failures in research design and analysis of results, and the inappropriate conclusions that are drawn in much of the life science research literature. He reaches his conclusions based on evidence and a cool evaluative perspective, providing strong support for the claims that “countless conclusions in the scientific literature are erroneous, and tests of medical dangers or treatments are often contradictory and confusing” and that “in modern research, false findings may be the majority or even the vast majority of published research claims.” False knowledge is worse than no knowledge.

A great deal of public and private money is spent in support of life science research. For example, the NIH research budget is about 30 billion dollars a year. I would not be surprised if upwards of 500 billion dollars a year is spent on life science research worldwide (with a major chunk of that devoted to mind-brain themes).

What we get for this money is disappointing, not so much in the quantity of data and published articles as in the quality of the reported science. Individual scientists have a tough time obtaining funds and feel that, to be competitive, they must work and publish quickly and then publicize their work as important and cutting edge. Nevertheless, they should in general be more careful and circumspect about their research findings and the significance of their work. What is needed is more quality and less flash.

One of several examples of bogus science reported in Siegfried’s article left me particularly breathless. Siegfried starts off by pointing out that “statistical significance is not always statistically significant.” Rather than provide a synopsis, I will quote what he had to say about how a failure to think carefully about one recent set of findings led to absurd conclusions that made headlines in many news stories just a year ago.

“It is common practice to test the effectiveness (or dangers) of a drug by comparing it to a placebo or sham treatment that should have no effect at all. Using statistical methods to compare the results, researchers try to judge whether the real treatment’s effect was greater than the fake treatment’s by an amount unlikely to occur by chance.” But common statistical analysis is not a substitute for common sense. “A real-life example arises in studies suggesting that children and adolescents taking antidepressants face an increased risk of suicidal thoughts or behavior. Most such studies show no statistically significant increase in such risk, but some show a small (possibly due to chance) excess of suicidal behavior in groups receiving the drug rather than a placebo. One set of such studies, for instance, found that with the antidepressant Paxil, trials recorded more than twice the rate of suicidal incidents for participants given the drug compared with those given the placebo. For another antidepressant, Prozac, trials found fewer suicidal incidents with the drug than with the placebo. So it appeared that Paxil might be more dangerous than Prozac.

“But actually, the rate of suicidal incidents was higher with Prozac than with Paxil. The apparent safety advantage of Prozac was due not to the behavior of kids on the drug, but to kids on placebo: in the Paxil trials, fewer kids on placebo reported incidents than those on placebo in the Prozac trials. So the original evidence showing a possible danger signal from Paxil but not from Prozac was based on data from people in two placebo groups, none of whom received either drug. Consequently, it can be misleading to use statistical significance results alone when comparing the benefits (or dangers) of two drugs.”
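To make the arithmetic behind this flip concrete, here is a minimal sketch in Python using invented counts. Siegfried’s article does not report the actual trial numbers, so every figure below is a hypothetical chosen only to reproduce the pattern he describes:

```python
# Hypothetical arm data: (suicidal incidents, participants).
# These counts are invented for illustration; they are not the trial data.
paxil_drug, paxil_placebo = (10, 1000), (4, 1000)
prozac_drug, prozac_placebo = (12, 1000), (14, 1000)

def rate(arm):
    incidents, n = arm
    return incidents / n

# Within-trial comparisons, the basis of the headlines:
print(rate(paxil_drug) / rate(paxil_placebo))    # 2.5: "Paxil more than doubles risk"
print(rate(prozac_drug) / rate(prozac_placebo))  # ~0.86: "Prozac looks protective"

# Direct comparison of the two drug arms themselves:
print(rate(paxil_drug), rate(prozac_drug))       # 0.010 vs. 0.012: the Prozac arm is higher

# The Paxil "danger signal" is driven by its unusually quiet placebo group,
# that is, by people who never received either drug.
```

The within-trial ratios point one way while the drug arms themselves point the other, which is exactly the trap: each drug was compared only against its own placebo group, and the two placebo groups behaved differently.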

This is marvelous theatre of the absurd played out on a science stage.
