Wednesday, September 25, 2013

Publication Bias



Something really struck me this past week after reading the article by Fewtrell et al. (2005), which used meta-analysis to compare different health interventions: the authors' mention of possible publication bias, and the possibility that interventions producing negative or non-significant results were never submitted or published.

When we conduct research or write an article that involves analyzing data or testing the effectiveness of an intervention, we are usually looking for significant evidence to support our alternative hypothesis, or for a positive outcome from the intervention. Not only do we want to be correct in our assumptions, but we know that the likelihood of our research being published, or being well received at a conference, depends heavily on whether our findings are significant. This isn't an irrational fear: numerous studies have shown that statistically significant results are more likely to be published (Dirnagl & Lauritzen 2010; Hopewell et al. 2009; Dwan et al. 2008; Rothstein et al. 2005; Weber et al. 1998). Further, in clinical trials, negative results take longer to reach publication (8-9 years) than positive results (4-5 years) (Hopewell et al. 2009).

Not only is this an issue for studies that conduct meta-analyses of interventions on a given topic, such as the one we read in class last week (Fewtrell et al. 2005), but what about the important information that null, or even negative, results could pass on to other researchers? Altman and Bland (1995) stated that "absence of evidence is not evidence of absence," and I think that is absolutely true.
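To make this concrete, here is a quick simulation sketch I put together in Python (purely illustrative; the true effect size, sample sizes, and the "publish only if positive and p < 0.05" rule are my own assumptions, not drawn from any of the papers cited here). It simulates many small two-arm trials of an intervention with a modest true effect, "publishes" only the positive, statistically significant ones, and compares the average published effect with the truth:

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

true_effect = 0.2   # assumed true standardized mean difference
n_per_arm = 50      # participants per arm in each hypothetical trial
n_trials = 2000     # number of simulated trials

all_effects, published_effects = [], []
for _ in range(n_trials):
    control = rng.normal(0.0, 1.0, n_per_arm)
    treated = rng.normal(true_effect, 1.0, n_per_arm)
    _, p = stats.ttest_ind(treated, control)      # two-sample t-test for this trial
    effect = treated.mean() - control.mean()
    all_effects.append(effect)
    if p < 0.05 and effect > 0:                   # "publish" only positive, significant trials
        published_effects.append(effect)

print(f"True effect:               {true_effect:.2f}")
print(f"Mean effect, all trials:   {np.mean(all_effects):.2f}")
print(f"Mean effect, published:    {np.mean(published_effects):.2f}")
print(f"Share of trials published: {len(published_effects) / n_trials:.0%}")

With these made-up settings, only a minority of the simulated trials get "published," and their average effect comes out well above the true 0.2, which is exactly the kind of distortion a meta-analysis built only on the published literature would inherit.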

I came across an article specifically addressing the shortcomings of, and the theoretical and context-specific reasons for the failure of, a performance-based contracting pilot implemented in Uganda (Ssengooba et al. 2012). Why do we not see more articles like this? In the discussion section, the authors note how many successful case studies of PBC (performance-based contracting) can be found in the literature, and how little attention this Uganda case has received. They mention how long the paper took to be considered for publication, and they attribute the lack of attention to this unsuccessful case to its results differing from those of countries such as Rwanda and Cambodia, where PBC appeared to succeed (Ssengooba et al. 2012).

How can we contribute to real progress in academic fields if negative results are deemed unworthy (or less worthy) of publication? This not only discourages researchers and scientists from trying to publish finished studies that showed non-significant results or an ineffective intervention; it also withholds information that could alert fellow researchers to the possibility of different outcomes. I think this is especially important for international/global health researchers, as we have learned that different countries face different structural and cultural issues that may render an intervention ineffective.

Failures or varying levels of success in health interventions should be known and taken into account when designing an intervention, but that is nearly impossible when we cannot access that information. I believe an effort should be made, not only by journal editors and reviewers but also by authors and researchers, to reconsider the value of mixed or negative findings and to find the courage to submit and publish that work anyway.

What do you guys think?

 

References

Altman D, Bland M. 1995. “Absence of evidence is not evidence of absence.” Br Med J 311:485.

Dirnagl U, Lauritzen M. 2010. "Fighting publication bias: introducing the Negative Results section." Journal of Cerebral Blood Flow & Metabolism 30:1263-1264.

Dwan K, Altman DG, Arnaiz JA, Bloom J, Chan AW, Cronin E, Decullier E, Easterbrook PJ, Von Elm E, Gamble C, Ghersi D, Ioannidis JP, Simes J, Williamson PR. 2008. "Systematic review of the empirical evidence of study publication bias and outcome reporting bias." PLoS One 3:e3081.

Fewtrell L, Kaufmann R, Kay D, Enanoria W, Haller L, Colford J Jr. 2005. "Water, sanitation, and hygiene interventions to reduce diarrhoea in less developed countries: a systematic review and meta-analysis." Lancet Infect Dis 5:42-52.

Hopewell S, Loudon K, Clarke MJ, Oxman AD, Dickersin K. 2009. "Publication bias in clinical trials due to statistical significance or direction of trial results." Cochrane Database of Systematic Reviews 2009, Issue 1.

Rothstein HR, Sutton AJ, Borenstein M, eds. 2005. Publication Bias in Meta-Analysis: Prevention, Assessment and Adjustments. Wiley, Chichester, England.

Ssengooba F, McPake B, Palmer N. 2012. "Why performance-based contracting failed in Uganda - an 'open-box' evaluation of a complex health system intervention." Social Science & Medicine 75:377-383.

Weber EJ, Callaham ML, Wears RL, Barton C, Young G. 1998. “Unpublished research from a medical specialty meeting: why investigators fail to publish”. JAMA 280:257–9.

3 comments:


  1. I completely agree that studies that do not show significant results should have the same opportunity to be published as studies that do. I believe there is still something to be taken away and learned from studies with non-significant results. Non-significant does not mean unimportant. Perhaps as researchers we can look at a study, examine what it did or did not find, and build new studies from that. Every study is evidence of something, whether it is that a relationship or association exists or that one does not, and we can still use that information to further our knowledge. We have to know what does not exist before we can pursue what does. We must learn from past studies, and if we only have access to studies that found significant results, then we are only learning part of the whole story; in other words, our learning has become biased whether we are aware of it or not.

  2. From my biochemistry experience, in order to publish, principal investigators (PIs) have to pay a publication fee (I'm not sure of the exact numbers, as it depends on the journal, its impact factor, AND whether there are color figures). With the recent NIH funding cuts, you can imagine how much more stringent PIs have become with their grant money, and many of them would rather you do research so they can get more proposals funded. What I'm getting at is that some failed experimental results do eventually make it into the literature, but they're usually summarized in a sentence as part of a larger discussion of the results that worked (with no details on the experimental procedure, because it's not part of the meat of the paper), so that information gets lost. One answer to this is to get rid of publications altogether and start a blog- or Wikipedia-like site where everyone (regardless of topic area) can submit their good or bad data/findings, while still keeping the reviewer system. As far as I know, information about what didn't work can be obtained through unpublished results (i.e., graduate theses), although since more and more universities make theses available online, PIs are reluctant to have their students include unpublished results in them for fear of getting scooped.

    I agree that in order to progress forward we all need to know what worked and what didn't so no one is wasting time (or money) repeating failed attempts. I think the hardest part about public/international health is that change does not happen overnight so these failed experiments of which we speak are discovered years later. Additionally, it is human nature to want to be praised for a successful intervention; no one wants to be recognized for a failure!

  3. This is especially concerning considering the waste that occurs in global health. When organizations implement programs without being informed by research on ineffective programs, they lose the opportunity for feedback and hypothesis testing about how to improve upon and overcome previous failures, and they forgo a chance to prevent the waste of resources that seems to plague most global health endeavors. I wonder how many organizations have tried the same (or similar) approach with equal or lesser results, only to eventually improve their programs by learning from their own mistakes instead of those of others. Could they have gotten it right the first time if informed by previous failures or marginal victories? If only the "successes" of global health are published, does this establish them as the "best-methods" approach, implying that no further research is necessary? Or should research perhaps be published (perhaps in different formats, as articulated by Aurelie) based on the relevance and insightfulness of the article rather than the "significance" of the results? As pointedly mentioned by Lindsey, this severely limits the dissemination of knowledge and creates a system of redundancy and waste. Should not articles that have the potential to inform and improve the performance of global health be disseminated, to improve the working knowledge base of the discipline? Surely, there is significance in that.

