“An expert is one who knows more and more about less and less until he knows absolutely everything about nothing.”

On Site: Registering Clinical Trials Can Make Positive Findings Disappear


The ClinicalTrials.gov registry, introduced in 2000, appears to have had a noticeable impact on the proportion of heart disease trials reporting positive treatment effects, according to a recent PLoS ONE study. A 1997 U.S. law required the registry's creation and, beginning in 2000, obligated researchers to record their trial methods and outcome measures before collecting data. Researchers found that, in a sample of 55 large randomized controlled trials funded by the U.S. National Heart, Lung, and Blood Institute (NHLBI), 57% of those published before 2000 (17 of 30) reported positive effects from the treatments studied. Between 2000 and 2012, however, that figure sank to just 8% (2 of 25), with the majority of trials reporting null findings.

The dramatic change suggests that prospectively declaring outcomes and adopting transparent reporting standards, as required by ClinicalTrials.gov, "may have contributed to the trend toward null findings," wrote Veronica L. Irvin, a health scientist at Oregon State University, and her co-author Robert Kaplan, chief science officer at the Agency for Healthcare Research and Quality in Rockville, Md. They are the authors of the PLoS ONE study, "Likelihood of Null Effects of Large NHLBI Clinical Trials Has Increased over Time."

In examining the results of the 55 large trials, the authors found no change in the proportion of trials that compared treatment to placebo versus an active comparator. They also ruled out sponsorship as an explanation, but did see a change after the registration requirement went into effect in 2000. "Industry co-sponsorship was unrelated to the probability of reporting a significant benefit," they wrote. "Pre-registration in ClinicalTrials.gov was strongly associated with the trend toward null findings."

The study also noted that industry co-sponsorship was not always reported for trials published between 1970 and 2000, as medical journals did not uniformly require disclosure. Where disclosures were available, a closer look revealed a financial consulting relationship between at least one author and industry in all of the cases. "Industry influence would produce a bias in favor of positive results, so connections between investigators and industry is not a likely explanation of the trend toward null results in recent years," the study states.

The remaining explanation for the trend toward null reports is that none of the trials were prospectively registered prior to 2000. After that year, all large NHLBI trials were registered prospectively in ClinicalTrials.gov, with statements about the primary and secondary outcome variables. Having to state their methods and measurements before launching a trial makes it more difficult for investigators to selectively report some outcomes and exclude others once the study is over.

Another possible explanation for the decreasing rate of positive results is that medical care and supportive therapy have improved since 2000, making treatment effects harder to demonstrate because new approaches must compete with higher-quality standard care. "We do recognize that the quality of background cardiovascular care continues to improve, making it increasingly difficult to demonstrate the incremental value of new treatments," the study states. "The improvement in usual cardiovascular care could serve as an alternative explanation for the trend toward null results in recent years."

The authors stressed that their analysis is limited to large NHLBI-funded trials of cardiovascular outcomes in adults and does not support causal inferences about other clinical trials.

One reaction to the study raised concerns that similar problems affect other, less rigorous analyses, producing many false positives in the literature, a problem exacerbated by the current pressure in academia to publish. "This study should be a wake-up call," wrote Steven Novella, M.D., a neurologist at Yale University, in his NeuroLogica Blog. He called the study "encouraging" but also "a bit frightening" because it casts doubt on previous positive results. "Loose scientific methods are leading to a massive false-positive bias in the literature," wrote Novella. "I do not go as far to say that science is broken. In the end it does work, it just takes a lot longer to get there than it should because we waste incredible resources and time chasing false positive outcomes."


Ronald Rosenberg
This news story was featured in CenterWatch Weekly, one of several newsletters published by CenterWatch, the global source for clinical trials information, timely news, in-depth analysis, study grant and career opportunities, and the largest listing of industry-funded clinical trials on the Internet. For more information, visit http://www.centerwatch.com.

