One such lead popped up in 2012 in a study by Chen-Yu Zhang of Nanjing University and colleagues about the cross-kingdom transfer of microRNA from plants to mammals. Essentially, the paper showed that a microRNA in rice could regulate genes in the liver of mice that had eaten the rice (Cell Research, 22:107-26, 2012). “It was a huge thing,” Marshall recalls. Immediately, thoughts of transgenic, therapeutic crops came to mind, and his team set about trying to reproduce the results.
But Marshall’s group, in collaboration with scientists from Monsanto, was unable to reproduce Zhang’s results, and the researchers concluded that the published findings must have stemmed from a nutritional imbalance caused by the experimental diet fed to the mice. Although Marshall had contacted Zhang at the outset, his team worked independently and published its results in Nature Biotechnology last year (31:965-67, 2013).
To Marshall, the case seemed fairly cut and dried: one group had published some research that pointed to one conclusion, and another group, his own, had subsequently published research that questioned that conclusion. But the experience was not without repercussions. The failed replication sparked a flurry of accusations that Monsanto’s involvement in Marshall’s study may have biased the results, along with criticism of what some have framed as the company’s oppressive relationship with scientists.
Moreover, Zhang says, Marshall’s experimental design was flawed, and the positive attention given to Marshall’s study in a Nature Biotechnology editorial was “unfair and unprofessional.” In the end, it hurt “the nascent field of extracellular RNA and tarnished my own reputation,” he says.
Although Marshall intended none of this, replication studies often cause a kerfuffle—whether publicly in letters to the editor or in blog posts, or silently in bruised egos or hurt feelings. In recent years, there has been a surge in interest in what some have called a “reproducibility crisis” in science—from high-profile analyses exposing that few cancer studies can be replicated to a massive, crowdsourced effort to repeat studies in social psychology. Addressing this crisis has yielded a greater appreciation for replication projects, yet their side effects raise the question: What is the best way to go about replicating the work of others?
Engaging the original authors
Zhang is not alone in feeling slighted by the handling of a replication attempt. As part of the large, crowdsourced initiative in social psychology called the Reproducibility Project: Psychology, coordinated by the Charlottesville, Virginia–based Center for Open Science, researchers failed to replicate the findings of a study on cleanliness and people’s moral judgments by Simone Schnall, a senior lecturer at the University of Cambridge in the U.K. As Schnall has described it, tweets, e-mails, and a blog post announcing the failure were broadcast to the field before she had a chance to address the discrepancy. “I feel like a criminal suspect who has no right to a defense and there is no way to win,” Schnall wrote of her experience in a blog post last spring: “The accusations that come with a ‘failed’ replication can do great damage to my reputation, but if I challenge the findings I come across as a ‘sore loser.’”
To avoid such unfair judgments, Nobel Prize–winning psychologist Daniel Kahneman earlier this year proposed a new etiquette for replication. He suggested steps that replicating labs can take to avoid what he calls adversarial replication. These include contacting the original lab and discussing the protocol; inviting the original author to comment on the proposed replication experiments; discussing any amendments to the protocol; and allowing reviewers to read the correspondence. “The rules are designed to motivate both author and replicator to behave reasonably even when they are thoroughly irritated with each other,” Kahneman wrote in a commentary outlining his suggested guidelines, posted to Scribd in May.
Zhang agrees that Marshall’s replication attempt would have been handled better had the group discussed the experimental details with him.
Kahneman’s model for replication represents just one of many ways labs can go about trying to reproduce the work of others. In fact, some researchers don’t agree that it’s always beneficial to fully involve the original lab in a replication attempt. Marshall at miRagen says his group will sometimes contact the original authors to ask for help if information is missing from the paper, “but we really want to be able to do this completely separately and independently.” For his firm to invest resources into a technology or a procedure, Marshall wants to be reassured that the findings are robust enough to hold up in an independent lab.
Tim Errington, a project manager at the Center for Open Science, agrees: “I would make the argument that you can learn a lot from not contacting the authors,” such as whether there’s sufficient information in the paper to follow a protocol.
Andy Marshall, chief editor at Nature Biotechnology, says replication attempts do not have to follow one standard method to offer valuable insight. “Whether one sits in the lab with the researcher who originally did the experiments side-by-side or goes away and tries to distill everything from the paper and does everything independently are two sides of the same coin,” he says. In other words, replication can take myriad forms, and that’s a good thing.
Which strategy will prove most fruitful remains to be seen. “We still really don’t know what the best way of addressing this issue is,” says Sean Morrison, a senior editor at eLife and the director of the Children’s Medical Center Research Institute at University of Texas Southwestern. “All of these efforts are experiments, and it remains to be seen how the experiments play out.”
Large-scale replication
A companion effort to the Reproducibility Project: Psychology, called the Reproducibility Project: Cancer Biology, will soon begin to roll out the results of a massive set of replication attempts. The largest coordinated effort of its kind, the cancer project is attempting to redo the main experiments from the 50 most-cited papers in cancer biology published from 2010 to 2012. Over the coming months, results from the project will be published in eLife.
The Reproducibility Project: Cancer Biology hews closely to Kahneman’s proposal with regard to the original authors’ participation. For each paper, the replication protocol goes through peer review before the experiments ever start, and an original author is always among the reviewers. After the work is complete, the data are posted, and the final manuscript is again peer-reviewed before publication.
“We’re completely open and transparent,” says Errington. In addition to identifying potential discrepancies in the protocol, Errington says, it’s essential to get the original authors on board to source materials necessary for the experiments. For instance, reagents, cell lines, or animal models may have been custom-made for the project by the original lab, and without access to them, the project could hit a wall.
Errington says that, for the most part, the authors have cooperated with the project’s replication efforts. Ari Melnick of Weill Cornell Medical College, for one, was pleased his paper was chosen. “They’re choosing papers that are highly visible,” he says. “I felt a bit of pride.” Melnick also expects that his paper will be easy to reproduce; he makes it a point to include as much information as possible to help other labs replicate his results or otherwise make use of his work. Still, he says, it’s valuable to engage the original authors in a replication attempt, especially when journals’ space is limited, “because there are always things that get lost in communication, no matter how hard you try.”
Elizabeth Iorns, the cofounder and CEO of Science Exchange, which is coordinating the experiments conducted as part of the Reproducibility Project: Cancer Biology, says the tone of the project’s approach has kept everybody focused on the data, rather than on the scientists. “We’re not saying if a replication fails they did anything wrong; it’s just the results we got,” she says. In fact, the new results are not meant to replace the old ones, but to add to them. “I think because of our attitude about it, [authors] have been really open and engaged in the process,” says Iorns.
Remember to share
Once the replication results are in, the next challenge is to get them published. Journals traditionally have not been keen on using up pages to print repeat experiments. But more and more, publications are recognizing the value of replications and agreeing to publish such studies. When publishing Bill Marshall’s repeat of Zhang’s experiments, Nature Biotechnology used it as an example of the journal’s receptive stance on publishing replication attempts, while taking the opportunity to hurl a jab at Cell Research for turning down Marshall’s paper. (The editors wrote in an editorial that “the best practice is to publish such replication failures in the journal where the original findings were published.”)
Mark Patterson, the executive director of eLife, says the Reproducibility Project: Cancer Biology is a self-contained exercise in the value of replication, and does not dictate the journal’s future policy for publishing such studies. “We’re not viewing this as every replication henceforth we’ll publish in eLife,” he says. “If someone does a repeat of a piece of work which has been previously published but it’s a really important replication, or challenge to an existing piece of work, then that will be judged on its own merits.”
In sharing replication results, it’s also important to keep one’s ego out of the process, Morrison says, and to have “humility before the data.” Divorcing oneself from one’s data may be difficult, however, and as Iorns also points out, there’s a tremendous pressure to maintain a pristine public profile and not alert the world to problems that may arise. But as replication attempts become more commonplace, correcting the literature or adding conflicting data to the discussion may become less acrimonious.
Indeed, Melnick argues that being open to others replicating your work—and being open to adjusting your understanding of the biology, if need be—reflects positively on you as a scientist. “I think it enhances your reputation to be transparent and committed to doing the right thing in science,” he says. “If a result you have doesn’t hold up, in the end it’s one of thousands of results. Keep that in perspective, then we all benefit.”
Correction (November 4, 2014): The original version of this story referred to the Reproducibility Project: Cancer. The correct name of the initiative is Reproducibility Project: Cancer Biology. The Scientist regrets the error.