Crispr’s Next Big Debate: How Messy Is Too Messy? – WIRED

Posted: June 4, 2017 at 11:45 pm


When it comes to Crispr, the bacterial wonder enzyme that allows scientists to precisely edit DNA, no news is too small to stir up some drama. On Tuesday morning, doctors from Columbia, Stanford, and the University of Iowa published a one-page letter to the editor of Nature Methods, an obscure but high-profile journal, describing something downright peculiar. About a year ago, they used Crispr to edit out a blindness-causing genetic defect in mice, curing two of their cohort. Later, they decided to go back and sequence their genomes, just to see what else Crispr did while it was in there.

A lot, it turned out. With their method, the researchers observed close to 2,000 unintended mutations throughout each mouse's genome, a rate more than 10 times higher than anyone had previously reported. If that holds up, Crispr-based therapies are in for some serious trouble. No one wants to go in for a vision-restoring treatment, only to wind up with cancer because of it.

The ensuing headlines were gleefully apocalyptic: "Crispr May Not Be Nearly as Precise as We Thought," "Crack in Crispr Facade after Unanticipated In Vivo Mutations Arise," and my personal favorite, "Small Study Finds Fatal Flaw in Gene Editing Tool Crispr." And then the biotech stocks went into a tailspin. The big three Crispr-based tech companies got hit the hardest. By the close of trading Tuesday, Editas Medicine was down nearly 12 percent, Crispr Therapeutics had fallen more than 5 percent, and Intellia Therapeutics had plunged just over 14 percent.

This was far from just a blip in the nerdy news cycle. A reaction to a single scientific publication on this scale raises important questions about science's incentive structure, its processes for publicly evaluating evidence, and what happens when those butt up against the prevailing philosophies of other professions, namely medicine.

A decade ago, most of the conversations about this letter would have happened in laboratory hallways. But this week, geneticists, microbiologists, and molecular bioengineers took to Twitter to digest the paper in public. While some experts decried the paper as unnewsworthy (everyone's known about Crispr off-target mutations forever!), the majority of threads ticked off the experiment's flaws: Tiny sample size! Insufficient controls! Weird Crispr delivery! Out-of-date, inefficient version of Crispr! The list goes on. Many doubted whether it had been peer-reviewed. (It had.) The hashtag #fakenews even made a few appearances.

To be sure, the results do not match up well with what's already in the literature on this subject. And, as the paper itself says, "The unpredictable generation of these variants is of concern." Which is to say, the authors have no idea why or how these mutations are happening. Derek Lowe, a longtime pharmaceutical industry researcher who writes a blog on the subject for Science, had enough doubt in the results that he bought up some Editas and Crispr Therapeutics stock while it was down.

But most scientists, while skeptical of the results, were more disappointed in the way the paper was blown out of proportion. "It's critically important to look closely at genomes being edited with the Crispr system, ideally with a method sensitive enough to detect even rare off-target events," says Stephen Floor, a biophysicist who worked in Crispr creator Jennifer Doudna's lab at UC Berkeley before beginning his own gene editing cancer research at UCSF. Saying Crispr is 100 percent accurate or grossly inaccurate isn't helpful. What scientists need to understand is which sites are being cut, what rules govern which sites get cut, and how to cut only at the sites you want. "It will be interesting to watch subsequent validation that gets to the bottom of why this report found such a surprisingly high rate of mutation," he says.

The key word there, if you didn't catch it, is validation. It's pretty much the foundational tenet of science. You have an idea, you test it, you test it again, you eliminate confounding factors as best you can, and then you validate your results. All the critiques of the Nature Methods paper assumed the authors were operating with that same premise.

But in this case, the authors weren't scientists: They were doctors. And in medicine, there's a different guiding principle that places a premium on sharing significant results at face value.

The history of the case study is long and celebrated in medicine. The first recorded report of what would one day be known as HIV/AIDS was published by the CDC as five strange cases of pneumonia in gay men in Los Angeles. Vinit Majahan, an ophthalmologist at Stanford and co-author of Tuesday's Crispr paper, says it was in that spirit that he and his collaborators submitted their results to the journal. "I don't have any money in Crispr, I only have patients," he says. "The culture and pressures of science right now push people to not share results that aren't a splashy cure. But in medicine you can't do that. If you make an observation that's important enough to share with your community, you're obligated to do that right away."

Since Majahan's team is working on turning its previous work into a human treatment, they saw it as irresponsible to take their results, small as they were, and sweep them under the rug. Crispr is most often described as molecular scissors, but doctors like Majahan tend to think of it more like a drug. And the more successes Crispr has, like curing mouse blindness, the more doctors start asking the next logical questions about things like dosing and formulations and side effects. How long can you have the enzyme floating around your cells before it cuts somewhere it shouldn't? What's the right enzyme for the job?

Matthew Taliaferro, who studies gene expression and gene editing at MIT, thinks the paper will get more scientists thinking about those kinds of questions. "Crispr definitely has off-targets. But a lot of people use it assuming no other mutations get introduced during the process," he says. "So getting people to talk about the need for controls is a good outcome of this whole thing." And while he was surprised by the lack of some straightforward controls, Taliaferro is aware that his initial reactions were colored by some of the Twitter threads he'd already absorbed before tracking down the paper himself. "I think the data is perfectly fine," he says. "It's just the interpretation of it that to me seems odd." Namely, that every Crispr application is deeply flawed.

Which was never Majahan's intention in the first place. "We didn't write the headlines," he says. "We don't think Crispr is bad, we think it's great." But he didn't get the opportunity to tell people that, because for one thing, he's not on Twitter. When asked how he was responding to the criticisms from the scientific community, he laughed and said, "Can you read some to me? I've heard there's some nasty stuff out there."

The amplifications (and denigrations) of those interpretations around Science Twitter may not have been as knee-jerk as all the Crispr Is Terrible and Broken Forever headlines. But still, they were an overreaction, because after all, this was just a single paper. No one should presume a standalone study can predict the future of an entire technique. At most, it indicates that Crispr is entering its inevitable adolescence, when shiny silver-bullet technologies get banged up and battle-worn by new data. That doesn't mean it isn't the real deal. Just that it should be looked at real hard every step of the way.
