Fight over COVID’s origins renews debate on risks of lab work
By Carl Zimmer and James Gorman
At a Senate hearing on efforts to combat COVID-19 last month, Sen. Rand Paul of Kentucky asked Dr. Anthony Fauci whether the National Institutes of Health had funded “gain-of-function” research on coronaviruses in China.
“Gain-of-function research, as you know, is juicing up naturally occurring animal viruses to infect humans,” the senator said.
Fauci, the nation’s top infectious disease expert, flatly rejected the claim: “Sen. Paul, with all due respect, you are entirely and completely incorrect, that the NIH has not ever and does not now fund gain-of-function research in the Wuhan Institute.”
This exchange, and the bit of scientific jargon at the heart of it, have been seized on in recent weeks, usually by people suggesting that the coronavirus was engineered rather than having jumped from animals to humans, the explanation favoured by most experts on coronaviruses. The uproar has also drawn attention back to a decade-long debate among scientists over whether certain gain-of-function research is too risky to allow.
Spurred by some contested bird flu experiments in 2012, the US government adjusted its policies for oversight of certain types of pathogen studies. But some critics in the scientific community say that the policy is overly restrictive and that its enforcement has been far from transparent.
The stakes of the debate could not be higher. Too little research on emerging viruses will leave us unprepared for future pandemics. But too little attention to the safety risks will increase the chances that an experimental pathogen may escape a lab through an accident and cause an outbreak of its own.
Sorting out the balance of risks and benefits of the research has proved over the years to be immensely challenging. And now, the intensity of the politics and rhetoric over the lab leak theory threatens to push detailed science policy discussions to the sidelines.
“It’s just going to make it harder to get back to a serious debate,” said Marc Lipsitch, an epidemiologist at the Harvard T.H. Chan School of Public Health who has urged the government to be more transparent about its support of gain-of-function research.
In the 1970s, researchers were learning for the first time how to move genes from one organism to another to make bacteria produce human insulin. From the start, critics worried that such experiments could accidentally create deadly pathogens that might then escape from labs.
Tinkering with genes is not the only way that a scientist can give an organism new abilities. Researchers can also stage evolutionary experiments, in which pathogens are grown in the cells of an unfamiliar host species. At first, they do not replicate well. But new mutations can help them adapt, gradually improving their performance.
A decade ago, researchers used serial passage, as this procedure is known, to learn how new strains of influenza evolve. Flu strains start off in the guts of birds, and sometimes manage to mutate into a form that can spread among people.
Two teams of researchers, one at the University of Wisconsin-Madison and the other at Erasmus Medical Centre in Rotterdam, the Netherlands, designed experiments to identify which genetic mutations were essential for a successful jump from birds to people. They injected bird flu viruses into the noses of ferrets, waited for the viruses to replicate, and then transferred the new viruses to new ferrets. Soon the viruses evolved to become better at replicating in the ferrets.
When news of the experiments broke in late 2011, a controversy exploded. Some critics said the research was reckless and should not be published, for fear that other researchers would copy the work and accidentally release a new pandemic strain of flu.
A year later, the US Department of Health and Human Services held a meeting to consider what it called “gain-of-function research.” The name took hold, but scientific experts have grown increasingly frustrated with it ever since.
“It’s a horribly imprecise term,” said Gigi Gronvall, a senior scholar at the Johns Hopkins Centre for Health Security.
Many gain-of-function experiments could never pose an existential threat; instead, they have provided huge benefits to humanity. In 1937, researchers found that when they passed the yellow fever virus through chicken cells, it lost the ability to cause disease in humans — a discovery that led to a vaccine for yellow fever. Likewise, herpes viruses have been engineered to gain a new function of their own: attacking cancer cells. They are now an approved treatment for melanoma.
But the bird flu experiments raised concerns that certain gain-of-function studies could pose a tiny but real risk of dangerous outbreaks. In 2014, US officials announced that 18 such studies would be paused. The experiments involved not just influenza viruses but also the coronaviruses that caused SARS and MERS.
Three years later, the government rolled out a new policy, known as the “P3CO framework”, for research on “enhanced potential pandemic pathogens”. The rule requires the agencies under the HHS umbrella, like the NIH and its institutes, to carry out a special review of grant applications for any research on “a credible source of a potential future human pandemic”. In 2019, after conducting such a review, HHS gave the green light for two influenza projects to restart, triggering more debate about whether its policy was thorough enough.
When questioning Fauci last month, Paul brought up one of the most cited examples of gain-of-function research, a study of coronaviruses led by Ralph Baric at the University of North Carolina and published in Nature Medicine in 2015. Working with data provided by Shi Zhengli, a leading coronavirus researcher at the Wuhan Institute of Virology, Baric and his colleagues built a new coronavirus from an existing one. All of the work was done in the North Carolina lab, and neither Shi nor members of her lab participated.
The so-called chimeric virus that resulted was not more pathogenic than the parental virus, Baric said. “This work was approved by the NIH, was peer-reviewed, P3CO reviewed and approved,” Baric wrote in an email last month. The work also “involved a very different strain of beta coronavirus than the one that causes COVID-19,” he said, and it was considered low risk because of the particular strain in question.
In the paper, he and his colleagues cautioned others about similar research. “The potential to prepare for and mitigate future outbreaks must be weighed against the risk of creating more dangerous pathogens,” they wrote.
The P3CO policy has a significant shortcoming, according to David Relman, a member of the US National Science Advisory Board for Biosecurity and a microbiologist at Stanford University: It applies only to the grant process in agencies that are part of HHS. Grants from the National Science Foundation, the Pentagon or other agencies could include dangerous research and also need oversight, he said. Then there is the thornier question of private research.
Relman has also criticized the government’s process for screening and approving gain-of-function research. At a January 2020 meeting of the advisory board, he objected to the lack of information released about how two research proposals were approved.
The “star chamber” nature of the process was not its biggest problem, however, said Richard Ebright, a molecular biologist at Rutgers, The State University of New Jersey, who has also been one of the most vocal proponents of the lab leak theory and a long-time advocate of stricter controls on research with dangerous pathogens. An even bigger issue, he said, was that gain-of-function research was simply not being screened in accordance with the policy established by HHS, which includes the National Institute of Allergy and Infectious Diseases, run by Fauci.
The ideal solution, he said, would be the creation of an independent body to provide oversight of research on dangerous pathogens, similar to what the Nuclear Regulatory Commission does for studies of radioactive materials.
In the United States, “there are no bio-safety rules or regulations that have the force of law,” he said. “And this is in contrast to every other aspect of biomedical research.” There are enforceable rules, for example, for experiments with human subjects, vertebrate animals, radioactive materials, and lasers, but none for research with disease-causing organisms.
-New York Times