Scientists have been modeling infectious disease outbreaks since at least the early 1900s, when Nobel Prize winner Ronald Ross used mosquito reproduction rates and parasite incubation times to predict the spread of malaria. Over the past few decades, the UK and several other European countries have made forecasting a routine part of their infectious disease control programs. So why has forecasting remained an afterthought at best in the United States? First of all, the quality of any model and the forecast it produces depends heavily on the quality of the data it contains, and in the United States, good data on infectious disease outbreaks are hard to come by: poorly collected in the first place; not easily shared between different facilities such as testing centers, hospitals and health departments; and difficult for academic modelers to access or interpret. For modeling, understanding how the data were generated and what the strengths and weaknesses of each data set are is essential, says Caitlin Rivers, epidemiologist and deputy director of the CFA. Even simple metrics like test positivity rates or hospital stays can be fraught with ambiguity. The fuzzier these numbers are, and the less modelers understand that fuzziness, the weaker their models will be.
Another fundamental problem is that the scientists who build models and the officials who use those models to make decisions are often at odds. Health officials, protective of their data, may hesitate to share it with scientists. And scientists, who typically work in academic centers rather than government offices, often fail to consider the realities health officials face in their work. Misaligned incentives also keep the two from working together effectively: science tends to reward research advances, while public health officials need practical solutions to real problems, and they need to implement those solutions at scale. “There is a gap between what academics need for success, i.e. to publish, and what it takes to have a real impact, namely building systems and structures,” says Rosenfeld.
These shortcomings have repeatedly hampered responses to real-world outbreaks. During the 2009 H1N1 pandemic, for example, scientists struggled to communicate effectively with decision-makers about their work and, in many cases, could not access the data they needed to make useful predictions about the spread of the virus. They built many models anyway, but almost none of them influenced the response effort. Five years later, modelers faced similar hurdles during the Ebola outbreak in West Africa. They helped guide successful vaccine trials by pinpointing the times and places where cases were likely to surge. But they were never able to establish a coherent or permanent system for working with health authorities. “The existing network is very ad hoc,” says Rivers. “A lot of work is based on personal relationships. And the bridges you build during a particular crisis tend to fade once that crisis is resolved.”
Scientists and health officials have made many attempts to close these gaps. Over the past two decades they have launched several programs, collaborations, and initiatives, each designed to improve the science and practice of real-world outbreak modeling. How successful these efforts have been depends on whom you ask: one changed course after its founder retired, some ran out of funding, and others still exist but are too limited in scope to overcome the challenges ahead. Marc Lipsitch, infectious disease epidemiologist at Harvard and the CFA’s director of science, says all of them contributed to the current initiative: “These earlier efforts helped lay the groundwork for what we are doing now.”
At the start of the pandemic, for example, modelers drew on lessons learned from FluSight, an annual challenge in which scientists develop real-time flu forecasts that are then collected on the CDC’s website and compared, to build a Covid-focused system they called the Covid-19 Forecast Hub. In early April 2020, this new hub began posting weekly projections on the CDC’s website that would eventually include deaths, case numbers, and hospital admissions at both state and national levels. “This was the first time modeling on such a large scale had been formally included in the agency’s response,” says George, the CFA’s director of operations. “It was a huge thing. Instead of an informal network of individuals, there were somewhere in the range of 30 to 50 different modeling groups who consistently and systematically helped with Covid.”
But as hard-won and as modest as those forecasts were (scientists ultimately decided that any prediction beyond two weeks was too uncertain to be useful), they still could not meet the demands of the moment. As the coronavirus epidemic turned into a pandemic, scientists were flooded with calls of every kind. School and health officials, mayors and governors, business leaders and event organizers all wanted to know how long the pandemic would last, how it would unfold in their respective communities, and what steps they should take to contain it. “People just freaked out, scoured the internet and called every name they could find,” Rosenfeld told me. Not all of those questions could be answered: there was little data, and the virus was new. There was only so much that could be modeled with confidence. But when the modelers pushed back against those demands, others stepped into the void.