A widely followed model for projecting Covid-19 deaths in the U.S. is producing results that have been bouncing up and down like an unpredictable fever, and now epidemiologists are criticizing it as flawed and misleading for both the public and policy makers. In particular, they warn against relying on it as the basis for government decision-making, including on “re-opening America.”
“It’s not a model that most of us in the infectious disease epidemiology field think is well suited” to projecting Covid-19 deaths, epidemiologist Marc Lipsitch of the Harvard T.H. Chan School of Public Health told reporters this week, referring to projections by the Institute for Health Metrics and Evaluation at the University of Washington.
Other experts, including some colleagues of the model-makers, are even harsher. “That the IHME model keeps changing is evidence of its lack of reliability as a predictive tool,” said epidemiologist Ruth Etzioni of the Fred Hutchinson Cancer Center, who has served on a search committee for IHME. “That it is being used for policy decisions and its results interpreted wrongly is a travesty unfolding before our eyes.”
The IHME projections were used by the Trump administration in developing national guidelines to mitigate the outbreak. Now, they are reportedly influencing White House thinking on how and when to “re-open” the country, for which President Trump announced a blueprint on Thursday.
The chief reason the IHME projections worry some experts, Etzioni said, is that “the fact that they overshot will be used to suggest that the government response prevented an even greater catastrophe, when in fact the predictions were shaky in the first place.” IHME initially projected 38,000 to 162,000 U.S. deaths. The White House combined those estimates with others to warn of 100,000 to 240,000 potential deaths.
That could produce misplaced confidence in the effectiveness of the social distancing policies, which in turn could produce complacency about what might be needed to keep the epidemic from blowing up again.
Believing, for instance, that measures well short of what China imposed in and around Wuhan prevented a four-fold higher death toll could be disastrous.
The most notable bounces in the IHME projections have been for the eventual total of U.S. deaths by early August, which is when many epidemiologists believe the outbreak will be tailing off. (Many expect daily deaths in the U.S. to fall to 10 or fewer by early June, from 2,000 or so in April.) Death projections for individual states have also fluctuated significantly.
The IHME website explains that, “As data continue to come in, our estimates may change. Specifically, new death data … have changed our projections.”
Its model differs from those used by almost all other epidemiologists.
There are two tried-and-true ways to model an epidemic. The most established, dating back a century, calculates how many people are susceptible to a virus (in the case of the new coronavirus, everyone), how many become exposed, how many of those become infected, and how many recover and therefore have immunity (at least for a while). Such “SEIR” models then use what researchers know about a virus’s behavior, such as how easily it spreads and how long it takes for symptoms of infection to appear, to calculate how long it takes for people to move from susceptible to infected to recovered (or dead).
“The fundamental concept of infectious disease epidemiology is that infections spread when there are two things: infected people and susceptible people,” Lipsitch said.
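The bookkeeping behind an SEIR model can be sketched in a few lines of code. This is a toy illustration with made-up parameter values, not any research group's actual model:

```python
# A toy SEIR model with a simple Euler integration step. The parameter
# values used below (beta, sigma, gamma) are illustrative, not Covid-19 estimates.

def seir(population, beta, sigma, gamma, days, initial_exposed=1, dt=0.1):
    """Return one (S, E, I, R) tuple per day for a basic SEIR model."""
    s, e, i, r = population - initial_exposed, initial_exposed, 0.0, 0.0
    history = []
    steps_per_day = int(round(1 / dt))
    for _ in range(days):
        history.append((s, e, i, r))
        for _ in range(steps_per_day):
            new_exposed = beta * s * i / population * dt  # S -> E: infection via contact
            new_infectious = sigma * e * dt               # E -> I: incubation ends
            new_removed = gamma * i * dt                  # I -> R: recovery or death
            s -= new_exposed
            e += new_exposed - new_infectious
            i += new_infectious - new_removed
            r += new_removed
    return history

curve = seir(population=1_000_000, beta=0.5, sigma=1 / 5, gamma=1 / 10, days=365)
peak_day = max(range(len(curve)), key=lambda d: curve[d][2])
```

The key property, as Lipsitch notes, is that new infections depend on the product of infected and susceptible people, so the epidemic slows on its own as susceptibles are depleted.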
Newer, “agent-based models” are like the video game SimCity, but with a rampaging pathogen: using computing power unimagined even a decade ago, they simulate the interactions of millions of individuals as they work, play, travel, and otherwise go about their lives. Both of these approaches have often nailed projections of, for instance, U.S. cases of seasonal flu.
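In miniature, the agent-based idea looks like this sketch, which tracks individuals rather than compartments. All parameters are made up for illustration, and real agent-based models simulate far richer contact patterns:

```python
import random

# A toy agent-based outbreak: each infectious agent contacts a few random
# agents per day, and each contact with a susceptible agent transmits the
# infection with some probability. Parameters are illustrative only.

SUSCEPTIBLE, INFECTED, RECOVERED = 0, 1, 2

def simulate(n_agents=2000, contacts_per_day=8, p_transmit=0.05,
             days_infectious=7, days=120, initial_infected=10, seed=1):
    rng = random.Random(seed)
    state = [SUSCEPTIBLE] * n_agents
    days_sick = [0] * n_agents
    for a in range(initial_infected):
        state[a] = INFECTED
    for _ in range(days):
        infectious = [a for a in range(n_agents) if state[a] == INFECTED]
        for a in infectious:
            for _ in range(contacts_per_day):
                b = rng.randrange(n_agents)
                if state[b] == SUSCEPTIBLE and rng.random() < p_transmit:
                    state[b] = INFECTED
            days_sick[a] += 1
            if days_sick[a] >= days_infectious:
                state[a] = RECOVERED
    return state.count(SUSCEPTIBLE), state.count(INFECTED), state.count(RECOVERED)

susceptible, infected, recovered = simulate()
```

Because each agent's contacts are simulated individually, interventions like school closures can be modeled by changing who meets whom, something a curve-fitting approach cannot express.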
IHME uses neither a SEIR nor an agent-based approach. It doesn’t even try to model the transmission of disease, or the incubation period, or other features of Covid-19, as SEIR and agent-based models at Imperial College London and others do. It doesn’t try to account for how many infected people interact with how many others, how many additional cases each earlier case causes, or other facts of disease transmission that have been the foundation of epidemiology models for decades.
Instead, IHME starts with data from cities where Covid-19 struck before it hit the U.S., first Wuhan and now 19 cities in Italy and Spain. It then produces a graph showing the number of deaths rising and falling as the epidemic exploded and then dissipated in those cities, resulting in a bell curve. Then (to oversimplify somewhat) it finds where U.S. data fits on that curve. The death curves in cities outside the U.S. are assumed to describe the U.S., too, with no attempt to judge whether countermeasures — lockdowns and other social-distancing strategies — in the U.S. are and will be as effective as elsewhere, especially Wuhan.
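A crude sketch of the curve-fitting idea looks like this. It is not IHME's actual CurveFit software (and IHME fits cumulative deaths, while this fits daily deaths), but it shows the basic move: assume deaths trace a bell curve and search for the peak day and width that best match the counts observed so far:

```python
import math

# Illustrative only: grid-search a Gaussian-shaped daily-death curve
# against observed counts, minimizing squared error.

def bell(day, peak_day, width):
    return math.exp(-((day - peak_day) / width) ** 2 / 2)

def fit_bell_curve(observed):
    """observed[d] = deaths on day d; returns (peak_day, width, height)."""
    best, best_err = None, float("inf")
    for peak_day in range(len(observed) + 60):
        for width in range(3, 30):
            shape = [bell(d, peak_day, width) for d in range(len(observed))]
            # For a fixed shape, the least-squares height has a closed form.
            denom = sum(x * x for x in shape) or 1e-9
            height = sum(o * x for o, x in zip(observed, shape)) / denom
            err = sum((o - height * x) ** 2 for o, x in zip(observed, shape))
            if err < best_err:
                best, best_err = (peak_day, width, height), err
    return best

# Hypothetical early, still-rising data: the fit extrapolates a peak ahead.
rising = [2, 3, 5, 8, 13, 21, 33, 50, 74, 105]
peak_day, width, height = fit_bell_curve(rising)
```

Note what is missing: nothing in the fit knows about transmission, immunity, or countermeasures. When new data arrive, the whole fitted curve can shift, which is why the projections bounce.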
“We are becoming more confident that the shape of the curve [is accurately] informed by locations outside the U.S.,” said Theo Vos, professor of health metrics science at IHME.
According to a critique by researchers at the London School of Hygiene & Tropical Medicine and Imperial College London, published this week in Annals of Internal Medicine, the IHME projections are based “on a statistical model with no epidemiologic basis.”
“Statistical model” refers to putting U.S. data onto the graph of other countries’ Covid-19 deaths over time under the assumption that the U.S. epidemic will mimic that in those countries. But countries’ countermeasures differ significantly. As the epidemic curve in the U.S. changes due to countermeasures that were weaker or later than, say, China’s, the IHME modelers adjust the curve to match the new reality.
Each run of the model, updated with new U.S. data, produces estimates of future and total deaths, ICU use, and other outcomes, with uncertainty bounds. That is, IHME says the actual number of deaths and other outcomes has a 95% likelihood of falling between a stated upper limit and lower limit. In late March, for example, IHME projected that there would be a total of 81,114 Covid-19 deaths in the U.S. over the next four months, but that number came with a caveat: The actual number could be as few as 38,242 and as many as 162,106.
“This appearance of certainty is seductive when the world is desperate to know what lies ahead,” Britta Jewell of Imperial College and her colleagues wrote in their Annals paper. But the IHME model “rests on the likely incorrect assumption that effects of social distancing policies are the same everywhere.” Because U.S. policies are looser than those elsewhere, largely due to inconsistency between states, U.S. deaths could remain at higher levels longer than they did in China, in particular.
While other epidemiologists disagree on whether IHME’s death projections are too high or too low, there is consensus that their volatility has confused policy makers and the public:
— Last week IHME projected that Covid-19 deaths in the U.S. would total about 60,000 by August 4; this week that was revised to 68,000, with 95% certainty that the actual toll would be between 30,188 and 175,965.
— On March 27, it projected that New York would see 10,243 deaths (and that the total had a 95% chance of falling between 5,167 and 26,444) by early August. Three days later, the New York projection was 15,546, and on April 3 it was 16,262, Jewell and her colleagues pointed out in another analysis, published in JAMA on Thursday.
— On April 8, IHME projected 5,625 deaths for Massachusetts by August; on April 13, it was 8,219.
Such changes, Vos said, “are well within the uncertainty bounds we predicted.” In addition to reflecting more recent data, the projections are now based on a moving average of daily deaths rather than one-day numbers.
Although IHME says its approach has always been to revise projections as new data become available, critics say that underlines the model’s central flaw: it must constantly re-calibrate rather than, as standard epidemiology models do, use basic outbreak parameters such as a disease’s infectiousness to project the course of an epidemic. That is what lets policy makers use a model as a lodestar, not a strobe light that flares and dims repeatedly.
“Since they started with very little U.S. data, when they add some, their projections move a lot,” said the Hutch’s Etzioni.
Even the predictions of daily deaths “have been highly inaccurate,” said statistician Sally Cripps of the University of Sydney, who led a team that examined IHME’s up-and-down projections. “It performs poorly even when it predicts the number of next-day deaths: The true number of next-day deaths has been outside the 95% intervals 70% of the time.” If the 95% calculation correctly reflects a model’s uncertainty, then textbook statistics say the true numbers should fall outside that range only about 5% of the time.
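The calibration check the Sydney team describes amounts to counting how often the realized value lands inside its stated interval. A minimal sketch, with made-up numbers rather than their data:

```python
# Illustrative coverage check: well-calibrated 95% intervals should
# miss the true value only about 5% of the time.

def coverage(intervals, actuals):
    """Fraction of actual values that fall inside their [low, high] interval."""
    hits = sum(low <= actual <= high
               for (low, high), actual in zip(intervals, actuals))
    return hits / len(actuals)

# Hypothetical next-day death intervals (lower, upper) versus actual counts.
predicted = [(50, 80), (60, 90), (70, 100), (80, 110), (90, 120)]
observed = [85, 70, 130, 60, 95]

miss_rate = 1 - coverage(predicted, observed)
```

A miss rate far above 5%, as Cripps's team reports for IHME's next-day predictions, indicates the stated intervals understate the model's real uncertainty.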
Lipsitch and some other experts worry that by failing to include disease transmission, IHME’s projections of deaths could be too low. But more and more models are projecting a less dire future. Three weeks ago a SEIR model from researchers at the Massachusetts Institute of Technology projected that total U.S. cases will plateau later this week, reaching 600,000 and then adding ever-fewer cases each day. So far it’s pretty much on the money, with the U.S. case count at 650,000 on Thursday and new daily cases remaining mostly flat.
A different, data-driven model from researchers at the University of Washington predicts “about 1 million cases in the U.S. by the end of the epidemic, around the first week in June, with new cases peaking in mid-April,” said UW applied mathematician Ka-Kit Tung, who led the work. “By the first week of June, we project that the number of new cases will be close to zero if current social distancing policies are maintained.” That model predicted two weeks ago that the number of new daily cases would peak around now, as seems to be the case.
Helen Branswell contributed reporting.
This story has been updated to include the earliest IHME projections.
An imperfect model is better than nothing, especially for something capable of exponential growth. If a model makes a prediction and you modify policies in response, it should be obvious to all that the initial prediction is no longer valid; the model’s parameters will need to be adjusted and a new prediction made.
John M. Drake, professor at the University of Georgia Odum School of Ecology and director of the Center for the Ecology of Infectious Diseases, has published an informative piece on the pitfalls and value of models at fivethirtyeight.com:
“Why One Expert Is Still Making COVID-19 Models, Despite The Uncertainty.”