Since the first Covid-19 cases emerged in the U.S. in early 2020, attitudes toward coronavirus testing have evolved as we’ve learned more about the novel coronavirus and how it spreads.
Initially, testing was intended to be diagnostic and confirm infections in people with symptoms such as cough, fever, shortness of breath, and fatigue. At that time, there was no understanding of the risk of silent transmission within the community via infected people who had not yet developed symptoms or who never developed them.
Given that diagnostic focus, coronavirus testing capacity emphasized the use of highly sensitive polymerase chain reaction (PCR) tests that rely on amplifying DNA copies of the RNA in SARS-CoV-2, the virus that causes Covid-19. PCR has remained the “gold standard” for testing in the U.S. and around the world.
What we describe here is a combination of the screening and surveillance approaches to coronavirus testing outlined in a new report by Duke University’s Margolis Center for Health Policy and The Rockefeller Foundation. This approach is appropriate when there is no particular reason — like symptoms or possible exposure to someone with Covid-19 — to suspect that the individuals being tested are likely to be infected.
PCR testing has existed since the 1980s and is widely used. Yet there are practical limitations to its use: PCR requires at least an hour to run on specialized equipment that relies on specific reagents. Shortages of these reagents during the summer led to significant delays in returning test results, reducing their utility for guiding actionable decisions both at the individual level, such as self-isolating to prevent infecting others, and at the population level, such as using real-time data to steer local government decisions about travel quarantine policies.
Each PCR test typically costs between $50 and $100, which adds up when scaling to hundreds of millions of tests. In light of these issues, Verily, the company that we work for, filed for and recently received an emergency use authorization (EUA) from the U.S. Food and Drug Administration (FDA) for pooled PCR testing, which allows testing double or quadruple the number of samples at once. We are deploying these tests as part of Verily’s Project Baseline Covid-19 testing and Healthy at Work programs.
While PCR testing continues to be important, what the U.S. needs now is to deploy mass screening tests that provide real-time information about the spread of Covid-19, much as we deploy widespread sensors on ocean buoys and space satellites to enable weather forecasting that detects hurricanes days before they hit.
The Covid-19 equivalent is a large second tsunami of cases that could engulf the country as the weather gets colder and activities move indoors. That surge would also coincide with flu season, making it even harder to follow the symptom trail in uncovering Covid-19 outbreaks.
Fortunately, the national and global investment in laboratory testing is paying off. At the end of August, Abbott Laboratories announced the BinaxNOW Covid-19 Ag Card, a $5 nasal-swab test for Covid-19 that returns results in 15 minutes. This test detects antigens, substances that induce an immune response in the body.
Other companies have emergency use authorizations from the FDA for similar tests, and the National Institutes of Health continues to invest hundreds of millions of dollars to support development of new coronavirus testing technologies.
While not as sensitive as PCR, these antigen tests can be tremendously helpful in surveillance of new outbreaks. Their low cost and ease of use means they can be deployed frequently and for large numbers of people outside the clinic, such as at nursing homes, colleges, and workplaces, to quickly deliver results that prevent nascent outbreaks from spreading.
To be sure, no test is perfect. Each test comes with performance metrics that describe how well it detects disease, and the science of understanding how these metrics differ guides each test’s use in tracking disease.

One key metric, specificity, measures how often people who are not infected test negative. Because the test is imperfect, some uninfected people will test positive. Specificity is very important when deploying large-scale screening, since these “false positives” are the main source of error that will typically be confronted.
A 1% drop from a perfect specificity of 100% means that 1% of uninfected people will test positive. When the infection rate is low, this means that the overwhelming majority of positive tests will be false positives, making it less obvious what steps should be taken by individuals and local health authorities based on the test results. In fact, false positives may make it seem as if using such tests is simply not helpful.
That’s not so. Carefully understanding specificity will allow for robust implementation of mass coronavirus testing programs. The metric that experts look at next to quantify how strongly to believe the result of a positive test is called the positive predictive value. This is an example of Bayes’ theorem in action, as Jeffrey Schnipper and Paul Sax recently described in STAT.
Let’s take the BinaxNOW test as an example. It has a reported specificity of 98.5%. While this sounds remarkably high, in practice it means that 1.5% of uninfected people will test positive. This needs to be considered in conjunction with the rate of infection in the population being tested.
In areas of the country where the rate of active infection (prevalence) is low, say 1 in 1,000 people or lower (a prevalence of 0.1%), the positive predictive value will be just 6.1%, and the majority (93.9%) of positive test results will actually be false positives.
Abbott test at 10-fold-increasing prevalence of Covid-19:

| Metric | Definition | 0.1% prevalence | 1.0% prevalence | 10.0% prevalence |
|---|---|---|---|---|
| Prevalence of the disease (infection rate) | Probability of someone being infected with the disease | 0.1% | 1.0% | 10.0% |
| Sensitivity of the test for the disease | Probability of someone infected with the disease testing positive for it | 97.1% | 97.1% | 97.1% |
| Specificity of the test for the disease | Probability of someone not infected with the disease testing negative for it | 98.5% | 98.5% | 98.5% |
| Positive predictive value (PPV) of the test for the disease | Probability of someone who tests positive actually being infected with the disease | 6.1% | 39.5% | 87.8% |
| Negative predictive value (NPV) of the test for the disease | Probability of someone who tests negative actually being not infected with the disease | 100.0% | 100.0% | 99.7% |
| Expected positivity rate | Average fraction of tests that will be positive if the population is sampled at random | 1.60% | 2.46% | 11.06% |
While surprising at first glance, a preponderance of false positives can be viewed as a good thing in the aggregate, since it means there aren’t more positives than would be expected simply from the imperfect specificity of the test. That is, if the expected positivity rate for mass coronavirus testing deployed uniformly in the population is low (at or below 1.6% in this example), then the majority of positive tests are false positives. This helps scientists and health professionals confirm that the population is not harboring large outbreaks, enabling policymakers to make informed decisions about keeping local economies and social activities open.
On the other hand, if the prevalence of Covid-19 among those being tested is 1% or higher, the situation starts to change. More than half of the positive test results (60.5%) still arise from noninfected individuals, for a positive predictive value of 39.5%. It would then make sense to refer those individuals for PCR testing to confirm or refute their infection status. Those with positive PCR tests would need to isolate, and mitigating actions could be taken to stop the spread of the outbreak, such as shutting down nonessential indoor facilities like bars.
When there is little reason to suspect that the individuals being tested are likely to be infected with SARS-CoV-2, the rate at which people who are actually infected test as positive (the sensitivity of the test) does not have a large impact on how individuals and local health authorities should interpret test results. Specificity is the key metric to consider for Covid-19 testing being used in such circumstances.
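This point can be illustrated numerically. Holding prevalence at 0.1% and specificity at 98.5% (the figures used above) while varying sensitivity over a wide range, for a hypothetical screening test, barely moves the quantities that drive interpretation:

```python
# At low prevalence, sensitivity has little effect on PPV or NPV.
prevalence, specificity = 0.001, 0.985

for sensitivity in (0.80, 0.90, 0.971, 0.99):
    ppv = (sensitivity * prevalence) / (
        sensitivity * prevalence + (1 - specificity) * (1 - prevalence))
    npv = (specificity * (1 - prevalence)) / (
        specificity * (1 - prevalence) + (1 - sensitivity) * prevalence)
    print(f"sensitivity {sensitivity:.1%}: PPV {ppv:.1%}, NPV {npv:.1%}")
```

Sweeping sensitivity from 80% to 99% only shifts the PPV from about 5.1% to 6.2%, while the NPV stays essentially 100% throughout; it is the specificity that dominates.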
The U.S. needs to add fast, low-cost coronavirus testing to its Covid-19 toolbox, even if it isn’t as accurate as the PCR gold standard. This will provide essential information for staying up and running this fall and into the winter as we await the development and deployment of safe and effective Covid-19 vaccines. Geographically localized real-time metrics based on fast Covid-19 testing will make possible laser-focused identification of cases and their subsequent isolation. Coupled with closing specific types of businesses or activities, this can stop outbreaks early and often. Complete shutdowns would then remain only as measures of last resort.
Menachem Fromer is a data scientist, R&D lead for Covid-19 population health and mental health at Verily Life Sciences, and associate professor of genetics and genomic sciences and psychiatry at Icahn School of Medicine at Mount Sinai in New York. Paul Varghese is a cardiologist and clinical lead for health informatics at Verily Life Sciences. Robert M. Califf is a cardiologist, head of clinical policy and strategy for Verily and Google Health and was formerly the vice chancellor for health data science for the Duke University School of Medicine and director of Duke Forge, Duke’s center for health data science. Califf served as deputy commissioner for medical products and tobacco in the U.S. Food and Drug Administration from 2015 to 2016 and was the commissioner of the FDA from 2016 to 2017.