Chapter 13
The Information Desert
In a crisis, we often don’t have time to wait for deep analysis. We’re forced to make the most effective decisions we can on the best information we have available at the moment. In a moment of public health crisis, that information is always going to turn on some combination of raw data and conjecture.
However, the CDC isn’t in the business of providing this kind of real-time, actionable, but often notional information. It’s counter to the agency’s culture. While the CDC does provide some near-real-time data streams like its flu surveillance program, its best reporting comes from longer-term investigations that are often wrapped into its Morbidity and Mortality Weekly Report, the CDC’s epidemiological digest that provides deep analysis of disease trends.1
After an especially devastating flu epidemic struck the US in the winter of 2018, it took the CDC a year to establish just how deadly it had been and to start drawing inferences about why it was so lethal.2 I was running the FDA during that 2017–18 flu season, and that winter I wanted to share our concerns with the public: to warn them about the unusual severity of the flu and about our agency’s belief that the vaccine might not have been protecting people as well as it had in past years. The FDA wanted to urge people to be cautious and take added precautions to avoid infection.
The CDC tried to stop my communications, appealing to the Office of the Secretary of Health and Human Services to intervene and block me from issuing my warning. The CDC was arguing that it, and not the FDA, should be commenting on the severity of the flu. They said it was the CDC’s job to discuss disease trends with the public, and they weren’t ready to draw any firm conclusions about that particular flu season. The CDC was still gathering and analyzing their data.
I released my warning anyway, except that instead of issuing it as a public health advisory commenting on the flu season, I fashioned it as a safety update on the vaccine. The FDA, not the CDC, regulated the flu vaccine. The CDC couldn’t stop me from commenting on an FDA-regulated product and the FDA’s belief that the vaccine might not be protecting patients from flu as well as intended.3
Because we didn’t know for certain the reason behind the rising death and disease from flu, and whether reduced effectiveness of the vaccine was definitely the culprit, the CDC didn’t want to say anything that was potentially wrong. But at the FDA, I wanted to warn the public based on what we knew, while explaining that we weren’t certain. The CDC wanted to wait and do more investigations. Later analysis would show that the vaccine was only about 25 percent effective against the predominant strain of H3N2 influenza A that was circulating that season.4
It isn’t a question of competency or public health commitment. CDC officials are deeply devoted to their public health mission. It’s just a question of institutional style, structure, and culture.
The CDC’s reporting is reflective and aims to provide definitive analysis. It isn’t geared to providing the sort of real-time information that’s often early, imprecise, and incomplete, but an essential currency in helping inform policymaking in a crisis. The CDC’s output is more akin to the journal articles issued by an academic department of epidemiology than a battlefield report issued by the Joint Special Operations Command. Providing timelier and more actionable reporting would require a reengineering of the CDC’s commitments, capabilities, and ethos.
During COVID, the general refrain, quite appropriately, was that our policy decisions should be guided by, and shouldn’t get ahead of, the science. However, this common refrain turned out to be a lot more complicated when applied to our response to COVID, and that complexity slowed action and left our decisions less well informed. What the CDC didn’t want policymakers to ignore or second-guess wasn’t the scientific data itself; it was the CDC’s interpretation of that data.
The problem was that the CDC’s analysis was never timely or complete. The CDC’s edicts, contained in careful journal papers published in the Morbidity and Mortality Weekly Report, were often thorough, data driven, well researched, and late. The agency issued recommendations on the risk of coronavirus transmission through touching contaminated surfaces (fomites), measures to improve safety in schools, the role of respiratory aerosols and droplets, and countless other aspects of COVID’s pathogenicity and spread that took many months to generate and were often published long after decisions based on these important parameters had to be made by patients, providers, and policymakers.5
The resulting information desert that plagued real-time decision making impacted more than policy; it was also felt in clinical practice. Even months into the COVID epidemic, there was no reliable information on doctors’ collective experience in treating COVID patients. We had not tracked and reported what treatments and interventions medical providers were relying on, or the outcomes of patients who received different forms of care. The best information linking different approaches to care with outcomes was coming from Italy and China. These countries had earlier experiences with the virus, but they also did a much better job than our CDC in tracking and collecting clinical experience, and sharing timely, bottom-line data with providers.
For clinical data, the CDC largely relied on information derived from death certificates, but reporting from these documents was usually delayed by one to two months. That’s why a lot of the early clinical data that informed doctors in the US and helped them determine which patients were most likely to have the worst outcomes was coming from other countries. Nations in Europe derived this information from their electronic health records in near real time. They thus had better information about patient experiences to guide clinical practice.
We also lacked timely, reliable epidemiological data derived from effective contact tracing and used to inform policymakers on the settings where spread was most likely to occur. These data could have helped us better target mitigation to the highest-risk venues. Studies show that people will generally follow public health directives for about two weeks, and then compliance will break down. Without better ways to focus our interventions on regions where there was the greatest spread, we eroded public trust and squandered the social and political capital needed to maintain consent.6
Bars were closed basically on a hunch that they were a significant source of spread. There was some contact tracing to support this policy, but the data were imperfect. Early in the epidemic, we left many schools and day care sites open for essential workers, but we didn’t systematically monitor whether those settings became sources of spread, or which interventions helped reduce the likelihood of outbreaks. So we missed a critical chance to collect data that could have informed future policymaking on the issue of opening schools. Lots of other gaps could have been plugged, risks reduced, and hardships avoided with timelier and more complete information.
Could the quarantine period after a COVID exposure have been shortened?
Almost a year after the pandemic began, the CDC shortened the recommended quarantine for those exposed to the virus from fourteen to ten days, and then seven days if a person tested negative for the virus seven days after exposure.7
Was six feet the right minimum distance?
The 2006 pandemic playbook had discussed the use of three feet of separation. The six-foot rule led to situations in which people congregating indoors, in poorly ventilated spaces that created easy conditions for spread, felt they could take their masks off so long as they stayed six feet apart, outside one another’s six-foot circle.
Was fifteen minutes of exposure the right measure to determine when someone who was in contact with an infected individual might have contracted the virus?
It seemed like an arbitrary judgment and led to manipulation of the agency’s advice. The CDC eventually changed its guidance from fifteen minutes of sustained exposure to fifteen minutes of cumulative exposure, but how could you measure your cumulative exposure to a person?8 It was almost as if the CDC was treating exposure to the virus like exposure to radiation and trying to measure a cumulative dose. It was reported that some establishments moved people around at the fourteenth minute to avoid passing the fifteen-minute regulatory threshold.9
As sociologist Dr. Zeynep Tufekci noted in The Atlantic, “None of this made any practical sense. What happened at minute 16? Was five feet okay? Faux precision isn’t more informative; it’s misleading.”10 The evidence behind these recommendations was always shaky, and the measures were hard to implement. The CDC had to start somewhere, and so some of the initial recommendations were based on imprecise evidence, but the agency should have been more transparent about just how weak some of the data were. People could have made more informed judgments about where to apply limited resources, focusing on the guidance that had the greatest chance of reducing risk or the strongest scientific foundation.
Good epidemiological surveillance, conducted by the CDC, could have informed more precise recommendations, and the CDC could have provided us with more information about the conditions that contributed the most to spread, improving our ability to keep people safe. Without reliable and actionable data, and good systems for collecting and reporting information, we lacked the infrastructure to build a new evidence base for a novel pathogen. So we largely worked from what we knew about flu, which in many important respects didn’t apply to the crisis we faced from COVID.
Some of the CDC documents were subject to revisions or delays by political officials at HHS and the White House, and the general refrain was that this interference made the CDC reluctant to advance other guidance or degraded the impact of the recommendations it issued. But there were plenty of matters the CDC opined on that flew well below the radar of its political interlopers, and on those the agency still failed to release relevant and timely information. Moreover, the CDC’s approach to these efforts didn’t change much once President Biden took over. Two of the CDC’s most senior career officials left the agency within the first six months of Biden’s term, in part, I’m told, over friction with the new administration.11 Frustration with the agency, it seemed, had bipartisan appeal. It’s perhaps convenient, but self-serving, to blame all of the CDC’s faults during the Trump term on political interference in its work. The CDC wasn’t just slow to develop this evidence; it also didn’t offer it in practical terms that made it actionable.
Still, there were White House and HHS political staff who wrongly believed that more information would confuse or alarm the public and drive people to take decisions that conflicted with the Trump administration’s reopening goals. So the CDC was, at times, intentionally stymied. In other cases, the White House and HHS lost confidence in the CDC and, not knowing how to reform the agency, they moved instead to suppress its work, isolate its leadership, and usurp some of its responsibilities.
However, what I also saw were political officials who misread the practical value of providing consumers with better information, and the benefit of leveraging the CDC to help gather and report it. Reliable information about risks would have helped advance the policy goals of reopening schools and businesses, because the absence of information created uncertainty, and uncertainty bred indecision. People chose to take no action at all if they didn’t have data that could help them correctly calculate, and lessen, the risks of the actions that they wanted to pursue.
Take the issue of opening schools. The White House wanted schools to be reopened in fall 2020. But political officials feared that more information about transmission in schools and the conditions that led to outbreaks could frighten additional schools into staying shut. So these same political officials stymied efforts at the CDC to put out more prescriptive guidance outlining the steps that schools could take to reduce the likelihood of outbreaks.
This political posture probably had the opposite of its intended effect, causing more schools to remain closed. The schools didn’t have enough information to guide safe decisions to reopen. At best, these political efforts were a misreading of the value that information could play in supporting action in the setting of uncertainty. Did masks lower the likelihood of spread in classrooms? Did distancing help? Was keeping students in distinct social pods effective? These were critical questions that needed to be answered. If we had data to guide these actions, more schools would have had a framework for how to both stay open and reduce the risk of outbreaks. Secretary of Education Betsy DeVos said it wasn’t the responsibility of her department to collect and report this information.12 That was probably true, although the education department could have led that effort. However, the obligation to collect and report these data certainly belonged to the CDC.
National pandemic strategies going back to 2006 included schools as part of national disease surveillance. There were requirements for the reporting of school-based data to local health agencies, which would then provide it to the states. The states, in turn, were to provide that information to the CDC. However, staff in the White House hesitated to systematically track this school-related data and share it with the public, and the CDC seemed to struggle with collecting this kind of bottom-line information anyway, even if the White House had encouraged it to do so.
During a fast-moving crisis, in the absence of good information, people tend to be more conservative, and less willing to try something perceived as risky. When we can discharge uncertainty and properly handicap a danger, we can help people embrace reasonable risks.
I learned at the FDA that people are often willing to confront risks that they can adequately measure for themselves, but balk when forced to embrace risks that seem open-ended, ambiguous, or hard to measure. We learned that timely and complete reporting on drug side effects was reassuring to patients. They needed to have confidence that, if there were risks associated with a drug, the FDA would unearth this information and promptly report it to patients.
Armed with good information, patients were able to make informed choices and assume risks that they felt were reasonable for their individual circumstances. To help support this informed patient decision making, the FDA made substantial investments in recent years to develop more information about the real-world use of medical products, especially about their safety, and to provide this information directly to consumers in a regular and timely way. This was a major focus of many efforts I undertook while serving as the agency’s commissioner. The same principles apply across public health challenges. In the setting of COVID, more data about how and where COVID spread occurred in schools would have provided more certainty to school administrators on how to lower the chance of outbreaks. In the absence of good information to inform these decisions, facing uncertainty, many cautious districts chose to close schools instead. By the end of March 2020, as the US epidemic was getting under way, 94 percent of American schools were closed, and the majority of them would remain shut for the duration of the year.13
Dr. Christopher Murray was the director of the Institute for Health Metrics and Evaluation and the architect of a model of COVID spread that was closely followed by the Trump White House. Writing in the New York Times, just as the fall surge was gaining explosive momentum, he said that federal agencies, including the CDC, had been telling him since March that the government was compiling bottom-line, county-level data on COVID cases, hospitalizations and deaths, the timing of social distancing mandates, testing, and other factors that could provide insights on how policy actions were affecting how fast and wide the virus would spread. This kind of data would have been invaluable in helping to establish more-targeted measures. As Murray wrote, “This information can provide insights into how combinations of public health mandates—masks, social distancing and school closures, for instance—can keep the virus spread in check. But the government, inexplicably, is not sharing all of its data. Researchers have asked federal officials many times for the missing information but have been told it won’t be shared outside the government.”14
The New York Times had to sue the CDC under the Freedom of Information Act to obtain data on COVID cases tabulated by race and ethnicity. It was basic information that could help focus resources on communities that were being hardest hit by the virus, to help save more lives. The information would eventually prove that Black and Latino Americans were being excessively harmed by COVID in a “widespread manner that spans the country, throughout hundreds of counties in urban, suburban and rural areas, and across all age groups.”15
The dominant narrative over this time period remained that the White House pressured the CDC to subdue certain reporting. The record shows political actions certainly played a role in the suppression of some critical information. But there was another problem afoot. The CDC didn’t have all the pertinent information in the first place, or the ability to collect it in a reliable fashion that would enable timely reporting.
The agency paid local and state health officials to report bespoke feeds of data that was typically collected in a format that made it inaccessible to anyone other than the CDC. The agency used proprietary forms that required healthcare providers to input the information into specialized data streams that were for the CDC’s exclusive use. It slowed the collection of information and increased the chance for errors, since many providers had to extract data from other systems and separately transpose it onto the CDC’s forms. There was also no natural market for this information, so it was not in anyone else’s normal work stream. It was being gathered and shared only for the CDC’s consumption. And since it fell outside the normal systems for collecting healthcare information, with all the compliance rooted in those tasks, there was no embedded audit function. Because the data collection was done separately from other routine healthcare reporting, it also meant that systems for collecting it were generally outdated.
It was possible to derive the same information by culling the data from our existing electronic health records, a process that would have also provided a more natural audit trail and more quality control over the information. The CDC could have taken the role of being an aggregator of existing data feeds rather than a proprietor of unique reporting streams that were distinct from other pools of healthcare information. The CDC took significant pride in its proprietary data feeds, however, and clung to its model, even though the agency’s approach had many obvious shortcomings.
