In the first half of the nineteenth century, European governments began to gather and publish vast amounts of statistical data on the vital characteristics of populations: their rates of marriage, birth, death and disease.[1] The analysis of this data revealed that while the future was contingent, there were nonetheless certain regularities according to which governments could rationally plan. An example is the biometer, developed in the 1840s by William Farr, head of the British General Register Office. This device demonstrated the likelihood of mortality in any given year for a particular age group. It combined national census data and parish death registers to track a group of infants of the same age through life, recording the numbers still alive at periodic intervals until all had died. Such data could reveal “laws of vitality” that would make it possible to anticipate the future fate of these infants. As Farr explained: “Although we know little the labors, the privations, the happiness, the calms or tempests, which are prepared for the next generation of Europeans, we entertain little doubt that about 9000 of them will be found alive at the distant Census in 1921.”[2]
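The cohort-tracking logic behind Farr's biometer can be illustrated with a minimal life-table sketch. The survivor counts below are invented for illustration only; Farr's actual tables were built from census data and parish death registers.

```python
# A minimal life-table sketch of the cohort logic behind Farr's biometer.
# The survivor counts are hypothetical, not Farr's data.

# survivors[i] = number of an initial cohort still alive at age ages[i]
ages = [0, 10, 20, 40, 60, 80]
survivors = [100_000, 85_000, 82_000, 70_000, 45_000, 8_000]

for (a1, l1), (a2, l2) in zip(zip(ages, survivors),
                              zip(ages[1:], survivors[1:])):
    # probability of dying in the interval, given survival to its start
    q = (l1 - l2) / l1
    print(f"age {a1:>2}-{a2:>2}: survivors {l1:>7} -> {l2:>7}, "
          f"interval mortality {q:.3f}")
```

Regularities of this kind, once tabulated across a whole population, are what allowed Farr to project how many of a birth cohort would "be found alive at the distant Census."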
This style of reasoning about disease and death can be termed actuarial. Like insurance, it requires historical data about patterns of incidence of events in order to make rational calculations about future probabilities. In the field of public health, however, it is applied with a different aim: to optimize the health of populations. Once there is sufficient data on differential risk of disease, it becomes possible to develop targeted interventions to reduce mortality rates. This actuarial logic serves to legitimate political decisions on risk, whether or not the potential hazard eventually appears.[3] Over time, this mode of calculation guided policy decisions in fields ranging from public health to industrial accidents to retirement pensions.
The actuarial style of reasoning, oriented toward disease prevention through the management of risk, has remained predominant among experts in public health. However, beginning in the last decades of the twentieth century it has increasingly coexisted with a different approach, one that emphasizes vigilant monitoring of the onset of an unpredictable but potentially catastrophic event. If risk management involves the creation of a common space of calculation through which planners can anticipate the likelihood of future events, vigilance assumes that the future cannot be known and that one must therefore plan for the unexpected. Rather than relying on a calculus of cost and benefit, vigilance enjoins intervention in a precautionary mode: one must act now or one may be held accountable later for the results of inaction.[4]
Two kinds of security mechanism are in play. If risk management leads to the invention of actuarial devices that assemble patterns of historical incidence, vigilance requires sentinel devices that can provide early warning of encroaching danger. An actuarial device is invented for a world in which the possible threats to collective life can be known through statistical analysis and the problem is one of accumulating enough data to guide cost-effective intervention. A sentinel device, in contrast, is devised in order to stimulate action when decision is imperative but knowledge is incomplete.
Sentinel devices are especially salient for experts in monitoring threats whose onset may be sudden and unpredictable, and whose initial effects may be imperceptible to humans. In the field of public health, such tools are designed to detect the emergence of unexpected or unknown disease. One example of vigilant monitoring for encroaching pathogens comes from “viral forecasting,” such as a Google-funded enterprise that collects and tests samples of African bush meat for the emergence of zoonotic disease based on the premise that such a system can “stop the next pandemic before it starts” (see Lachenal forthcoming). Another is “syndromic disease surveillance,” which aims to detect signals of a new epidemic even before doctors have made any diagnoses, for instance by looking at anomalies in emergency room visits or in the use of over-the-counter medications (see Fearnley).
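The alerting logic of a syndromic surveillance system can be sketched in its simplest form: compare today's count of a syndrome against a historical baseline and flag large deviations. The visit counts and the three-sigma rule below are illustrative assumptions; operational systems use far more sophisticated statistical methods.

```python
# A minimal sketch of threshold-based syndromic surveillance
# (invented counts; real systems use more sophisticated statistics).
# Daily emergency-room visits for a syndrome are compared against a
# historical baseline; a day far above baseline triggers an alert.

from statistics import mean, stdev

baseline = [42, 38, 45, 40, 39, 44, 41, 43, 37, 46]  # past daily counts
today = 78

mu, sigma = mean(baseline), stdev(baseline)
threshold = mu + 3 * sigma   # a simple, commonly cited alerting rule

if today > threshold:
    print(f"ALERT: {today} visits exceeds threshold {threshold:.1f}")
else:
    print(f"No alert: {today} visits within expected range")
```

Note that the alert says nothing about what the anomaly is; as the next paragraph describes, the signal only acquires meaning within a larger system of alert-and-response.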
While these devices are designed to alert officials to a significant event in the present, they provide little information about what is likely to happen next. For this reason they are typically linked to guidelines or protocols for taking authorized action in the face of uncertainty. Thus sentinel devices do not operate autonomously, but are integrated into systems of alert-and-response, including preparedness plans that structure official response and decision instruments that guide intervention upon the onset of an event. Such responses, however, may be subject to criticism from actors who are invested in an actuarial approach and who are suspicious of vigilance as a technocratic mode. A recent European controversy around vaccination policy—though it played out in an “ethical” idiom—can be understood as a critique, from some quarters of public health, of the legitimacy of the sentinel device as a guide to techno-political intervention.
The Next Pandemic
When the newly reassorted influenza virus a/h1n1 made its appearance among humans in the spring of 2009, it seemed at first to be the pathogen the international health community had been preparing for. Dozens had apparently died in Mexico from a respiratory ailment, and hundreds more were hospitalized. Reports of cases from around the United States indicated rapid transmission of the virus. There was a possibility that this would become a deadly pandemic, but its key statistical characteristics—in particular, its case fatality ratio—were not yet known. Within weeks an extensive public health apparatus had taken hold of the virus, tracking its global extension through reference laboratories, mapping its genomic sequence, collating data on hospitalization and death rates, working to distribute anti-viral medicines and develop a vaccine, and communicating risk to various publics. While some elements of this apparatus were decades old, such as the Global Influenza Surveillance Network and the egg-based technique of vaccine production, others were quite new, such as internet-based outbreak reporting systems, molecular surveillance, and national pandemic preparedness plans.

Based on reports from Mexico and the US, who Director-General Margaret Chan declared a Public Health Emergency of International Concern (pheic) under the newly revised International Health Regulations (ihr). Here the sentinel was linked up to a decision instrument designed to guide political-administrative action. Following ihr protocol, Chan appointed an Emergency Committee composed of recognized influenza experts, who recommended a Phase Four Pandemic Alert. Given the controversy that followed, it is important to point out that the definition of “pandemic” from who’s 2009 preparedness guidance document referred to “sustained community-level outbreaks” in multiple regions but made no reference to the severity of the virus.
Four days later, on April 29, the Emergency Committee voted to raise the pandemic alert level to Phase Five, indicating that national health authorities should move from “preparedness” to “response” activities. Chan assured the public that who was tracking the emerging pandemic across multiple registers—clinical, epidemiological, and viral—and advised national health ministers to “immediately activate their pandemic plans” (Chan 2009a). For North American and European governments, among other things this meant triggering advance purchase agreements with vaccine manufacturers to produce millions of doses in time for anticipated fall immunization campaigns. In the absence of epidemiological data on the severity of the virus, the pandemic alert system alongside national preparedness plans provided government officials with guideposts for action.[5]
On June 11, Chan announced pandemic alert Phase Six, a full global pandemic. In her public statement, she pointed to the agency’s vigilance as the event unfolded: “No previous pandemic has been detected so early or watched so closely, in real-time, right at the very beginning. The world can now reap the benefits of investments, over the past five years, in pandemic preparedness” (Chan 2009b). At the same time, she also warned of ongoing uncertainty: “The virus writes the rules and this one, like all influenza viruses, can change the rules, without rhyme or reason, at any time.” Vigilant watchfulness would continue to be necessary.
As of early July, experts were still trying to figure out what h1n1’s “rules” were, in particular its rules of transmissibility and virulence. A critical problem remained the lack of data on the overall incidence, as opposed to the number of fatalities, of h1n1 in the exposed population. This was the well-known “problem of the denominator.” A team of epidemiologists argued for immediate investment in serologic surveys so that the case fatality ratio could be calculated: “Without good incidence estimates,” they wrote, “estimates of severity will continue to suffer from an unknown denominator. The effectiveness of control measures will be difficult to assess without accurate measures of local incidence” (Lipsitch et al. 2009). This was an attempt to move from vigilance to risk management through the intensive gathering, sharing and analysis of epidemiological data. The Director of the US Institute of Medicine described such efforts as “epidemic science in real time,” through which “scientists can enable policies to be adjusted appropriately as an epidemic scenario unfolds” (Fineberg and Wilson 2009).
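The “problem of the denominator” can be made concrete with a short numerical sketch. The figures below are invented for illustration, not actual h1n1 statistics: the case fatality ratio is deaths divided by total infections, so when surveillance captures only confirmed (often severe) cases, the denominator shrinks and the apparent severity is inflated.

```python
# Illustrative sketch of the "problem of the denominator"
# (invented numbers, not actual h1n1 statistics).
# The case fatality ratio (CFR) is deaths / total infections, but early
# in an outbreak only confirmed cases are counted, so the denominator is
# too small and the virus appears far more lethal than it is.

deaths = 100
confirmed_cases = 5_000       # cases detected by surveillance
true_infections = 200_000     # what a serologic survey might reveal

naive_cfr = deaths / confirmed_cases
true_cfr = deaths / true_infections

print(f"CFR from confirmed cases only: {naive_cfr:.3%}")   # 2.000%
print(f"CFR with full denominator:     {true_cfr:.4%}")    # 0.0500%
```

This is why Lipsitch and colleagues argued that serologic surveys, by estimating true incidence, were the precondition for moving from vigilance to calculable risk.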
Significant political and economic decisions had to be made in the absence of fully elaborated data on risk. Beginning in the summer of 2009, the US government spent $1.6 billion on 229 million doses of vaccine in what the Washington Post later called “the most ambitious immunization campaign in US history” (Stein 2010). In the early fall, unanticipated delays in vaccine production combined with high demand led to criticism of health officials for poor planning, which faded as the anticipated wave of h1n1 arrived without causing a catastrophic number of deaths.
In Europe, when the fall wave arrived, the apparent mildness of the virus led to widespread public skepticism about state-led vaccination campaigns. The French government spent an estimated five hundred million euros on a campaign that in the end immunized only ten percent of the population. By the winter, the governments of France, Germany and England all sought to renegotiate their advance purchase agreements with vaccine manufacturers and to unload their excess doses on poor countries in the Global South at bargain prices.
A series of political controversies then erupted over the intensive public health response to h1n1. In Le Monde, former French Red Cross president Marc Gentilini admonished the government for its spending on the campaign, noting that “preparing for the worst wasn’t necessarily preparing correctly” (Chaon 2010). A physician and legislator for the governing conservative party decried the misallocation of public health resources, saying “the cost is more than the deficit of all France’s hospitals and is three times [the amount spent] on cancer care” (Daneshkhu and Jack 2010). The French government defended its actions on the grounds of precaution: “I will always prefer to be too prudent than not enough,” said President Sarkozy (Whalen and Gauthier-Villars 2010).
The attention of critics then turned to the warnings from international flu specialists that had led to the mass vaccination campaigns. As Gentilini put it, “I don’t blame the health minister, but the medical experts. They created an apocalyptic scenario. There was pressure from the World Health Organization, which began waving the red warning flags too early” (“Flu Vaccine” 2010). The head of the French Socialist Party demanded a parliamentary inquiry, calling the vaccination campaign a “fiasco” and arguing that multinational drug companies were “the big winners in this affair” (Daneshkhu and Jack 2010). The Chair of the Council of Europe’s Health Committee, a German physician, convoked public hearings on the matter, charging that the who pandemic declaration was “one of the greatest medical scandals of the century” (Macrae 2010).
Witnesses before the European Council’s Health Committee argued that scarce health resources had been squandered on a virus that turned out to be less dangerous than seasonal flu, and that such resources should have been spent on “real” killers, whether heart disease in wealthy countries or infant diarrhea in poor ones. A German epidemiologist cited annual mortality statistics to criticize the who’s emphasis on managing potential outbreaks at the expense of treating the actual “great killers” whose toll was attested to by epidemiological data: “I would like to point out that of the 827,155 deaths in 2007 in Germany about 359,000 come from cardiovascular diseases, about 217,000 from cancer, 4968 from traffic accidents, 461 from HIV/AIDS and zero from SARS or Avian Flu” (Keil 2010). Here, coming from one segment of public health experts, we find the public display of numbers used to make the case that rational intervention must be based on risk calculation rather than on precaution against potential catastrophe.
But rather than see the who as engaged in a different type of reasoned action, critics denounced a lack of objectivity, arguing that conflicts of interest among members of the Emergency Committee must have led to the pandemic declaration. One source of suspicion was the removal of the measurement of severity from the who preparedness guidance document several months before the appearance of h1n1. In June, an investigative report in the British Medical Journal revealed paid consulting relations between leading influenza experts and vaccine manufacturers (Cohen and Carter 2010). The same week, the Council of Europe released its report, concluding that the pandemic declaration had led to “a distortion of priorities of public health services across Europe, waste of huge sums of public money, [and the] provocation of unjustified fears among Europeans,” and suggesting that who deliberations had been tainted by unstated conflicts of interest between experts and the drug companies that profited from the vaccine campaign (Parliamentary Assembly 2010).
In response to these allegations, Chan chartered a review of the agency’s response under the aegis of ihr. The Review Committee’s final report, released in May 2011, absolved the who influenza experts of overstating the seriousness of the pandemic. “Reasonable criticism can be based only on what was known at the time and not on what was later learnt,” the Committee argued, pointing out that “the degree of severity of the pandemic was very uncertain throughout the middle months of 2009, well past the time, for example, when countries would have needed to place orders for vaccine” (World Health Organization 2011). In the case of a novel pathogen, the virulence of an encroaching pandemic cannot be determined based on accumulated knowledge about the past. At a moment of critical decision, one will inevitably suffer from a dearth of numbers.
In the 1840s, the actuarial device in public health was invented in the context of an attempt to know and manage the regularities of collective life. A century and a half later, sentinel devices proliferated in response to a different problem, that of the unpredictable but potentially catastrophic outbreak in a globally interconnected world. These two approaches to securing public health encountered one another around the question of what kind of event h1n1 was to be: an alarm precipitously sounded or a bullet barely dodged.