Cost-effective pandemic preparedness
In this post, I outline, at a high level, my thoughts on pandemic preparedness. I cover threats regardless of origin: zoonotic (diseases spread from animals into humans), accidental (those caused by humans without intent), or deliberate (acts of warfare or terrorism). I hope in future to set out my reasoning in more detail, including quantitatively. For now, I present my conclusions with some intuitive reasoning in footnotes.
Summary
Direct1 spending on pandemic preparation might be comparable in cost-effectiveness to the best global health interventions (e.g.: malaria bed nets). This is based on the historic number of deaths from pandemics, which I argue remains a good guide to future impact.
The most likely way future threats could become significantly higher is via non-state malicious actors; however, this is highly uncertain. Promising mitigations, such as mandatory DNA synthesis screening, may be low-cost and sensible.
Indirect philanthropic spending, for instance increasing governmental or private spending on pandemics, is promising. This is especially true for interventions that also decrease the losses due to seasonal epidemics or noninfectious health risks (e.g.: indoor pollution).
There are few, if any, plausible scenarios in which a pandemic leads to societal collapse. The only one I have seen that seems plausible involves a so-called “stealth” pathogen (a pathogen is a virus, bacterium, or other microorganism that causes disease). I am extremely uncertain about the plausibility of such a scenario, and the proposed response plans are inadequate to mitigate this risk, making tractability low.
A major weakness in our pandemic response is inadequate tools. This problem is worsened by early uncertainty in a pandemic, which makes it difficult to calibrate responses accurately. Currently, we have: imprecise and costly tools (e.g.: reducing interactions across society); tools that are hard to implement effectively (e.g.: masks or contact tracing); and tools that scale poorly (e.g.: border controls).
Introduction
Pandemics have historically occurred every 4 years on average, killing, in expectation, 7 in 10,000 of the global population, or about 5.6 million deaths each time.2 This gives an expected death toll of 1.6 million per year.
How much should philanthropists be willing to spend to prevent this? GiveWell estimates that the best global health interventions (e.g.: bednets to prevent malaria) can save a life for around $5,000 on average. Therefore, to be equally cost-effective, preventing all pandemic deaths needs to cost $8 billion per year or less; alternatively, a marginal intervention reducing deaths by 10% needs to cost no more than $800 million per year. The best opportunities, such as the 100 Days Mission, likely cross this bar.3 Many opportunities, though, such as ongoing surveillance, are unlikely to pass it. Quantitative cost-effectiveness analyses are rarely available in this space and should be encouraged to find other opportunities.
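To make the arithmetic explicit, here is the back-of-the-envelope calculation as a short Python sketch; it uses only the rounded figures quoted above.

```python
# Back-of-the-envelope behind the figures above, using the post's expected
# pandemic death toll (~1.6 million/year) and GiveWell's rough ~$5,000-per-life
# bar for the best global health interventions.
expected_deaths_per_year = 1.6e6
cost_per_life = 5_000

full_prevention = expected_deaths_per_year * cost_per_life
print(f"break-even spend to prevent all pandemic deaths: ${full_prevention / 1e9:.0f}B/year")
print(f"break-even spend for a 10% reduction in deaths: ${0.1 * full_prevention / 1e6:.0f}M/year")
```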
However, there are various opportunities available to philanthropists that may be more effective than spending directly on preparedness.
Leveraging government or private sector spending. Rich-world governments will spend $1 million or more to save a life. Spending on pandemic preparedness is much more likely to be cost-effective at this level. If a case can be made for private companies to invest (e.g.: because the interventions will reduce workplace absences), then the counterfactual is probably even better.
Deploying interventions that provide ongoing benefits,4 even in the absence of a pandemic. Some exciting ideas in this category are clinical metagenomic sequencing and improving indoor air quality. There seem to be interesting trials of both, at least in the UK, with new standards for buildings and programmes trialling more widespread metagenomic sequencing.
One-off investments with low ongoing costs. For example, a new intervention (e.g.: a contact tracing app) might require a one-off spend to develop and little maintenance, yet be quickly deployable at the first signs of an outbreak. Here, the one-off spend can accrue long-term benefits. Yet we must be careful that these investments are not outpaced by technological or societal change: many systems developed in the wake of the 2009 H1N1 pandemic were irrelevant only a decade later for COVID-19, by which time video calling was far more available and mRNA vaccines could be developed rapidly.
Increases in risk. Many have argued that, for various reasons, the risk of a pandemic is increasing over time, although the evidence base is weak. For zoonotic risks, there is increased contact with animals due to habitat loss and more factory farming. For accidental risks, there are more labs doing risky research. For deliberate risks, access to the knowledge required to weaponise infectious diseases may become more widespread. Deliberate risks are the most uncertain of the three. Cost-effectiveness is hard to assess, but there are low-cost interventions (e.g.: DNA synthesis screening) that seem worthwhile.
Second-order effects. Some have argued that pandemics could have long-term effects on humanity, up to societal collapse or extinction. The view that such risks should guide our actions is known as longtermism; I am somewhat sceptical of this view. However, even putting aside my scepticism of longtermism generally, the concerns here are currently very speculative.
In short, rich-world governments should spend more on pandemics, and we should look to convince private companies that reducing biological threats is in their best interest. I am very interested in ideas for one-off investments that could improve preparedness, but think the case for these is easy to overstate. Low-cost ways to mitigate increasing risks are also worth pursuing. More research into the likelihood of long-term effects of pandemics, and cost-effectiveness analyses of reducing this risk, would help analyse this space.
The next two sections give my reasoning behind changing risks and long-term threats, probably the most controversial of my views. The final section changes tack to consider the weaknesses in our response tools.
Changing risks
Humanity experienced, on average, 0.3 pandemics per year across the 17th and 18th centuries, 0.5 per year in the 100 years up until the end of the Second World War, but only 0.1 per year in the 79 years since.5 Any argument that the rate of pandemics is increasing must answer why, empirically, this risk has recently been below the historic rate.
Alternatively, one could argue that the risk is increasing because the severity of each pandemic is increasing. I am not aware of any evidence for this. My prior is that we should be able to mitigate pandemics more effectively in the future. COVID-19 taught us a lot, such as the effectiveness of lockdowns and how to deploy vaccines more rapidly than ever before. There are further reasons for optimism. To name a few: the 100 Days Mission (supported by the G7) to go even faster on vaccines, therapeutics, and diagnostics; new tools, including those utilising machine learning, to speed drug discovery; and technologies that will make home-working even easier, making lockdowns less costly (e.g.: virtual reality or self-driving cars).
The argument that zoonotic pandemic risks are rising is weak. Much of the evidence for increasing animal-to-human disease transmission is confounded by better global diagnostics and only considers the post-war period.6 There are reasonable mechanisms for increasing risk, such as deforestation leading to more human/animal interaction. But I have yet to see anything that would shift me from my view that pandemics are rarer now than they have been historically.
Labs handling the most dangerous pathogens (BSL-4 labs) are increasing in number. Based on current trends, their risk will equal the historical risk from zoonotic diseases in the 2030s.7 Therefore, while these labs may change the picture, they do not change the conclusions on cost-effectiveness, which would require order-of-magnitude changes. While lab-caused pandemics might be at the more severe end of historic ones, the distribution of severity is probably not much greater.8
The final class of risk, and perhaps the most uncertain, is deliberate attacks. State actors, which have run bioweapons programmes in the past and in some cases continue to do so, have the greatest capability. However, they are likely constrained by the indiscriminate nature of human-to-human transmissible bioweapons. This seems unlikely to change, although machine learning models that allow these states to discover more dangerous pathogens are perhaps threatening.
The more changeable class is terrorists and other non-state groups. Historically, they have been unable to use bioweapons.9 However, advancements in dual-use technologies could overcome this: for example, more widespread access to DNA synthesis, or easier access to information (e.g.: increased use of open-source publishing, or large language models such as ChatGPT functioning as "search engines on steroids"). Further research, including engagement between the scientific and intelligence/counter-terrorism communities, would help to assess these risks better. There are plausibly some cheap and helpful interventions here, such as DNA synthesis screening.
Long-term effects
Gopal et al. (2023) argue that biological threats endanger the long-term future of humanity by causing societal collapse. They propose two scenarios whereby this could arise: “wildfire” pandemics that are so frightening that enough essential workers stay home, and “stealth” pandemics that infect such a large fraction of the global population before detection that we cannot respond to them.
A wildfire pandemic is a disease spreading so quickly that even lockdowns cannot prevent its spread. Imagine early COVID-19 but more lethal or spreading several times as fast. Eventually, so many essential workers become infected or refuse to work (fearing for their lives) that society breaks down. This seems incredibly unlikely to me. First, such a disease would be far out-of-distribution of anything we have seen previously, combining the worst elements of a variety of pathogens. For a disease to continue growing exponentially in a lockdown (which could be stronger than those in COVID-19), it would need to be one of the most infectious diseases we have ever seen.10 Some pathogens (e.g.: influenza) that are very well-adapted to humans have never managed this. Second, even if this did occur, it is unclear that it would lead to societal collapse. While it is hard to do much except speculate here, my intuition is that this is a very high bar. Humans, especially those fearing for their lives, are ingenious. We would likely find more efficient ways of operating society, needing fewer essential workers. Finally, the idea that essential workers would stay at home while society breaks down around them is implausible to me. I would welcome evidence to change my mind here, but that case has not been made.
Stealth pandemics do seem very scary. Their biological plausibility is highly uncertain, as is the feasibility of engineering such pathogens in the near or medium term. This is compounded because I think such a pathogen would need to be more severe than Gopal et al. suggest.11 A stealth pandemic would need to combine the worst elements of several pathogens humans have faced, making it far out-of-distribution compared to anything we have seen. I am extremely uncertain here, have not seen arguments either way, and do not have any expertise.
There are plausibly some low-cost interventions that would greatly increase our probability of detecting a stealth pandemic before it infects a significant fraction of the population: for example, metagenomic testing in easy-to-access or high-risk populations (e.g.: healthcare workers, blood or respiratory samples taken for other purposes, or travellers). If metagenomic sequencing became cheap and useful enough to justify on clinical grounds, this data would likely be enough. However, what to do following detection remains an unanswered question, and until it is answered, tractability on this issue remains low.
I am not the first to point out that the arguments that biological risks have a reasonable chance of causing long-term harm to humanity are weak. While these threats should be considered, I want to see quantitative cost-effectiveness analyses before we redirect significant resources on this rationale.
Pandemic response
A major constraint on pandemic response is that our response tools are blunt. This means that taking a precautionary approach and responding early is expensive. Finding ways to better calibrate our response should be a high priority.
Restrictions on social activity, such as closing venues, are the fastest way we know of to stop a pathogen12 spreading, the most extreme version being a lockdown. These are expensive from many perspectives, including economic and mental-health costs. They also deprive individuals of their liberties, which is morally questionable. Arguably, this is because of their indiscriminate nature: everyone must stop activity regardless of their personal level of risk. We should look for lower-cost measures.
The most obvious granular measure is isolating only the individuals most likely to be infected. Contact tracing is the usual implementation of this, yet it performed poorly in COVID-19: either the criteria for tracing were too broad (negating much of its use), or it had only marginal effects. The most promising paths for improvement here are rapid diagnostics and automated contact tracing. Both showed promise during the pandemic,13 and could be more impactful with better preparation.
Passive measures could also play an important part here. If we reduce a pathogen's ability to spread, then we can mitigate pandemics without any restrictions on anyone's lives. Promising avenues are improving indoor air quality, either through better ventilation and filtration or through germicidal ultraviolet light.
Another avenue is to improve our ability to calibrate responses early in a pandemic. Large data and model uncertainty14 means that honest estimates of our uncertainty are extremely wide in an outbreak's early phase. Yet the quicker we can characterise the likely severity of an outbreak, the more quickly we can respond appropriately. Numerous academic groups are exploring ways to enhance our response; incremental progress across these areas is likely our best hope.
Combined, the above suggestions could massively improve our response to outbreaks before they become pandemics. Better knowledge would inform a response that can itself be stepped up or down in a more granular way.
Thank you for reading to the end. I am currently looking for a job! If you think your organisation could benefit from this type of thinking, please get in touch.
These thoughts are all my own, informed by discussions with a wide variety of people. I am particularly grateful to the Biosecurity Working Group based at the Meridian Office in Cambridge for both these discussions and comments on drafts. Lin Bowker-Lonnecker and James Lester both provided helpful and thought-provoking feedback. My views have been heavily informed by my research and experience providing scientific advice to the government about the epidemiology of COVID-19.
1. By direct I mean paying for defences, as opposed to lobbying or other efforts that can generate leverage.
2. This is based on the dataset from Marani et al. (2021). My preliminary reanalysis of their data suggests pandemics (killing at least 1 in 100,000 of the global population) occur with this frequency and severity. A recent modelling effort published by the Centre for Global Development (blogpost summary) implies numbers higher by a factor of around 2. Unfortunately, the methodology in that paper is not detailed enough to reconcile the differences easily.
3. For example, CEPI, with a budget of $300m per year, aims to provide a vaccine within 100 days of an outbreak. Since 4% of the $8 billion bar is $320 million per year, CEPI passes the bar if it reduces pandemic deaths by more than 4%, which seems likely.
4. Deaths from seasonal and endemic respiratory illnesses are comparable to the pandemic deaths I give here. Indoor pollutants cause similar harm to indoor pathogen spread, and filtration/ventilation can reduce both harms.
5. As footnote 2.
6. The most prominent papers here are Jones et al. (2008) and Allen et al. (2017). Meadows et al. (2023) appears more convincing, but their results (figure 2) seem somewhat overfit.
7. The numbers of BSL-4 labs being built, reported lab accidents, and virological papers published are all growing at roughly the same rate. There have been one or two pandemics caused accidentally by humans (the 1977 Russian flu and possibly COVID-19). Taking this growth rate in a gamma-Poisson model implies that the risk surpasses the historical pandemic risk between 2032 and 2042, depending on assumptions.
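As an illustration of the kind of calculation this implies, here is a minimal gamma-Poisson sketch in Python. All parameters (the Gamma(1, 1) prior, the 7% annual growth in lab activity, the 1970 baseline, the 0.25/year historical rate) are illustrative assumptions of mine, not the actual fit behind the 2032-2042 range.

```python
import numpy as np

# Toy gamma-Poisson extrapolation of accidental pandemic risk (illustrative).
growth = 0.07                                # assumed annual growth in lab activity
years = np.arange(1970, 2061)
activity = np.exp(growth * (years - 1970))   # relative lab activity, 1970 = 1

observed_events = 1                          # 1977 Russian flu; set to 2 to include COVID-19
exposure = activity[years <= 2024].sum()     # cumulative activity-years to date

# Conjugate update: Gamma(a, b) prior + Poisson counts -> Gamma(a + k, b + exposure).
a_post, b_post = 1 + observed_events, 1 + exposure
rate_per_activity = a_post / b_post          # posterior mean rate per unit of activity

annual_rate = rate_per_activity * activity   # expected accidental pandemics per year
crossing = years[annual_rate > 0.25]         # historical rate: ~1 pandemic per 4 years
print("accidental risk passes the historical rate around",
      crossing[0] if len(crossing) else "after 2060")
```

With these assumptions the crossing lands in the early 2030s; varying the prior, growth rate, and event count moves it around within roughly the quoted range.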
8. Most labs working with viruses are working with ones similar to those we see in nature. For example, one of the leading hypotheses for the 1977 Russian flu pandemic is that a lab was trying to develop a vaccine against the strain and it went wrong. Furthermore, any viruses in labs are (by definition) under active study, so we should be better prepared for them. However, labs are likely to focus on the more concerning pathogens (because these are of greatest public health interest), and sometimes even make pathogens more dangerous. Such labs should come under greater scrutiny.
9. Possibly the closest to succeeding was the doomsday cult Aum Shinrikyo, which attempted to deploy anthrax; however, they made several technical mistakes and their attempt failed. Other bioterrorists (e.g.: the anthrax letters) did not aim to cause societal collapse or human extinction. No terrorist organisation is known to have attempted, or shown interest in, an attack using a human-to-human transmissible disease.
10. The first UK lockdown reduced R0 by roughly 80% (Eales et al., 2022). If the situation required it to prevent society starving, I think lockdowns could be much more effective (e.g.: further reducing the remaining contacts, or widespread effective masking among essential workers), cutting the riskiness of the remaining contacts by 2-10x. This gives the potential for an R0 reduction of 90-98%, which controls anything with a pre-intervention R0 of less than 10-50.
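Spelling out that arithmetic as a minimal sketch (the 80% and 2-10x figures are the ones in this footnote; the rest is algebra):

```python
# An outbreak shrinks when R0 * (fraction of transmission remaining) < 1.
lockdown_reduction = 0.80          # first UK lockdown (Eales et al., 2022)
residual = 1 - lockdown_reduction  # transmission remaining under lockdown

for extra_factor in (2, 10):       # further 2-10x cut in remaining contacts' riskiness
    remaining = residual / extra_factor
    print(f"{extra_factor}x extra: {1 - remaining:.0%} total reduction, "
          f"controls R0 < {1 / remaining:.0f}")
```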
11. Gopal et al. claim, similarly to the wildfire scenario, that a majority of essential workers need to be debilitated or killed to cause societal collapse; it is unclear what their sources for this claim are. However, Ord (2020) argues that at least 50% of humans in every region would need to be killed, and that plausibly as few as 98 survivors could restart civilisation. The stealth nature of the pandemic also means that panic, or people staying home to protect themselves, is less likely.
12. Assuming a respiratory pathogen, the most likely to cause a pandemic.
13. Daily testing of contacts (or other individuals likely to have COVID-19) has shown promise in both modelling and randomised controlled trials. Evaluations of contact tracing apps show effectiveness, but uptake remains a problem.
14. Data uncertainty means that the data does not say what we think it says (e.g.: due to ascertainment or other selection biases). Model uncertainty means that the choice of epidemiological model is itself uncertain. Neither type of uncertainty is normally captured in traditional scientific measures of uncertainty (e.g.: confidence intervals).
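As a toy illustration of data uncertainty (my own construction, with made-up parameters): if ascertainment rises while an outbreak grows, a naive fit overestimates the growth rate, and the regression's confidence interval gives no hint of the problem.

```python
import numpy as np

rng = np.random.default_rng(0)
days = np.arange(30)
true_growth = 0.15                    # true daily exponential growth rate
true_cases = 10 * np.exp(true_growth * days)

# Data uncertainty: testing scales up, so ascertainment rises from 20% to 60%.
ascertainment = np.linspace(0.2, 0.6, len(days))
observed = rng.poisson(true_cases * ascertainment)

# Naive log-linear fit to the observed counts.
mask = observed > 0
slope = np.polyfit(days[mask], np.log(observed[mask]), 1)[0]
print(f"true growth rate:      {true_growth:.3f}")
print(f"estimated growth rate: {slope:.3f}   # inflated by rising ascertainment")
```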