The Masque of Rona
by Liam Kofi Bright and Richard Bradley
In the face of a global pandemic whose scale and impact are for most of us entirely new, governments around the world are adopting policies that severely interfere with our daily lives. I know this has many people wondering whether such decisions could really be justified. And even if we accept that these actions are justified, we might still be interested in how one can sensibly reason about taking action in the face of unprecedented problems. So for the sake of introducing the topic to a wider audience, in today’s post I am joined by my colleague Richard Bradley, an expert on making rational decisions under conditions of uncertainty.
Let’s be very clear about this: we are not epidemiologists or medical or social scientists, so we are not going to offer any direct policy advice. What policy should be adopted depends a great deal on empirical particulars, and the pertinent domains of expertise here are quite distinct from ours. Don’t read this blog for policy advice or for justification of actually existing policies; academics should be clear about what they don’t know.
(One of us - Kofi - sometimes uses Twitter, and has found Dr. Carl Bergstrom’s Twitter feed to be a reliably informative source of regular updates about COVID19, public health concepts, and useful practices we can adopt as states and individuals.)
Instead, as mentioned, what we hope to do in this blog post is give a basic introduction to the problems of decision making under extreme uncertainty. We’re using policy making for COVID19 as a running example of this. We take the topic up because there has been some public concern about the sort of dramatic action many nations are taking even though the specific evidence we have about the virus is quite tentative or based only on models. And previous UK government policies were critiqued by scientists precisely because they were felt to take too great a risk in light of a paucity of evidence. As the scientists in that open letter put it “we are not convinced that enough is known about “behavioural fatigue” or to what extent these insights apply to the current exceptional circumstances. Such evidence is necessary if we are to base a high-risk public health strategy on it”. If we are to understand why people might think this the wrong approach, we need to understand: how should we reason about high risk choices when evidence is unclear or lacking?
A fairly standard theory of rational decision making goes something like this: a good choice is one which yields the greatest expected benefit. Spelling out what that means in any detail turns out to be surprisingly complicated, but the rough idea will suffice for now. Our decisions should be guided by two sorts of things. First, there’s what we want and how much we want it: what outcomes we consider desirable or undesirable, and how much we would be willing to sacrifice to attain them. Second, there is what we believe: how we think the world actually is before we act, what possibilities we think are open to us and how likely we think these various possibilities are, what sorts of interactions with the world are possible, and how we can bring things about. The choice(s) yielding the greatest expected benefit are those which best trade off desirability, given our wants, against attainability, given our beliefs. Such choices are those we rationally ought to make.
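To make the trade-off concrete, here is a minimal sketch in code of the standard theory. The states, actions, and numbers are all invented for illustration; nothing here comes from any real model.

```python
# A toy sketch of the standard theory: pick the action with the greatest
# expected benefit. All states, actions, and numbers are invented.

# Beliefs: how likely we think each state of the world is.
beliefs = {"mild_outbreak": 0.7, "severe_outbreak": 0.3}

# Wants: how desirable each (action, state) outcome is, in arbitrary
# "benefit" units (higher is better).
outcomes = {
    "do_nothing":      {"mild_outbreak": 10, "severe_outbreak": -100},
    "strict_measures": {"mild_outbreak": -5, "severe_outbreak": -20},
}

def expected_benefit(action):
    # Weight each outcome's desirability by how likely we believe it is.
    return sum(beliefs[s] * outcomes[action][s] for s in beliefs)

# The rational choice, on the standard theory, maximises this quantity.
best = max(outcomes, key=expected_benefit)
```

Here the cautious action wins only because of the numbers chosen; change the beliefs or the desirabilities and the ranking can flip, which is exactly why “following the science” can only ever settle half the equation.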
As rough a representation as this is, it allows us to better explain one of the social benefits of science. Scientific modelling allows us to be more precise about what we believe, and hopefully also through empirical inquiry we can make those beliefs more accurate. This can be very useful when we use scientific modelling techniques to represent our beliefs in the context of decision making - for instance when guiding social policy. (One of us - Kofi - has explored in print the idea that this policy guidance is in fact the main purpose of keeping science around.) We can start thinking very precisely about how we expect variables of interest to interact and change in response to potential policies. We can also use probability theory to reason precisely about how likely it is we think various outcomes are to occur. This greater precision, along with increased accuracy, allows us to say with much greater confidence what choice yields the greatest expected benefit, given our evidence.
Of course, such scientific work is addressed to sharpening up our beliefs, not speaking so much to the wants side of things. This is one reason to be wary of people explaining what they are doing by saying they are following the science; this can only ever be half the equation even in our simplified standard theory. Even on the very simple model one really needs to follow up with the question - and to what end?
In any case, consider the way that the now famous Imperial model reasoned about outcomes associated with COVID19. (Note: this team recently came out with a different model reaching broadly similar conclusions, but we will just describe the initial model, because the differences are not important to the points we wish to make here.) Very briefly, the Imperial model simulates the spread of the disease through the British and American populations under a variety of scenarios corresponding to different anti-pandemic policies being adopted. In the scenario in which no social distancing measures were introduced, it was estimated that over 2 million would die in the US and over 500,000 would die in the UK. And unless a combination of drastic measures was taken and sustained, it predicted that the virus would still overwhelm the health care resources of both nations. For details as to how this is actually modelled, there is an excellent and accessible write up here. We note that some of the points about uncertainty we will make in this post were also made there.
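The basic logic of this kind of scenario comparison can be illustrated with a toy model. The sketch below is a bare-bones discrete-time SIR simulation, vastly simpler than the Imperial model and with entirely made-up parameters; it captures only the qualitative point that reducing contacts lowers the epidemic’s peak.

```python
# A bare-bones daily-step SIR simulation comparing a "do nothing"
# scenario with a distancing scenario. All parameters are invented
# for illustration; this is not the Imperial model.

def peak_infected(population, beta, gamma, days, contact_reduction=0.0):
    """Run a daily-step SIR model and return the peak number infected."""
    s, i, r = population - 1.0, 1.0, 0.0
    b = beta * (1.0 - contact_reduction)  # distancing scales transmission
    peak = i
    for _ in range(days):
        new_infections = b * s * i / population
        new_recoveries = gamma * i
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        peak = max(peak, i)
    return peak

# Halving contacts sharply lowers the epidemic's peak, and hence the
# load on health care - the qualitative point behind the scenarios.
peak_nothing = peak_infected(1_000_000, beta=0.3, gamma=0.1, days=365)
peak_distanced = peak_infected(1_000_000, beta=0.3, gamma=0.1, days=365,
                               contact_reduction=0.5)
```

Even a toy like this makes visible how much the output depends on the chosen values of beta and gamma, which is precisely the kind of sensitivity we discuss below.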
It seems this model was very influential in getting the British government to change its strategy and adopt more aggressive suppression measures. So here we have an apparent instance of policy based on reasoning precisely about different policies in a scientifically well informed model. Does our standard theory thus give us reason to be confident this was, in some intuitive sense, the most rational decision?
Well, not really. For one thing, the model itself is silent on many of the dynamics we actually care about and need to know about in order to make our decision rationally. The Imperial model treats the outcome of interest as the number of people likely to die under each scenario. This is because the authors “do not consider the ethical or economic implications” of the strategies they discuss. But their assumptions about the mortality rate of the virus entail that it will be constant across their different scenarios. As the authors put it, they are “not accounting for the potential negative effects of health systems being overwhelmed on mortality.” But we know that if our health care resources are overwhelmed, this will lead to a higher mortality rate for the virus. How many more will die? What difference will it make to this figure when the overwhelming occurs? Would various of the policies they model change how this plays out? We cannot say on the basis of this model, since it was not designed to answer such questions. And this is itself reflective of the fact that we don’t really know precise answers to those questions. As such, even restricting our attention to mortality from the virus, the model does not and cannot tell us what to expect for all the outcomes of interest. And, of course, mortality from the virus is presumably not the only thing we actually care about in an outcome - though since the contrast is often drawn with the economic implications of policies, it should be noted that the contrast between economic and other factors is somewhat artificial, and recommendations from economic experts do not seem to be substantially different.
For another thing, it is necessary to make assumptions about the rate of infection and how many people begin the time period in question already infected. These assumptions determine how much time we have to act, what sort of effects different policies would have at different times, and how deadly the virus is and how fast it spreads. However, as has been much discussed (e.g.) in the case of the UK because of a separate model from an Oxford team, we don’t really know that either! To be clear, the Imperial team made quite reasonable assumptions (in light of some evidence) about when the virus came to the UK and how quickly it spread from this initial exposure. But, as they are perfectly upfront about, these were assumptions which themselves rested on other assumptions which - as the Oxford model makes plain - are open to dispute and uncertainty.
None of this uncertainty, we hasten to add, shows that the Imperial model was a bad model. We’re not qualified to evaluate its specifics. But speaking in general about modelling, it is right and proper to make simplifying assumptions, and the authors are not to be blamed but commended for rapidly doing publicly relevant research while making its limitations clear. It is quite possible that this is the best we can do at our early stage of knowledge about COVID19 - see here for an interesting discussion of this. It just means we have to be clear eyed about what we can and cannot get from this sort of modelling work. It can’t get us well grounded confidence that we are making a decision that will yield the greatest benefit. But if we are open with ourselves about our high degree of uncertainty and try to find a way to work with that, it can sensibly guide our decisions nonetheless.
The first, and perhaps most important, thing is to get some kind of representation of the uncertainty that we face when using scientific models to support decisions. Models, such as the Imperial model we have been looking at, support policy by making predictions about how the future will pan out if the system is left to itself versus if we intervene on it in some way. But these predictions are very sensitive to measurements of initial conditions (such as how many people are already infected), the values of model parameters (such as the fatality rate amongst the infected) and assumptions about the relationships between the variables being modelled (such as that between the transmission rate of the infection and the frequency of social contacts). Uncertainty about the initial conditions can be captured by making probabilistic predictions of outcomes, e.g. by giving predictions of the form ‘With probability x, more than 100,000 people will die; with probability x+y, more than 10,000 will; …’. But we still need to account for the other uncertainties, such as those regarding the true fatality rate and the factors influencing transmission. Failure to do so risks inducing unjustified confidence in the model’s predictions. For instance, it can lead decision makers to think that one course of action can be expected to yield more benefits than another, when the real situation is that whether or not this is true depends on very particular parameter values or assumptions about causal relationships.
One way of capturing the remaining uncertainty that has found increasing support of late is to specify not just a single probability distribution over the outcomes of interest, but a family of them. If we think of each member of the family as the distribution we get from a particular choice of parameter values and modelling assumptions, then the size of the family gives a measure of just how uncertain we are about the correct choices to make.
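As a minimal sketch of the idea: instead of committing to one set of parameter values, we can compute one projection per combination of plausible values and treat the whole collection as our representation of uncertainty. Every figure below (the fatality rates, attack rates, and population) is a made-up illustration, not a claim about COVID19.

```python
import itertools

# A family of estimates rather than a single one: each combination of
# plausible parameter values yields its own projection. All figures
# are invented for illustration.

fatality_rates = [0.005, 0.01, 0.02]  # plausible infection fatality rates
attack_rates = [0.2, 0.4, 0.6]        # plausible fractions eventually infected
population = 66_000_000               # rough UK population

def projected_deaths(ifr, attack_rate):
    return population * attack_rate * ifr

# The "family": one projection per choice of parameter values.
family = [projected_deaths(ifr, ar)
          for ifr, ar in itertools.product(fatality_rates, attack_rates)]

# The spread of the family measures how uncertain we are: here the
# projections range over more than an order of magnitude.
lo, hi = min(family), max(family)
```

A single distribution would hide this spread behind one set of numbers; the family keeps it in view, which is what the robust-choice reasoning below exploits.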
This new way of thinking about our extreme uncertainty allows policy makers to choose actions that can be expected to yield benefits across a very wide range of scenarios. One can prefer such actions to those that are only beneficial in a narrow range of scenarios, even if the latter include actions that are optimal for some very specific parameter values. One could even be so cautious as to require not merely that one’s actions be beneficial in a wide variety of scenarios, but that they have acceptable (expected) consequences under all possible scenarios. Suppose for instance that you have to choose between making two investments, each costing £1000, one of which is guaranteed to return £2000 and the other returning either nothing or £21,000 depending on whether it succeeds. If the probability of success in the latter case is greater than about 9.5%, then its expected return is higher than the former’s. Suppose your best estimate of the probability is 10%, but you regard any estimate between 1% and 15% as reasonable. Then although the second action is optimal on your best estimate, it is not robustly so - on some estimates you are expected to lose your investment. On the other hand, the first action is robustly beneficial, though not optimal on your best estimate.
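The arithmetic behind robustness versus optimality can be checked with a short sketch that recomputes the break-even probabilities from the stated payoffs (£1000 cost, £2000 guaranteed return, £21,000 on success):

```python
# The investment example in numbers. Both options cost £1000. Option A
# returns a guaranteed £2000; option B returns £21,000 on success and
# nothing otherwise, where the success probability p is only known to
# lie somewhere in a range of reasonable estimates.

COST = 1000
RETURN_A = 2000
RETURN_B_SUCCESS = 21_000

def expected_profit_a(p):
    return RETURN_A - COST  # £1000, whatever p is

def expected_profit_b(p):
    return p * RETURN_B_SUCCESS - COST

# Reasonable estimates of p: 1% up to 15%, in steps of 0.1%.
estimates = [0.01 + 0.001 * k for k in range(141)]

# B beats A only when p > 2000/21000 (about 9.5%), and B's expected
# profit is negative when p < 1000/21000 (about 4.8%). So B is optimal
# on some estimates, while A is beneficial on every estimate in the range.
a_robust = all(expected_profit_a(p) > 0 for p in estimates)
b_robust = all(expected_profit_b(p) > 0 for p in estimates)
```

The robust-choice rule picks A precisely because `a_robust` holds across the whole range of estimates, while B’s appeal depends on which estimate within the range turns out to be right.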
So far from leading to indecision, this new way of representing and thus acknowledging our extreme uncertainty can lead to serviceable practical advice. And that is practical advice which is distinct from that issued by the standard theory, and unlike that theory does not require of us that we can say exactly how probable the various outcomes are. Granted that we can use this to make choices, the question remains which choices we should make. Why, for instance, should I pick an option just because it at least gives me an acceptable expected consequence in all the possibilities I consider?
One of us - Bradley - has said before that ‘hedging seems particularly compelling when the costs and benefits of an action in each state of the world accrue differently to different individuals, for in this case reducing the variance can serve the goal of treating individuals more equally’ (Decision Theory with a Human Face, pg. 272). And here we note that the COVID19 pandemic is just the sort of scenario which could make such reasoning tempting. The worst-case scenarios in the model discussed above involved hundreds of thousands of unnecessary deaths in the UK. The calamity would be very unevenly distributed across the population, with the elderly and infirm suffering much more, and more directly, than other sections of the population. In the face of this sort of potential outcome, taking actions which spread the pain out a bit, and in doing so avoid the worst-case scenario, may seem tempting.
Another concern one might have is to maintain as much flexibility as possible for as long as possible. In situations of extreme uncertainty such delaying tactics are useful when, for instance, one’s present uncertainty leaves so many possibilities open that it is not clear which to act upon, but it is possible that future developments will make it relatively clear how to prioritise. Suppose, for instance, you think it possible that an experimental treatment could prove very effective at reducing the damage done by the pandemic. If and when such treatments are identified, one will want to distribute them widely. There would then be some reason to take actions which you at least know can effectively slow the spread of the disease, in order to preserve the functioning of the health care infrastructure that would be necessary to implement the experimental options at mass scale. These would be instances of the general strategy of maintaining flexibility: preserving for as long as possible the chance of taking better actions.
There's no mathematical solution to tragedy, nor an analysis that saves us the responsibility of hard choices. Whatever we do there will be terrible loss. May our decisions be guided by a reason that knows its limits, and a compassion that knows none.