By Laura Schmitt Olabisi
What is deep uncertainty? And how can scenarios help deal with it?
Deep uncertainty refers to ‘unknown unknowns’, which simulation models are fundamentally unsuited to address. Any model is a representation of a system, based on what we know about that system. We can’t model something that nobody knows about—so the capabilities of any model (even a participatory model) are bounded by our collective knowledge.
One of the ways we handle unknown unknowns is by using scenarios. Scenarios are stories about the future, meant to guide our decision-making in the present. They can be verbal, artistic, graphical, quantitative, or any combination of these. The beauty of scenarios in dealing with unknown unknowns is that they harness the power of speculation and imagination, whether performed by humans or by computers. You may not know which technology is going to make the biggest difference in shifting greenhouse gas emission trajectories, or even how to figure that out. But you can imagine several different possibilities, and considering that range of possibilities gives you some idea of where the future could go, helping you to structure your decisions accordingly.
Some scenario approaches involve running a computer model thousands of times, with thousands of different combinations of parameters and equation structures, including some that the human users wouldn’t have thought to run. The output from this type of exercise sketches out a scenario space, encompassing a wide range of outcomes, from ‘good’ to ‘bad’ to ‘ugly’ and everything in between.
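As a rough illustration, the 'thousands of runs' idea can be sketched in a few lines of Python. The model and parameter ranges below are invented placeholders (a toy 'emissions in 2050' calculation), not any real simulation model:

```python
import random

def emissions_model(growth_rate, tech_adoption, policy_strength):
    """Toy stand-in for a simulation model: maps three uncertain
    parameters to a single hypothetical emissions outcome."""
    baseline = 50.0
    return baseline * (1 + growth_rate) * (1 - tech_adoption * policy_strength)

random.seed(42)
outcomes = []
for _ in range(10_000):
    # Sample each uncertain input across its plausible range,
    # including combinations a human user might not think to try.
    params = {
        "growth_rate": random.uniform(-0.1, 0.5),
        "tech_adoption": random.uniform(0.0, 1.0),
        "policy_strength": random.uniform(0.0, 1.0),
    }
    outcomes.append((params, emissions_model(**params)))

# Sketch the scenario space by labelling runs relative to arbitrary thresholds.
good = [o for o in outcomes if o[1] < 40]
bad = [o for o in outcomes if o[1] >= 70]
print(f"{len(good)} 'good' runs, {len(bad)} 'bad' runs out of {len(outcomes)}")
```

Inspecting which parameter combinations land in the 'good' region (rather than asking which combination is most likely) is the spirit of the exercise described above.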
Decision-makers may not know which of these outcomes is more or less likely than another, but this kind of exercise helps them identify which decisions could shift the outcome more strongly in the ‘good’ direction (or at least away from the ‘bad’).
One example of this type of deep-uncertainty analysis is Robust Decision Making (RDM), developed at the RAND Corporation. RDM typically uses a simulation model — for example, a system dynamics model or (for water-related problems) a hydrological model — to develop the scenario space that informs decisions. It also has a participatory component, with stakeholder deliberations used to define desirable and undesirable outcomes, and to rule out implausible scenarios (those for which there simply isn’t a logical argument about how they could happen).
Questions at the frontier of scenario-based policy analysis include: How can different values and preferences be incorporated into the scenario approach (for example, how do we deal with a situation in which one person’s ‘bad, avoid’ scenario is another person’s ‘great, let’s do it’ scenario)? How can we use more than one model (sometimes called a model ‘ensemble’) to create a scenario space for decision-making? How do we use participatory modeling approaches to scope a scenario space?
Such ways of dealing with deep uncertainty complement other contributions to this blog about the benefits of models for supporting decision-making under uncertainty. For example, in her blog post, Antonie Jetter argued that participatory model-building can help to mitigate or address some other kinds of uncertainty associated with models. Because of their deliberative and collaborative nature, participatory modeling exercises can narrow the range of effect uncertainty, in which we don’t know how components of the model relate to one another. By sharing their experiences of the system from multiple perspectives, stakeholders can triangulate this uncertainty to some extent.
What methods have you used to deal with uncertainty, especially unknown unknowns? Share ideas in the comments!
For more information on improving decision-making under deep uncertainty, see materials provided by the Society for Decision Making Under Deep Uncertainty: www.deepuncertainty.org
Biography: Laura Schmitt Olabisi is an associate professor at Michigan State University, jointly appointed in the Environmental Science & Policy Program and the department of Community Sustainability. She uses system dynamics modeling and other systems methods to investigate the future of complex socio-ecological systems, often working directly with stakeholders by applying participatory research methods. Her research has addressed soil erosion, climate change, water sustainability, energy use, sustainable agriculture, food security, and human health in the U.S., the Philippines, Nigeria, Zambia, Malawi, and Burkina Faso. She is a member of the Participatory Modeling Pursuit funded by the National Socio-Environmental Synthesis Center (SESYNC).
This blog post is one of a series resulting from the second meeting in October 2016 of the Participatory Modeling pursuit. This pursuit is part of the theme Building Resources for Complex, Action-Oriented Team Science funded by the National Socio-Environmental Synthesis Center (SESYNC).
13 thoughts on “Dealing with deep uncertainty: Scenarios”
Unknown unknowns cannot be known, by definition (i.e., you cannot identify them empirically by identifying information such as a weak signal). They are things about the future which we do not know we do not know. But we can imagine futures which are dissimilar to the present, for which no empirical data exists presently. Entrepreneurs do it all the time. Imagination is the basis of scenario planning. We can imagine potential surprises as Shackle showed us: http://www.sciencedirect.com/science/article/pii/S0040162516300671
One scenario method that can allegedly uncover unknown unknowns is the Baconian method: http://www.sciencedirect.com/science/article/pii/S0749597814000302
I wonder whether Robust Decision Making a la RAND might simply be a locally robust method: http://onlinelibrary.wiley.com/doi/10.1111/j.1539-6924.2011.01772.x/full
Well there is a call for papers for a special issue of International Journal of Forecasting on this very issue if anybody wants to collaborate.
I was able to obtain a copy of the paper on the Baconian method referred to by James Derbyshire. It is quite heavy going but makes several points about the extent of our knowledge and how our behaviour can increase it or condemn us to its limitations.
Passing over the history and definitions to start at journal page 272, it points out, among other things, that:
– A planning method that, explicitly or implicitly, rests on enumerating a set of foreseeable future states “grinds to a halt” when something unforeseen arises and the best we can do then, with that method, is to start again with a longer list of possible states;
– Strategies that explore a variety of states will be more informative than those that explore many examples of already-foreseeable states; the latter merely keep increasing our confidence in what we first thought of, instead of seeking alternative explanations for the evidence we say proves our point, explanations that might make alternative states plausible;
– The ‘Baconian algorithm’ described includes a conscious search for previously un-imagined future states although it doesn’t offer any magic way to discover them apart from removing the subtle behavioural factors that limit enthusiasm for the search.
In some ways it is a reformulation of the dangers of and ways to avoid things like confirmation bias. It discusses the challenges of translating theory from hypothesis testing to management planning, where noisy signals make interpretation of data difficult and we cannot repeat experiments.
Some of the discussion seems to me to overlap with the complex domain of David Snowden’s Cynefin framework, where methods are recommended that make no prior assumptions and all action is guided by incremental ‘safe to fail’ experiments or probes. This has the potential to tap into a much larger network of viewpoints and to do so over a longer time than a management team strategy planning exercise.
At the end of the paper, it seems to me to say that, as we knew but have not perhaps explained so clearly before:
– There will always be things we will not imagine that actually crop up and affect our objectives, for good or ill;
– We can push against the boundaries of our knowledge and extend it by deliberate effort although there is no theoretical basis for deciding when to stop;
– Many common practices undermine the incentives to make that effort and confine our decision making to a smaller domain than it could enjoy.
I will continue railing against the term ‘unknown unknowns’, not least because it feeds into that weakening of the incentive to push the boundaries. My own candidate replacement is ‘things we haven’t thought of’, which puts the responsibility for the limitation back onto the decision makers rather than suggesting that it is a feature of the external environment that can be assumed to be beyond our control.
What I was thinking about when I read the Baconian approach was that it allows one to stretch out the state space, possibly uncovering previously unknown unknowns along the way, leading to a more global robustness. However, would that then be at the expense of local robustness, in which the local area in the state space around each newly uncovered scenario is then neglected? I think what is required is both local and global robustness, in which the state space is both stretched and broadened, while at the same time identified local areas within it are searched in detail through modelling techniques. Is there a way to combine the two, or is that what Robust Decision Making does?
Anyway, another paper I came across on this sort of issue is: http://www.sciencedirect.com/science/article/pii/S1364815216310593
Hi James, whether RDM uses a local or global robustness model depends on how robustness is measured. This is a choice up to the analyst. In general, during the scenario discovery phase of RDM, one uses a global satisficing metric. Typically this is a variant of the domain criterion, looking at the fraction of computational experiments in which a strategy fails. In the trade-off analysis phase, typically a global regret metric is used. This is in stark contrast to info-gap, where robustness is defined as the minimum (Euclidean) distance between where a strategy fails and a reference scenario. The use of a reference scenario implies a local robustness model.
Dr. Grey brings up a relevant point: one should aim to maximise the diversity of scenarios being considered, rather than merely the raw number of scenarios. I know of some work on diversity and RDM (http://www.sciencedirect.com/science/article/pii/S1364815216302419), but more work in this space is needed.
Note that a local robustness model might be defensible. In many cases we have a fair sense of the general tendencies but lack knowledge regarding the tails. In those cases, a local robustness model such as the radius of stability can be quite useful. Evidently, one would have to carefully assess the sensitivity of the robustness to the choice of the reference point.
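To make the distinction in this thread concrete, here is a minimal Python sketch of the two robustness models: a global satisficing metric in the spirit of the domain criterion, and a local radius-of-stability measure. The scenario grid and the failure rule are hypothetical toy choices, not any published metric's implementation:

```python
import math

# Each scenario is a point in a 2-D parameter space (a regular grid here).
scenarios = [(x / 10, y / 10) for x in range(11) for y in range(11)]

def fails(strategy_margin, scenario):
    # Toy failure rule: the strategy fails when combined stress
    # across the two parameters exceeds its margin.
    return scenario[0] + scenario[1] > strategy_margin

def domain_criterion(strategy_margin):
    """Global robustness: fraction of scenarios in which the strategy succeeds."""
    ok = sum(1 for s in scenarios if not fails(strategy_margin, s))
    return ok / len(scenarios)

def radius_of_stability(strategy_margin, reference):
    """Local robustness: Euclidean distance from the reference scenario
    to the nearest scenario in which the strategy fails."""
    failures = [s for s in scenarios if fails(strategy_margin, s)]
    if not failures:
        return math.inf
    return min(math.dist(reference, s) for s in failures)

print(domain_criterion(1.5))                  # share of the scenario space where the strategy holds up
print(radius_of_stability(1.5, (0.5, 0.5)))   # how far from the reference point before it breaks
```

The global metric ignores where in the space the failures sit, while the local one depends entirely on the chosen reference point, which is exactly the sensitivity flagged above.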
James, I am happy to work with you on something in this space for a special issue, just contact me if you are interested. We might link this to your recent work regarding Shackle, which I have found quite intriguing. His work is high on my to read list.
Well, Shackle did indeed invent the concept of ‘unknowledge’, which refers to everything we do not know, in contrast to what we do know. And he saw the future as largely subject to unknowledge, whereas a problem with many scenario techniques is their focus on what we do know, which is a tiny subset of everything it might be possible to know. He also referred to ‘bounded uncertainty’, however, implying that unknowledge about the future is not necessarily infinite.
Also interestingly, Shackle thought we should consider future scenarios based on our disbelief in them, rather than our belief in them, because disbelieving one scenario does not preclude the possibility of disbelieving many others. However, if you believe in a particular future outcome, by definition you disbelieve in alternatives.
I’ll send you the IJoF call for papers by email. It does seem to be very relevant to this discussion.
Hello all, thanks for your comments. This is an important discussion which demonstrates that there is a lot of uncertainty (pun intended 😉) around the ways in which some of these terms and phrases are used. Discussing them across research silos, as we are doing here, is helpful.
Stephen, I share your concern that ‘unknown unknowns’, like other terms including ‘sustainability’ perhaps, has entered the popular lexicon and become vacuous to some extent. But I respectfully disagree that it is meaningless and should be divorced from the scenario process. In any decision-making process, including scenarios, in which one does learn something about the ‘unknowns’, there remain things which we haven’t anticipated, don’t know anything about, and which constitute, in a way, the ‘shadow’ side of our decision-making. I think it’s important to acknowledge and name this shadow space, because it keeps us humble, adaptive and resilient in our decision-making. It is impossible to scope out the entirety of the ‘surprises’ which might befall a complex decision-making process, and we need to acknowledge that explicitly and recognize it. Perhaps a better term than ‘unknown unknowns’ can make that point, and if so I am happy to start using it!
Jan, I agree that the concepts of ‘unknown unknowns’ and ‘deep uncertainty’ are different, but I believe them to be related. You are correct that RAND’s definition of deep uncertainty is a situation in which actors do not know which outcome is more or less likely than another, and may not agree about the outcome possibilities. I was trying to make the point (which may not have been as clear as it should have been in a limited blog post) that scenario-based approaches like RDM can explore this space, but never fully–the ‘unknown unknowns’, things we haven’t thought of but which can potentially change our system in radical ways, always remain. I described RDM towards the end of the post, and agree that thousands of simulation runs can be a powerful way of parsing deep uncertainty. When I said that simulation models cannot address deep uncertainty, I meant the use of the models *alone* cannot do so, unless coupled with an approach like RDM or other scenario methods.
I think it is obvious that we never know everything about a situation in which we seek to act and exercise an influence. What is the philosophical position to be taken on this?
There is a danger, another awkward consequence of the way ‘unknown unknowns’ affects people: simply abdicating from any consideration of uncertainty. I think that making the unforeseeable too big a part of our thinking and too easy to tolerate has undesirable effects on behaviour. I’ve encountered a response that might be paraphrased as “we can’t predict the future so we might just as well do what we usually do and stop worrying so much”. When pressed, a lot of what people have hidden in the unforeseeable pigeon hole can be addressed, if it is expressed at a high level rather than insisting on detailed mechanistic descriptions.
I think the constructive way to deal with the limits on our knowledge is to do with options and responsive capacity: having the capacity and means to respond when something unforeseen and possibly unforeseeable crops up. That is a significant shift in thinking from the standard systems approach. It is more in keeping with a complexity based approach of the sort embodied in Snowden’s Cynefin framework.
Laura, I completely agree with your point that there will always be residual uncertainty. No matter how exhaustively we explore possibilities using some type of scenario approach, we have to keep open the possibility of being wrong or of being surprised. This brings up the next question: can we do anything about residual uncertainty?
In my view, yes we can. If we design our strategies to be adapted over time in response to how the future actually plays out, we have the ability to react to unforeseen circumstances. Admittedly, this does not mean we can respond to all unforeseen circumstances, but up to a point we can. This is similar to Dr. Grey’s point about responsive capacity.
Agreed! This is the underlying philosophy behind, say, adaptive management in the environmental realm.
On the subject of adaptation, this is a great paper which I came across recently: http://onlinelibrary.wiley.com/doi/10.1111/j.1539-6924.2012.01792.x/abstract
I really wish we could dispense with this meaningless concept of unknown unknowns. If you contemplate a scenario in which a particular set of circumstances can arise then you know something about it. In the extreme, you might only know the type of consequences that you are concerned about, not how they will arise because there are many ways they can come about. A major shift in commodity prices, a large change in taxes or an unusual combination of interest rate changes and unemployment for instance.
The term ‘unknown unknowns’ is catchy but vacuous. If you really don’t have any idea about certain circumstances or their consequences you won’t think about what they might mean for you. As soon as you do think about that, the unknown label is gone. You might not know everything about the matter but you never do.
If it was just a bit of extravagant eye catching language, I wouldn’t mind too much but the term has become a way out for people who don’t want to think things through. It’s lazy language.
Quite often, when one forces someone using such language to dig a bit deeper they will frame their vague concerns in respectable terms that actually allow for action to be taken. We might not know what will disturb a research program developing a new chemical process for mineral extraction but we might know that chemical processing developments are regularly thrown off track for several months by either a major safety incident, unforeseen chemical interactions or scale up problems. We might not know what particular challenges a startup business will encounter but we may be certain that there is a high chance of something falling apart and derailing the business before it becomes viable.
All the good arguments for scenario techniques can be made without having to use slack, unhelpful language.
Sorry for the rant but this is a real problem undermining what could otherwise be sound decision support in complex systems.
Agent-based Models (ABMs) can handle some ‘unknown unknowns’, or at least produce them – that’s one of their advantages over System Dynamics Models (SDMs). As I have heard Stuart Kauffman say a few times, the number of unknown unknowns is context dependent – it is ‘indefinite’, not infinite – but it is still the great challenge for ABMs as well.
The comparison between Agent-based Modeling (ABM) and System Dynamics (SD) makes little sense. Both can produce surprising results. They both produce emergent behaviour. The only difference is that ABM operates at a lower level of granularity. For a very good comparison between the two approaches see: https://dl.dropboxusercontent.com/u/4180603/HRPapers/Heterogeneity-Network-Structure.pdf
I agree with Dr. Grey that the term unknown unknowns cannot be used to describe a scenario method. The moment you can enumerate possibilities, as you do in a scenario study, you know something. It is thus not useful for unknown unknowns. Also, it is strange to equate deep uncertainty with unknown unknowns. If we go back to the original work from the RAND Corporation on deep uncertainty, you’ll see that they define deep uncertainty as a situation in which a group of actors does not know or cannot agree on a variety of things. In this case, analysts might still be able to enumerate possibilities.
Similarly, I take issue with the claim that models cannot be used under deep uncertainty. In fact, one of the key approaches advocated by RAND for conditions of deep uncertainty is robust decision making. This is a model-driven approach which relies on thousands of simulation runs. In fact, I would argue that under deep uncertainty, and when dealing with complex systems, a model-based approach where you perform thousands of ‘what-if’ simulation runs is much more defensible than a qualitative approach, assuming you can develop or have access to a simulation model that is accepted by the decision makers. This is because human reasoning about complex systems is fundamentally flawed (see the classic work on this in the system dynamics community).