Managing deep uncertainty: Exploratory modeling, adaptive plans and joint sense making

Community member post by Jan Kwakkel

How can decision making on complex systems come to grips with irreducible, or deep, uncertainty? Such uncertainty has three sources:

  1. Intrinsic limits to predictability in complex systems.
  2. A variety of stakeholders with different perspectives on what the system is and what problem needs to be solved.
  3. Dynamic change: complex systems are generally subject to change, and can never be completely understood.

Deep uncertainty means that the various parties to a decision do not know or cannot agree on how the system works, how likely various possible future states of the world are, and how important the various outcomes of interest are. This implies that, under deep uncertainty, it is possible to enumerate possible representations of the system, plausible futures, and relevant outcomes of interest, without being able to rank order them in terms of likelihood or importance.

There is an emerging consensus that effort needs to be devoted to making any decision regarding a complex system robust with respect to such uncertainties. A plan is robust if its expected performance is only weakly affected by deep uncertainty. Alternatively, a plan can be understood as being robust if no matter how the future turns out, there is little cause for regret (the so-called “no regrets” approach to decision making).
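The “no regrets” idea can be made concrete with a regret calculation: score each candidate plan in each enumerated future, measure its shortfall from the best plan for that future, and prefer the plan whose worst-case regret is smallest (minimax regret). A minimal sketch, with invented plans and performance numbers:

```python
# Toy minimax-regret robustness check. Plans, scenarios, and performance
# scores are invented for illustration; higher scores are better, and no
# probabilities are attached to the scenarios (deep uncertainty).
performance = {
    "plan_a": [12, 2, 5],   # performance in scenarios 0, 1, 2
    "plan_b": [9, 5, 6],
    "plan_c": [11, 6, 2],
}
n_scenarios = 3

# Best achievable score in each scenario, over all candidate plans.
best = [max(scores[s] for scores in performance.values())
        for s in range(n_scenarios)]

# Regret of a plan in a scenario: shortfall from that scenario's best plan.
regret = {plan: [best[s] - scores[s] for s in range(n_scenarios)]
          for plan, scores in performance.items()}

# A robust ("no regrets") choice minimizes the worst-case regret.
robust_plan = min(regret, key=lambda plan: max(regret[plan]))
print(robust_plan)  # plan_b
```

Note that plan_b is not the top performer in any single scenario; it is preferred because its performance is only weakly affected by which future materializes, which is exactly the sense of robustness described above.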

Over the last decade a new paradigm, known as ‘decision-making under deep uncertainty’, has emerged to support the development of robust plans. This paradigm rests on three key ideas: (i) exploratory modeling; (ii) adaptive planning; and, (iii) joint sense-making.

Exploratory modeling

Exploratory modeling allows examination of the consequences of the various irreducible uncertainties for decision-making. Typically, in the case of complex systems this involves the use of computational scenario approaches (see also the blog post by Laura Schmitt-Olabisi on Dealing with deep uncertainty: Scenarios).

A set of models that is plausible or interesting in a given context is generated by the uncertainties associated with the problem of interest, and is constrained by available data and knowledge. A single model drawn from the set is not a prediction. Rather, it is a computational ‘what-if’ experiment that reveals how the real-world system would behave if the assumptions this particular model makes about the uncertainties were correct.

A single ‘what-if’ experiment is typically not that informative, other than suggesting the plausibility of its outcomes. Instead, exploratory modeling aims to support reasoning and decision-making on the basis of the set of models. Thus exploratory modeling involves searching through the set of models using (many-objective) optimization algorithms, and sampling over the set of models using computational design of experiments and global sensitivity analysis techniques. By searching through the set of models, one can identify which (combination of) uncertainties negatively affects the outcomes of interest. In light of this, actions can be iteratively refined to be robust with respect to these uncertainties.
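The sampling side of this can be sketched in miniature: draw many samples from the uncertainty space, run each through the model as a separate ‘what-if’ experiment, and then look for the region of the uncertainty space in which a candidate action performs badly. The model, parameter names, and thresholds below are all invented for illustration:

```python
import random

# A deliberately toy simulation model standing in for any real one: it maps
# two deeply uncertain inputs and one policy lever to an outcome of interest.
def toy_model(growth_rate, damage_factor, policy_lever):
    return growth_rate * 100 - damage_factor * (50 - policy_lever)

random.seed(42)  # reproducible sampling

# Sample over the set of models instead of predicting a single future.
experiments = []
for _ in range(1000):
    uncertainties = {
        "growth_rate": random.uniform(0.5, 1.5),
        "damage_factor": random.uniform(0.0, 2.0),
    }
    outcome = toy_model(policy_lever=20, **uncertainties)
    experiments.append((uncertainties, outcome))

# Crude 'scenario discovery': under which combinations of uncertainties does
# the candidate action (policy_lever=20) miss the performance threshold?
bad = [u for u, outcome in experiments if outcome < 40]
print(len(bad), "of 1000 experiments fall below the threshold")
```

In practice this is done with dedicated tooling (e.g., space-filling sampling designs, global sensitivity analysis, and scenario discovery algorithms such as PRIM) rather than a plain loop, but the logic is the same: the set of runs, not any single run, carries the information.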

Adaptive planning

Adaptive planning means that plans are designed from the outset to be altered over time in response to how the future actually unfolds. In this way, modifications are planned for, rather than taking place in an ad hoc manner. The flexibility of adaptive plans is a key means of achieving decision robustness.

This means that a wide variety of futures has to be explored. Insight is needed into which actions are best suited to which futures, as well as what signals from the unfolding future can be monitored in order to ensure the timely implementation of the appropriate actions. Adaptive planning thus involves a paradigm shift from planning in time, to planning conditional on observed developments.
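One way to picture such a plan is as an explicit set of (signpost, trigger, action) rules: signposts are the signals monitored in the unfolding future, triggers are the thresholds at which action becomes due, and the actions themselves are prepared in advance. A minimal sketch, with invented signals and thresholds:

```python
# Toy adaptive plan: actions are prepared up front but only implemented when
# a monitored signpost crosses its trigger. Signals/thresholds are invented.
adaptive_plan = [
    # (signpost to monitor, trigger condition, prepared action)
    ("sea_level_rise_cm", lambda v: v > 30, "raise dikes"),
    ("sea_level_rise_cm", lambda v: v > 60, "build storm surge barrier"),
]

def actions_due(observations):
    """Return the prepared actions whose triggers have fired."""
    return [action for signpost, trigger, action in adaptive_plan
            if signpost in observations and trigger(observations[signpost])]

# Planning conditional on observed developments, not on a fixed schedule:
print(actions_due({"sea_level_rise_cm": 25}))  # []
print(actions_due({"sea_level_rise_cm": 45}))  # ['raise dikes']
```

The shift from planning in time to planning conditional on observed developments is visible here: nothing in the plan says *when* to raise the dikes, only under which observed conditions.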

Joint sense-making

Decision making on uncertain complex systems generally involves multiple actors who have to come to agreement. In such a situation, planning and decision-making require an iterative approach that facilitates learning across alternative framings of the problem, and learning about stakeholder preferences and trade-offs, in pursuit of a collaborative process of discovering what is possible.

Various decision analytic techniques can be used to enable a constructive learning process amongst the stakeholders and analysts. Decision analysis in this conceptualization must shift away from a priori agreement on (or imposition of) assumptions about the probability of alternative states of the world and the way in which competing objectives are to be aggregated, with the aim of producing a preference ranking of decision alternatives. Instead decision analysis must shift to an a posteriori exploration of trade-offs amongst objectives and their robustness across possible futures. Decision analysis should move away from trying to dictate the right choice, and instead aim at enabling deliberation and joint sense-making amongst the various parties to the decision. (For more on sense-making, see Bethany Laursen’s blog post on Making sense of wicked problems).

Closing remarks

Exploratory modeling, adaptive planning, and joint sense-making are the three key ideas that underpin the emerging paradigm of decision making under deep uncertainty. Specific approaches that exemplify these ideas include (many-objective) robust decision making, dynamic adaptive policy pathways, decision scaling, info-gap decision theory, adaptive policy making, and assumption-based planning. Notwithstanding the many technical differences between these approaches, there is an increasing emphasis on what they share. In practice, too, people are increasingly adopting aspects of multiple approaches in order to offer context-specific support for making decisions under deep uncertainty.

What has your experience been with decision making under deep uncertainty? What methods have you found to be useful?

Further reading:
Bankes, S. C. (1993). Exploratory Modeling for Policy Analysis. Operations Research, 41, 3: 435-449.

Haasnoot, M., Kwakkel, J. H., Walker, W. E. and ter Maat, J. (2013). Dynamic adaptive policy pathways: A method for crafting robust decisions for a deeply uncertain world. Global Environmental Change, 23: 485-498. Online (DOI): 10.1016/j.gloenvcha.2012.12.006

Herman, J. D., Reed, P. M., Zeff, H. B. and Characklis, G. W. (2015). How should robustness be defined for water systems planning under change? Journal of Water Resources Planning and Management, 141, 10. Online (DOI): 10.1061/(ASCE)WR.1943-5452.0000509

Kwakkel, J. H., Walker, W. E. and Haasnoot, M. (2016). Coping with the Wickedness of Public Policy Problems: Approaches for Decision Making under Deep Uncertainty. Journal of Water Resources Planning and Management. Online (DOI): 10.1061/(ASCE)WR.1943-5452.0000626

Lempert, R. J., Groves, D. G., Popper, S. W. and Bankes, S. C. (2006). A General, Analytic Method for Generating Robust Strategies and Narrative Scenarios. Management Science, 52: 514-528. Online (DOI): 10.1287/mnsc.1050.0472

Walker, W. E., Haasnoot, M. and Kwakkel, J. H. (2013). Adapt or Perish: A Review of Planning Approaches for Adaptation under Deep Uncertainty. Sustainability, 5: 955-979. Online (DOI): 10.3390/su5030955

Biography: Jan Kwakkel is an associate professor at Delft University of Technology in the faculty of Technology, Policy and Management. He has a background in systems engineering and policy analysis for transport systems. His current research focuses on supporting decision making under deep uncertainty. This involves the development of taxonomies and frameworks for uncertainty analysis and adaptive planning, as well as research on model-based scenario approaches for designing adaptive plans. He has applied his research in various domains, including transportation, energy systems, and health. His primary application domain is climate adaptation in the water sector. A secondary research interest is in text mining of science and patent databases. His research is currently funded for four years through a personal development grant of the Dutch National Science Foundation.

10 thoughts on “Managing deep uncertainty: Exploratory modeling, adaptive plans and joint sense making”

  1. A few observations from putting this alongside ‘normal’ risk management.

    [1] “… parties to a decision do not know or cannot agree on how the system works, how likely various possible future states of the world are, and how important the various outcomes of interest are.”

    There is nothing unusual about this state of affairs. However, there is an expectation in ordered systems that a sound process and expert facilitation will allow agreement to be reached. I think it is important to bear this in mind: there is a tendency to declare everything that is non-trivial complex, and labeling the absence of agreement alone a sign of complexity and deep uncertainty is not helpful.

    [2] “… under deep uncertainty, it is possible to enumerate possible representations of the system, plausible futures, and relevant outcomes of interest, without being able to rank order them in terms of likelihood or importance.”

    I think this is a matter of degree. If scenarios can be enumerated, it is unusual to find no basis for distinguishing between their likelihoods, although not unheard of.

    There is often insufficient attention paid to what are “outcomes of interest” and some basic process hygiene can deal with this.

    [3] Building on the outcomes of interest, there is a danger in adopting plans for which the “expected performance is only weakly affected by deep uncertainty”. We can make a national economy extremely stable by driving it into stagnation. A simplistic drive for robustness in these terms might foster so much caution that the outcomes are predictable but not desirable.

    I think there is a need to make sense of our long term preferences and strategic objectives, expressed in very high level terms, without limiting tactical flexibility. As far as I know, the world of Agile software development relies on this but does so informally, relying on the people involved to separate these levels and manage the relationship between them as they make day to day decisions. The idea is not formalised.

    On the closing question about related experience, I believe that people often feel they are in the condition described in point [1] when in fact they can arrive at some understanding of their options and how to choose a preferred way forward. However, there will be situations in which there is no guiding star and my experience is that people default to off the cuff reactive decision making. Iterative and exploratory methods are obviously preferable to this but not always palatable to those steeped in systems analytical ways.

    • I agree that not all decision making is subject to deep uncertainty. For example, operational decisions taken in a stable environment with a time horizon of a few days to a few weeks are characterized by much lower levels of uncertainty than decision making on large-scale infrastructure investments with a lifetime of decades. It is the latter, rather than the former, where deep uncertainty plays a role, and where techniques for dealing with it are relevant.

      Note also that even if a decision is characterized by deep uncertainty, this does not imply that everything is deeply uncertain. Many decision problems are a mixture of deeply uncertain factors and factors that can be handled by more traditional risk analysis methods.

      Still, since decisions such as large-scale infrastructure planning or climate adaptation generally bring together a variety of stakeholders with different world views, different beliefs, and different preferences for what is desirable, disagreement on, e.g., the probability of a particular scenario is in my experience quite typical. Moreover, in these types of decision problems there are a variety of other uncertainties as well. For example, different methodological choices can be made in how to downscale climate scenarios. This methodological choice can have a substantial influence on the final advice. I fail to see how any meaningful probability can be assigned to such methodological uncertainties.

      I completely agree with your point regarding robustness. That is, limited sensitivity alone is insufficient. The performance itself over the set of scenarios should evidently be desirable as well.

      • Reading this reply, I’ll just elaborate on the matter of when we are in a truly complex situation as opposed to an ordered situation, as the terms are used by Snowden in the Cynefin framework, which I think is pretty much synonymous with deep uncertainty as described here.

        One point I was trying to get across is that people might behave as if they exist in complexity and, for all intents and purposes, actually be so simply because they haven’t tried, haven’t had the time or don’t have the skills to bring about a common understanding and shared purpose among stakeholders. This situation will be complex in that changes may arise and developments might shift direction for reasons no one could see coming even though it makes sense after the event. This is not to say that the situation lends itself to analytical control but rather that time devoted to communication and building trust can eliminate some of the factors that give rise to the complex behaviour: multiple independent stakeholders with loosely aligned but independent motivations.

        In some situations, no amount of mediation and communication will bring stakeholders to a common position and natural phenomena that exceed our modelling and computational capacity will always be intractable.

        Naturally, we start thinking about such things in terms of distinct categories (complex versus ordered), but I believe there is scope for thinking both about heterogeneous systems, with pockets of order alongside areas of complexity, and about time-dependent characterisation where, given time, some of the divergent and independent forces at work can be reconciled and brought together.

        Interesting stuff.

        • The degree to which deep uncertainty and complexity as understood in the Cynefin framework are the same is something on which I cannot really comment. I know of the framework, but I have never looked at it closely enough to really investigate its similarity with deep uncertainty.

          I agree that joint sense making is critical in supporting decision making. Sometimes that can help resolve some of the uncertainty, but this is not guaranteed. In such cases, it can be useful to have tools that enable decision making to proceed despite different models, understandings, etc. For decision making to proceed, what matters is whether it is possible to come to an agreement on what to do. If we can agree on a course of action even though we have different reasons for agreeing to it, that is still fine.

          I agree with the idea that complex and ordered are ends of a spectrum rather than being separate categories. I also agree that in many cases the problem contains a mixture of better characterized aspects, as well as pockets of complexity.

          • I might be blinkering myself but everything in the original piece makes a lot of sense in terms of the Cynefin framework. Snowden has spoken and written a lot on the subject (http://cognitive-edge.com) but it’s not all very accessible to someone wishing to read into it. A couple of resources I have found helpful are Tony Quinlan’s informally recorded talk as he wraps up a research trip, in which he explains some of the features of Cynefin and narrative based research: https://www.youtube.com/watch?v=EDTQNzxiXFA&feature=share and Greg Brougham’s mini book on Cynefin: https://www.infoq.com/minibooks/cynefin-mini-book

            In addition to offering a way to understand systems in which cause-effect relationships might have different characteristics, the Cynefin framework also provides a structure for thinking about how a system or parts of a system can shift. It seems to me that the complex domain is the default position of systems that are heavily influenced by human interactions, which covers most of the interesting challenges we face. Many naturally occurring systems, such as ecosystems, fall in this domain as well.

            Part of the challenge we all face is that the approach to understanding and managing systems that has underpinned formal education for the last century has been, and often still is, firmly placed in the complicated domain. Many efforts to enhance understanding and our capacity to manage our world are based on increasing the depth and strength of these methods, which are not just ill-fitted for complex systems but actually drive behaviour likely to precipitate failure in the management of complex systems. For a nice summary of this in the context of major infrastructure see http://www.insead.edu/facultyresearch/research/doc.cfm?did=55875

  2. Jan, thank you for this post! I’m thinking about what you wrote about sense making: “Instead decision analysis must shift to an a posteriori exploration of trade-offs amongst objectives and their robustness across possible futures.” This is actually quite insightful. Most sense making theories emphasize looking forward based on what you know now, which tends to sound a priori. But, a lot of what we know now is actually a posteriori (from previous experiences). To reference my blog post, it seems Reasons can be a priori or a posteriori, and what you’re saying is that under deep uncertainty, they should be more a posteriori.

    I totally agree…on the surface. But when you dig down into the a posteriori Reasons, you find that people have only learned those a posteriori things based on values they hold a priori. For example: “Model X taught us algal blooms happen at Y threshold.” Why did we focus on algal blooms? Why that threshold? Because we value certain things at bottom, e.g., clear water in a certain range of conditions. So that is one place a priori stuff can never be separated completely from a posteriori stuff.

    Another place is in the sense making process itself: in our feelings and beliefs, directly. I think there’s more to dig into here, but I agree with your main point: under deep uncertainty we need to look at empirical outcomes not solely a priori values and predictions.

    • Thanks for the thoughtful comments. I have read your blogpost and I broadly agree with how you characterize sense making. Interestingly enough, some of the early work on supporting the making of decisions under deep uncertainty was called computer assisted reasoning. This nicely dovetails with sense making as “the integration of reasons into an argument for understanding and believing what something means in answer to a question”.

      In the context of my blog, a priori and a posteriori are primarily used in relation to the analysis. Much traditional decision analytic work requires the upfront, a priori, specification of states of the world, the probabilities of these states of the world, the objectives, and the relative importance of these objectives. In contrast, a lot of the deep uncertainty methods emphasize postponing this specification as much as possible. So, for example, let’s look at the trade-offs across a range of objectives first. If based on this we are ready to make a choice, that is great. If not, why not? Would some MCDA-style aggregation help? Or should we revisit the way in which the problem is being framed?

      This line of thinking is deeply inspired by the work of Alexis Tsoukias (https://doi.org/10.1016/j.ejor.2007.02.039) on constructive decision aiding.
