Blackboxing unknown unknowns through vulnerability analysis

By Joseph Guillaume


What’s a productive way to think about undesirable outcomes and how to avoid them, especially in an unpredictable future full of unknown unknowns? Here I describe the technique of vulnerability analysis, which essentially has three steps:

  • Step 1: Identify undesirable outcomes to be avoided
  • Step 2: Look for conditions that can lead to such outcomes, i.e. vulnerabilities
  • Step 3: Manage the system to mitigate or adapt to vulnerable conditions (sketched in code below).
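
The sketch below is a minimal Python rendering of that loop. Every name in it (vulnerability_analysis, candidate_conditions, evaluate, is_undesirable, respond) is a hypothetical placeholder invented for this post, not something defined in the papers cited below.

    # Minimal sketch of the three-step loop; all names are illustrative placeholders.
    def vulnerability_analysis(option, candidate_conditions, evaluate, is_undesirable, respond):
        # Step 1 is encoded in is_undesirable: it flags the outcomes we want to avoid.
        # Step 2: search the candidate conditions for those that lead to such outcomes.
        vulnerabilities = [condition for condition in candidate_conditions
                           if is_undesirable(evaluate(option, condition))]
        # Step 3: mitigate, or set up monitoring and adaptation, for each vulnerability found.
        for condition in vulnerabilities:
            respond(option, condition)
        return vulnerabilities

The important point is that candidate_conditions only needs to describe conditions we can evaluate and monitor, not their underlying causes.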

The power of vulnerability analysis is that, by starting from outcomes, it avoids making assumptions about what led to the vulnerabilities. The causes of the vulnerabilities are effectively a ‘black box’, in other words, they do not need to be understood in order to take effective action. The vulnerability itself is either a known known or a known unknown. The causes of the vulnerability, on the other hand, can be unknown unknowns.

We can of course partially open the black box and try to construct an understanding of those causes – turning them into known unknowns. Mitigating a vulnerability relies on having sufficient knowledge to anticipate and counter it. We can, however, also recognise that the black box remains partially unopened – that the vulnerability might occur for reasons we have not anticipated, but we can still monitor to check whether the vulnerability is occurring and adapt accordingly.

Let’s take an example investment decision. We want to store water from wet times to prepare for drought, and we are considering two options: “surface storage” in a dam and “managed aquifer recharge”, involving storing water underground, as groundwater. We want to make our decision based on outcomes – we want to choose the option that provides the greatest net benefit.

There’s a lot we know about costs and benefits of each option. They both have capital costs – to build the dam and infrastructure to infiltrate or inject water underground. They both have maintenance costs. Benefits come from having water available when needed, and a key advantage of storing water underground is that it reduces evaporation – we expect that this means there will be more water available for dry years, which translates to better socio-economic outcomes.

These costs and benefits are uncertain, but vulnerability analysis gives us a way of thinking through them. Suppose we had decided to invest in managed aquifer recharge. It would be undesirable if surface storage then turned out to be better value – there would be a “crossover” in our preferred option. What, then, are our vulnerabilities?

If we look at each of our costs and benefits in turn, surface storage would have the advantage if its costs were lower than expected and its benefits higher, and vice versa for managed aquifer recharge. If the cost of infiltration infrastructure rises, or the price of an irrigated crop falls, managed aquifer recharge may no longer be worthwhile. We can investigate how much of a change would cause this crossover to occur. If we look at both uncertainties at once, even smaller simultaneous changes in infrastructure cost and crop price may cause a crossover – the preference for managed aquifer recharge is even more vulnerable. These vulnerabilities become scenarios we can discuss within our investment planning process (for a description of scenarios, see Bonnie McBain’s blog post). (Info-gap theory, as described in Yakov Ben-Haim’s blog post, does something similar to vulnerability analysis.)
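
To make the crossover idea concrete, the following sketch sweeps one uncertain factor – the cost of the infiltration infrastructure – to find the point at which surface storage becomes the better-value option. The net-benefit formulas and every number in them are invented purely for illustration; they are not figures from this example or from the papers listed below.

    import numpy as np

    # Toy net-benefit models (all figures are illustrative assumptions).
    def net_benefit_mar(infra_cost, crop_price, water=1000.0):
        # Managed aquifer recharge: less evaporation, so ~10% more water reaches dry years.
        return crop_price * water * 1.10 - infra_cost - 50.0   # 50 = maintenance

    def net_benefit_dam(crop_price, capital_cost=700.0, water=1000.0):
        # Surface storage: evaporation losses reduce the water delivered by ~10%.
        return crop_price * water * 0.90 - capital_cost - 80.0  # 80 = maintenance

    # Step 2: sweep the uncertain infrastructure cost to find the crossover point.
    crop_price = 1.0
    for infra_cost in np.linspace(600, 1200, 61):
        if net_benefit_mar(infra_cost, crop_price) < net_benefit_dam(crop_price):
            print(f"Crossover: surface storage wins once infrastructure cost exceeds ~{infra_cost:.0f}")
            break

The same sweep can be run over crop price, or over both factors jointly, tracing out the crossover surface that defines the conditions – the vulnerabilities – under which our chosen option loses its advantage.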

At this stage, we have not needed to know why the infrastructure cost would rise or the crop price would fall. Both remain unknown unknown black boxes. But we can add information if we have it: we might be able to get a fixed-price contract for the infrastructure, and we might be able to use price forecasts to evaluate how worried we should be about that vulnerability. And importantly, we can do this while only partially opening the black box, by identifying the vulnerabilities introduced by our new information. A fixed-price contract can be associated with a black-box probability that the contractor will not finish the job. Our price forecast can be accompanied by an error or a probability distribution with unknown unknown drivers, used, for example, to maximise expected utility (for a description of expected utility, see the blog post by Siobhan Bourke and Emily Lancsar).
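
As a sketch of what partially opening the black box might look like in calculation, the code below treats the crop price as a forecast distribution (a lognormal, purely as an assumed example) and estimates the expected net benefit of each option – expected utility under a simple risk-neutral utility – along with the probability that the crossover occurs. It reuses the illustrative toy figures from the previous sketch; none of the numbers come from this post or its references.

    import numpy as np

    # Same toy net-benefit models as in the previous sketch (illustrative figures only).
    def net_benefit_mar(crop_price, infra_cost=900.0, water=1000.0):
        return crop_price * water * 1.10 - infra_cost - 50.0

    def net_benefit_dam(crop_price, capital_cost=700.0, water=1000.0):
        return crop_price * water * 0.90 - capital_cost - 80.0

    # Assumed price forecast: a lognormal centred near 1.0, with whatever drives the
    # spread left inside the black box.
    rng = np.random.default_rng(42)
    prices = rng.lognormal(mean=0.0, sigma=0.2, size=10_000)

    mar = net_benefit_mar(prices)
    dam = net_benefit_dam(prices)

    print("Expected net benefit, managed aquifer recharge:", round(float(mar.mean()), 1))
    print("Expected net benefit, surface storage:", round(float(dam.mean()), 1))
    print("Probability of crossover (surface storage better):", round(float((dam > mar).mean()), 3))

A risk-averse utility function could be applied to the sampled net benefits before averaging, and the crossover probability is exactly the kind of quantity a monitoring-and-adaptation plan would keep an eye on.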

Using vulnerability analysis to work backwards from outcomes provides a powerful way of working with unknown unknowns, gradually identifying known unknowns as we come across them, while making the best use of what we consider known knowns.

What has your experience been with vulnerability analyses? Have you seen them used to blackbox unknown unknowns in practice?

To find out more:

Arshad, M., Guillaume, J. H. A. and Ross, A. (2014). Assessing the Feasibility of Managed Aquifer Recharge for Irrigation under Uncertainty. Water, 6(9): 2748–69. (Online) (DOI): http://dx.doi.org/10.3390/w6092748

Guillaume, J. H. A., Arshad, M., Jakeman, A. J., Jalava, M. and Kummu, M. (2016). Robust Discrimination between Uncertain Management Alternatives by Iterative Reflection on Crossover Point Scenarios: Principles, Design and Implementations. Environmental Modelling & Software, 83: 326–43. (Online) (DOI): http://dx.doi.org/10.1016/j.envsoft.2016.04.005

Biography: Joseph Guillaume PhD is a DECRA (Discovery Early Career Researcher Award) Research Fellow in the Fenner School of Environment & Society at the Australian National University in Canberra, Australia. He is an integrated modeller with a particular interest in uncertainty and decision support. Application areas have focussed primarily on water resources. Ongoing work involves providing a synthesis of the many ways we communicate about uncertainty, and their implications for modelling and decision support.

This blog post is part of a series on unknown unknowns as part of a collaboration between the Australian National University and Defence Science and Technology.

For the six other blog posts already published in this series, see: https://i2insights.org/tag/partner-defence-science-and-technology/

Scheduled blog posts in this series:
December 3: Yin-yang thinking – A solution to dealing with unknown unknowns? by Christiane Prange and Alicia Hennig
January 14, 2020: Detecting non-linear change ‘inside-the-system’ and ‘out-of-the-blue’ by Susan van ‘t Klooster and Marjolijn Haasnoot
January 28, 2020: How can resilience benefit from planning? by Pedro Ferreira
February 11, 2020: Why do we protect ourselves from unknown unknowns? by Bem Le Hunte

10 thoughts on “Blackboxing unknown unknowns through vulnerability analysis”

  1. Comment posted on LinkedIn:
    What about where the undesirable outcomes are unknown? For example, HIV positive patients feeding their medication to their animals (which was not considered a potential undesirable outcome when the project commenced).

    My reply:

    There’s been a bit of discussion in other comments here about uncertainty in defining outcomes. The approach outlined definitely assumes a situation where it’s easier to start with objectives, but the general concept of blackboxing unknown unknowns and partially opening up the black box is more broadly applicable.

    For example, in a cost-benefit analysis context I have previously experimented with including a term for an “unaccounted difference” between alternatives.

    In the example given, I could imagine that HIV positive patients taking their medication would be an overall objective, and one could blackbox that objective by considering that there may be unknown unknown reasons and means by which medication may not be taken. A vulnerability analysis on that objective might then help surface ‘feeding medication to animals’ as a known unknown, but at the very least, awareness that an unknown unknown may exist for this objective will hopefully reduce overconfidence!

    Guillaume JHA, Arshad M, Jakeman AJ, Jalava M, Kummu M (2016) Robust Discrimination between Uncertain Management Alternatives by Iterative Reflection on Crossover Point Scenarios: Principles, Design and Implementations. Environmental Modelling & Software 83: 326–43. doi:10.1016/j.envsoft.2016.04.005

  2. (These comments come from the perspective of an aspiring mathematician and reluctantly retired engineer with later interests in systems and “complexity”).

    I have elsewhere described three classes or levels of complexity:
    1. Large systems,
    2. intricate systems, and
    3. complex adaptive systems.

    Unlike conventional engineering systems (including projects), the 3rd category is where a system or project is populated by partially autonomous agents, each of whom can have private interests, incentives, resources and capabilities, and each of whom can engage in adaptive behaviours to find ways of influencing – whether perversely thwarting or promoting – the project’s success or system’s operation.

    Such agents typically become activists, finding allies and creating organisations to further their common ends. (For interest, one can identify such “emergent” behaviours in biological systems, economies, politics, regulatory evasion tactics (e.g. tax evasion, cyber hacking) and military endeavours.)

    It is these “emergent” behaviours – the “unknown unknowns” – that are most difficult to anticipate. From my experience in government policy formation and implementation, and in commercial endeavours, far more effort needs to go into anticipating and managing the risks of perverse behaviours than into simply effecting the intended programs.

    • I suppose anticipating and managing perverse impacts and behaviours should be seen as an essential part of the design and implementation of a program.

      Vulnerability analysis provides a way of tackling perverse behaviours without having to predict them, by focusing on the effect of the behaviour.

      Working within a theory of change for a program, a planner can put in place checks and contingency plans.

      If some extra information is available regarding the nature of the behaviour then some form of mitigation or barrier can be introduced too, again without necessarily having perfect information about the behaviour itself.

      • Hello Joseph, I agree on the need for vulnerability analysis to incorporate the potential for, and consequences of, perverse behaviours.

        From my experience, the project manager or policy implementation manager can usefully identify the parties (agents) likely to engage in relevant behaviours and analyse them. The analysis can consider the agents’ interests, incentives, impacts, resources, capabilities, and their networks of potential allies (associates).

        Emergence of associations can amplify (non-linearly) the influence such agents would have if acting independently. Examples from earlier energy policy that come to my mind generally are:

        1. When governments were first introducing energy efficiency labelling of appliances, the appliance manufacturers and local electricity distributors joined forces to oppose the initiative and also sought to influence other jurisdictions that had little interest in its success.

        2. When a government in Central Asia was seeking to improve the efficiency of water utilisation in its jurisdiction, neighbouring countries and communities used their command of other resources to effectively halt/pause the project, even at the loss of overall economic/environmental value.

        Both cases exemplified the “non-linear” superposition of individual impacts.

        Thanks for your work on establishing a framework for dealing with vulnerabilities more generally.

        • Thanks! I agree that the agents involved are often known, or at least known unknowns, as are their associations. Unknown unknowns probably crop up as part of explaining their behaviour…

          Thanks for the good examples. The flip side is that these non-linear effects can potentially be used to counter perverse behaviours too.

  3. It is rather difficult to comment on something with which I broadly agree. Somewhere in the Tractatus, if I remember correctly, Wittgenstein makes a comment to the effect that we cannot delineate the knowable from outside. Only from within what is knowable and sayable can we characterise its limits. Something analogous is going on here. Unknown unknowns are not amenable to any process aimed at producing new knowledge or at extracting more implications from what is already known. However, by starting from what it is that we care about and reasoning backwards, we might surface assumptions that are critical to achieving what we want. We don’t need to specify how these assumptions might fail while still being able to monitor and prepare for their possible failure. To my knowledge, the first one to make this point was Jim Dewar in his work on assumption-based planning.

    • Thanks Jan for your help in contextualising this blog post. My feeling is that there is more to Jim Dewar’s insight that hasn’t been fully realised. The idea of encapsulation of unknown unknowns into a black box is clear, but the notion of partially opening the black box is under-explored.

      My thinking is probably aligned with Neyman, that knowledge is an act of will – we decide to act as if something is true rather than ever truly knowing it (“inductive behaviour” rather than inductive inference). From this point of view, the border between known and unknown is ill defined, which is what assumption-based planning and other vulnerability-based methods exploit. Faced with a completely unknown unknown world from a sceptical point of view, it is the planner’s job to pin down things they will consider known knowns and known unknowns and select actions on that basis (affected by their tacitly determined unknown knowns).

      Pinning down objectives (and often also a current preferred plan) is often an easy starting point because this is a normative rather than epistemic decision and we typically assume we know our own mind. Everything else is a delicate negotiation of standard and burden of proof, including regarding the anticipated immediate and future consequences of what we decide to pin down. This negotiation process is perhaps the most important part of these methods, but hasn’t received the attention it deserves.

      • I have been thinking about your reaction over the last few days. A few thoughts:
        1. I agree that this idea of partially opening up the black box is under-explored. Taleb, in The Black Swan, makes a remark about how some black swans might be turned into grey swans. However, he does not develop this idea any further. His ambiguous metaphorical language does not help much either in clarifying what he might mean.

        2. I had not heard of Neyman, so references are welcome. I also doubt I agree with calling it an act of will, although I do agree that we often choose to act when we have enough evidence or conviction, rather than on the basis of unassailable truth. As an aside, absolute truth does not, in my view, exist outside of mathematical truths. I think the relevant philosophical literature to look at is the literature on judgements within epistemology. In particular, in the mid-19th century this was, to my knowledge, a rather hotly debated topic with contributions from a whole range of angles.

        3. The fact that something is normative rather than epistemic does not necessarily imply that it is any less uncertain. Normative uncertainty might be under-explored; for example, the allusion to it in the classic characterization of deep uncertainty is too strongly tied to MCDA-speak [Multi-Criteria Decision Analysis]. I seriously wonder how often we are truly aware of what it is that we care about – of what is important. To your point, yes, we often act as if we know, or we are in a standardised context within which what we should care about is considered given (e.g., the ritual insistence on CBA [Cost-Benefit Analysis]).

        4. I agree that the negotiation process, or the use of collective intelligence, is highly important yet under-explored. In the literature on modeling with stakeholders, it seems uncertainty in general and unknown unknowns specifically are not receiving much attention. For example, most group model building implicitly assumes that through a process of joint sense making you will converge to a single shared and correct representation of a system.

        • Thanks Jan.

          Regarding Neyman, the specific reference I had in mind is
          “Inductive Behavior” as a Basic Concept of Philosophy of Science
          https://www.jstor.org/stable/1401671

          There’s also later literature comparing his views to Fisher’s and others’, e.g.
          https://link.springer.com/chapter/10.1007/978-1-4614-1412-4_90

          Regarding point 3, I’d like to clarify that it’s not that normative judgements are somehow objectively less uncertain than epistemic ones, but that in some cases we feel more comfortable anchoring an analysis in our own judgements than in others’.

          Looking forward to exploring this further in future…

