What’s a productive way to think about undesirable outcomes and how to avoid them, especially in an unpredictable future full of unknown unknowns? Here I describe the technique of vulnerability analysis, which essentially has three steps:
Step 1: Identify undesirable outcomes to be avoided
Step 2: Look for conditions that can lead to such outcomes, i.e., vulnerabilities
Step 3: Manage the system to mitigate or adapt to vulnerable conditions.
The power of vulnerability analysis is that, by starting from outcomes, it avoids making assumptions about what led to the vulnerabilities.
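To make the three steps concrete, here is a minimal, purely illustrative sketch in Python. The outcomes, conditions and mitigations named in it are hypothetical placeholders rather than examples drawn from the discussion above.

```python
# Hypothetical sketch of the three steps as a simple data structure and check.
# All outcomes, conditions and mitigations below are invented placeholders.

# Step 1: undesirable outcomes to be avoided, each mapped to
# Step 2: the vulnerable conditions that can lead to them, and
# Step 3: a mitigation or adaptation to apply when a condition is observed.
VULNERABILITIES = {
    "water shortage": {
        "prolonged drought": "activate demand restrictions",
        "single supply source": "diversify supply sources",
    },
    "model misuse": {
        "undocumented assumptions": "publish an assumptions register",
    },
}

def mitigations_for(observed_conditions):
    """Return the mitigations triggered by the conditions currently observed."""
    actions = []
    for outcome, conditions in VULNERABILITIES.items():
        for condition, action in conditions.items():
            if condition in observed_conditions:
                actions.append((outcome, condition, action))
    return actions

if __name__ == "__main__":
    for outcome, condition, action in mitigations_for({"prolonged drought"}):
        print(f"To avoid '{outcome}' given '{condition}': {action}")
```

Note that the sketch starts from the undesirable outcomes and works backwards to conditions and responses, which is the point of the technique: nothing in it assumes which particular events will bring the vulnerable conditions about.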
In part 1 of our blog posts on why to use patterns, we argued for making unstated, tacit knowledge about integrated modelling practices explicit by identifying patterns, which link solutions to specific problems and their contexts. We emphasised the importance of differentiating between the underlying concept of a pattern and a pattern artefact – the specific form in which the pattern is explicitly described.
How can modellers share the tacit knowledge that accumulates over years of practice?
In this blog post we introduce the concept of patterns and make the case for why patterns are a good candidate for transmitting the ‘know-how’ knowledge about modelling practices. We address the question of how to use patterns in a second blog post.
How can we identify and evaluate decision forks in a modelling project: those points where a different decision might lead to a better model?
Although modellers often follow so-called best practices, it is not uncommon for a project to go astray. Sometimes we become so embedded in the work that we do not take time to stop and think through options when decision points are reached.
One way of clarifying thinking about this phenomenon is to consider the path followed. The path is the sequence of steps actually taken in developing a model or in a problem-solving case. A modelling process can typically be carried out in different ways, which generate different paths that can lead to different outcomes. That is, there can be path dependence in modelling.
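As a loose illustration of path dependence (not of any specific modelling project), the toy Python sketch below enumerates alternative paths through a few invented decision points and shows how the same process can end with different outcomes. The decision points, options and scoring rule are all hypothetical.

```python
# Hypothetical illustration of path dependence: decision points, options and
# the scoring rule below are invented for illustration only.
from itertools import product

# Decision points in a modelling process and the options available at each.
DECISION_POINTS = {
    "problem framing": ["narrow", "broad"],
    "model type": ["statistical", "simulation"],
    "stakeholder involvement": ["low", "high"],
}

def outcome(path):
    """Toy 'quality' score for the model produced by one path of decisions."""
    score = 0
    if path["problem framing"] == "broad":
        score += 1
    if path["model type"] == "simulation" and path["stakeholder involvement"] == "high":
        score += 2  # interaction effect: earlier choices change the value of later ones
    return score

names = list(DECISION_POINTS)
for combo in product(*DECISION_POINTS.values()):
    path = dict(zip(names, combo))
    print(path, "->", outcome(path))
```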
Recently, we have come to understand the importance of human behaviour in modelling and the fact that modellers are subject to biases. Behavioural phenomena naturally affect the problem-solving path. For example, the problem-solving team can become anchored to one approach and look only for refinements of the model that was initially chosen. Due to confirmation bias, modellers may selectively gather and use evidence in a way that supports their initial beliefs and assumptions. The availability heuristic is at play when modellers focus on phenomena that are easily imagined or recalled. Moreover, particularly in high-interest cases, strategic behaviour by project team members can affect the path of the process.
Prediction under uncertainty is typically seen as a daunting task. It conjures up images of clouded crystal balls and mysterious oracles in shadowy temples. In a modelling context, it might raise concerns about conclusions built on doubtful assumptions about the future, or about the difficulty in making sense of the many sources of uncertainty affecting highly complex models.
However, prediction under uncertainty can be made tractable, depending on the type of prediction. Here I describe ways of making predictions under uncertainty in order to test which of several competing conclusions is correct. Suppose, for example, that you want to predict whether objectives will be met. There are two possible conclusions – Yes and No – so prediction in this case involves testing which of these competing conclusions is plausible.
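One common way to carry out such a test, sketched below under invented assumptions, is Monte Carlo sampling: sample the uncertain inputs many times, run a simple model for each sample, and see how often each conclusion holds. The model, distributions and threshold in the sketch are hypothetical placeholders, not taken from any particular case.

```python
# Hypothetical sketch: testing the competing conclusions 'objectives met' (Yes)
# versus 'objectives not met' (No) under uncertainty via Monte Carlo sampling.
import random

def simulate_outcome(rng):
    """Toy model: the outcome depends on two uncertain inputs."""
    demand = rng.gauss(100, 15)      # uncertain future demand
    capacity = rng.uniform(90, 130)  # uncertain available capacity
    return capacity - demand         # non-negative means the objective is met

def test_conclusions(n_samples=10_000, seed=42):
    """Return the fraction of sampled futures in which the objective is met."""
    rng = random.Random(seed)
    met = sum(simulate_outcome(rng) >= 0 for _ in range(n_samples))
    return met / n_samples

if __name__ == "__main__":
    p_yes = test_conclusions()
    print(f"Fraction of sampled futures where objectives are met: {p_yes:.2f}")
    # A fraction near 1 makes the conclusion 'Yes' plausible across the
    # uncertainty, near 0 supports 'No', and intermediate values indicate
    # that neither competing conclusion is robust.
```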