Improved resilience can contribute to the ability to deal with unknown unknowns. Dealing with uncertainty is also at the core of every planning activity. The argument put forward here is that planning processes should be considered a cornerstone of any resilience approach. An outline of planning and resilience is given, before presenting fundamental aspects of planning that should be strengthened within a resilience strategy.
From attempting to do as much as possible within a day’s work, to launching rockets into space or managing a nation, everything requires planning.
Change can be expected, envisioned and known, and even created, accelerated or stopped. But change does not always follow a linear and predictable path, nor is it always controllable. Novelty and surprise are inescapable features of life. Non-linear change can involve threats or opportunities.
Although it defines the world we live in, who we are, the outlooks we have and what we do, we often do not relate to non-linear change in a meaningful way. What is holding us back from engaging with it? How do we deal with non-linear change? And what are promising ways forward?
Sometimes we wonder how decisions in Asia can be made at such remarkable speed. How do Asians deal with uncertainty arising from unknown unknowns? Can yin-yang thinking, typical of several Asian cultures, provide a useful answer?
Let’s look at differences between Asian and Western thinking first. Western people tend to prefer strategic planning with linear extrapolation of things past. The underlying mantra is risk management to buffer the organization and to protect it from harmful consequences for the business. But the distinction between risk and uncertainty is critical. Under conditions of uncertainty, linearity breaks down and risk management is of limited use.
What’s a productive way to think about undesirable outcomes and how to avoid them, especially in an unpredictable future full of unknown unknowns? Here I describe the technique of vulnerability analysis, which essentially has three steps:
Step 1: Identify undesirable outcomes, to be avoided
Step 2: Look for conditions that can lead to such outcomes, i.e., vulnerabilities
Step 3: Manage the system to mitigate or adapt to vulnerable conditions.
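The three steps above can be sketched as a small data-driven loop. This is a minimal illustration only; the outcomes, conditions and responses are hypothetical examples, not drawn from the original text.

```python
# Minimal sketch of the three-step vulnerability analysis.
# All outcome, condition and response names are hypothetical illustrations.

# Step 1: identify undesirable outcomes to be avoided.
undesirable_outcomes = ["reservoir runs dry", "crop failure"]

# Step 2: map each outcome to conditions (vulnerabilities) that can lead to it.
vulnerabilities = {
    "reservoir runs dry": ["prolonged drought", "unrestricted extraction"],
    "crop failure": ["prolonged drought", "soil degradation"],
}

# Step 3: manage the system - attach a mitigation or adaptation to each condition.
responses = {
    "prolonged drought": "hold strategic water reserves",
    "unrestricted extraction": "introduce extraction caps",
    "soil degradation": "rotate cover crops",
}

def plan(outcomes):
    """For each undesirable outcome, list its vulnerabilities and responses."""
    for outcome in outcomes:
        for condition in vulnerabilities.get(outcome, []):
            print(f"{outcome}: {condition} -> {responses[condition]}")

plan(undesirable_outcomes)
```

Note that the analysis works backwards from outcomes to conditions: nothing in the sketch requires predicting which specific event will trigger a vulnerability, which is what makes the technique suited to unknown unknowns.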
The power of vulnerability analysis is that, by starting from outcomes, it avoids making assumptions about what led to the vulnerabilities.
In interdisciplinary research it’s common for at least some data to be analysed using statistical techniques. Have you been taught to look for ‘p < 0.05’, meaning that a result this extreme would occur by chance less than 5% of the time if there were no real effect? Do you look askance at your statistician colleagues when they tell you it’s not so simple? Here’s why you need to believe them.
The whole focus on p < 0.05 to the exclusion of all else is a historical hiccup, based on a throwaway line in a manual for research workers. That manual was produced by none other than R.A. Fisher, giant of statistical inference and inventor of statistical methods ranging from the randomised block design to the analysis of variance. But all he said was that “[p = 0.05] is convenient to take … as a limit in judging whether a deviation is to be considered significant or not.” Convenient, nothing more!
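To make Fisher’s point concrete, here is a small permutation test in plain Python. The data are hypothetical, and the helper name `permutation_p_value` is my own; the point is that the p-value is just a continuous probability, and 0.05 is simply a convenient line drawn across it, not a boundary between truth and chance.

```python
# A two-sided permutation test for a difference in means, illustrating
# that a p-value is a probability on a continuous scale; treating 0.05
# as a hard cutoff is convention, nothing more.
import random
import statistics

def permutation_p_value(a, b, n_permutations=10_000, seed=1):
    """Share of label shuffles giving a mean difference at least as large
    as the one observed (a two-sided permutation p-value)."""
    rng = random.Random(seed)
    observed = abs(statistics.mean(a) - statistics.mean(b))
    pooled = list(a) + list(b)
    count = 0
    for _ in range(n_permutations):
        rng.shuffle(pooled)
        perm_a, perm_b = pooled[:len(a)], pooled[len(a):]
        if abs(statistics.mean(perm_a) - statistics.mean(perm_b)) >= observed:
            count += 1
    return count / n_permutations

# Hypothetical measurements from two groups.
group1 = [5.1, 4.9, 5.6, 5.2, 5.8, 5.0]
group2 = [4.6, 4.4, 5.0, 4.5, 4.8, 4.3]
print(permutation_p_value(group1, group2))
```

Whether the printed value lands just under or just over 0.05 should not, by itself, flip a scientific conclusion; the magnitude of the effect and the design of the study matter at least as much.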
How can we improve knowledge exchange among scientists and decision-makers to facilitate evidence-informed decision-making? Of course there is no one-size-fits-all approach, but here I outline four strategies that could be adapted and implemented across different contexts: (i) knowledge co-production, (ii) embedding, (iii) knowledge brokers, and (iv) boundary organisations. These are illustrated in the figure below.
Perhaps the most widely advocated approach to achieving improved knowledge exchange, knowledge co-production refers to the process whereby decision-makers actively participate in scientific research programs from the outset, collaborating with researchers throughout every aspect of the study including design, implementation and analysis.
By Russell Gorddard, Matthew Colloff, Russell Wise and Michael Dunlop
Adapting to climate change can require profound alterations in environmental management and policy. However, the social context of a decision process limits options and resists change, often dooming attempts to adapt to climate change even before they begin. How can decision makers in policy and management more effectively see the institutional and social water they swim in, in order to better drive change?
Values, rules and knowledge (vrk) provide a useful heuristic to help decision makers analyse how the social system shapes their decision context. Put simply, decisions require:
knowledge of options and their implications
values to assess the options
rules that enable implementation.
Viewing the decision context as an interconnected system of values, rules and knowledge can reveal limits to adaptation and suggest strategies for addressing them (Gorddard et al. 2016).
Values are the set of ethical precepts that determine the way people select actions and evaluate events.
Rules are both rules-in-use (norms, practices, habits, heuristics) and rules-in-form (regulations, laws, directives).
Knowledge is both evidence-based (scientific and technical) knowledge and experiential knowledge.
Decision context is the subset of interacting subsystems that are at play in a particular decision process. One core idea is that the decision context may exclude relevant values, knowledge or rules from being considered in decisions. Adaptation may therefore involve change in the decision context.
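The vrk heuristic can be sketched as three filters on a set of options. This is a toy illustration under my own assumptions; the option names, rule constraints and scores are hypothetical, not taken from Gorddard et al.

```python
# Sketch of the values-rules-knowledge (vrk) heuristic as filters on options.
# All option names, rules and scores are hypothetical illustrations.

options = ["restore wetland", "build levee", "managed retreat"]

# Rules: which options can be implemented under current regulation?
permitted = {"restore wetland": True, "build levee": True, "managed retreat": False}

# Knowledge: estimated effectiveness of each option (0-1).
effectiveness = {"restore wetland": 0.7, "build levee": 0.5, "managed retreat": 0.9}

# Values: how strongly stakeholders value each option's outcome (0-1).
value_score = {"restore wetland": 0.8, "build levee": 0.4, "managed retreat": 0.9}

# The decision context admits only options that pass the rules filter;
# here "managed retreat" is excluded even though knowledge and values
# favour it - an example of the context itself limiting adaptation.
feasible = [o for o in options if permitted[o]]
best = max(feasible, key=lambda o: effectiveness[o] * value_score[o])
print(best)
```

The interesting behaviour is exactly the one the text describes: the option ranked highest by knowledge and values never reaches the evaluation step, so adapting may require changing the rules (the decision context) rather than gathering better evidence.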
By Tuomas J. Lahtinen, Joseph H. A. Guillaume, Raimo P. Hämäläinen
How can we identify and evaluate decision forks in a modelling project: those points where a different decision might lead to a better model?
Although modellers often follow so-called best practices, it is not uncommon for a project to go astray. Sometimes we become so embedded in the work that we do not take time to stop and think through options when decision points are reached.
One way of clarifying thinking about this phenomenon is to think of the path followed. The path is the sequence of steps actually taken in developing a model or in a problem solving case. A modelling process can typically be carried out in different ways, which generate different paths that can lead to different outcomes. That is, there can be path dependence in modelling.
Recently, we have come to understand the importance of human behaviour in modelling and the fact that modellers are subject to biases. Behavioural phenomena naturally affect the problem solving path. For example, the problem solving team can become anchored to one approach and only look for refinements in the model that was initially chosen. Due to confirmation bias, modellers may selectively gather and use evidence in a way that supports their initial beliefs and assumptions. The availability heuristic is at play when modellers focus on phenomena that are easily imagined or recalled. Moreover, particularly in high-interest cases, strategic behaviour by project team members can affect the path of the process.
Imagine this scenario. You are confronted by a wicked problem, such as the obesity epidemic. You know it’s a wicked problem – many previous attempts to resolve it have failed.
Suppose that you wish to develop a plan to remedy obesity. You have identified as many relevant areas of expertise and experience as you can and approached appropriate people – researchers, health practitioners, people with political influence, and so on.
You bring them together to pool their expertise—only to find that you now have another problem. Encouraging them to work collaboratively is more difficult than you expected. They talk in jargon. Their understanding is narrow. Their commitment is to their own discipline. Some of their understanding is tacit. Some of them are argumentative. And more. What are you to do?