Can boundary objects be designed to help researchers and decision makers to interact more effectively? How can the socio-political setting – which will affect decisions made – be reflected in the boundary objects?
Here I describe a new context-specific boundary object to promote decision making based on scientific evidence. But first I provide a brief introduction to boundary objects.
How can decision making on complex systems come to grips with irreducible, or deep, uncertainty? Such uncertainty has three sources:
Intrinsic limits to predictability in complex systems.
A variety of stakeholders with different perspectives on what the system is and what problem needs to be solved.
The dynamic change to which complex systems are generally subject, which means they can never be completely understood.
Deep uncertainty means that the various parties to a decision do not know or cannot agree on how the system works, how likely various possible future states of the world are, and how important the various outcomes of interest are.
What does the word ‘pattern’ mean to you? And how do you use patterns in addressing complex problems?
Patterns are repetitions. These can be in space, such as patterns in textiles and wallpaper, which include houndstooth, herringbone, paisley, plaid, argyle, checkered, striped and polka-dotted.
The pattern concept can also be applied to repetitions in time, as occur in music. Those who know the temporal patterns can classify a piece of music as a blues, waltz or salsa. For each of these types of music, there are also classic dance steps, that usually go by the same name; these are patterns of movement in space and time.
These examples get to the idea that patterns can be viewed more generally as any type of repetitive structure or recurring theme that we can look for and potentially recognize or discover and then assign a memorable name to, such as “houndstooth” or “waltz”. Recognizing the pattern may then indicate a particular course of action, such as “perform dance moves that go with a waltz”.
The ability to recognize a pattern and then take appropriate action is something that we associate with intelligence.
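The idea of recognizing a named pattern and then taking the action associated with it can be made concrete in a few lines of code. This is a minimal sketch, and the pattern signatures, names, and actions here are invented for illustration (beats per bar standing in for a musical motif):

```python
# Each named pattern is a short repeating motif; here, groupings of beats.
# These signatures and actions are illustrative inventions, not real music theory.
PATTERNS = {
    "waltz": (3,),        # repeating groups of 3 beats
    "blues": (4, 4, 4),   # a 12-bar structure as three groups of 4
}

ACTIONS = {
    "waltz": "perform dance moves that go with a waltz",
    "blues": "improvise over a 12-bar progression",
}

def recognize(sequence, patterns):
    """Return the name of the first pattern whose motif, repeated,
    reproduces the observed sequence; None if nothing matches."""
    for name, motif in patterns.items():
        if len(sequence) % len(motif) == 0:
            repeats = len(sequence) // len(motif)
            if tuple(sequence) == motif * repeats:
                return name
    return None

observed = (3, 3, 3, 3)               # four bars of three beats each
name = recognize(observed, PATTERNS)  # -> "waltz"
print(name, "->", ACTIONS.get(name))
```

The point of the sketch is the two-step structure: first classify the repetition against a library of named patterns, then look up the course of action that the recognized name indicates.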
What can we learn about the role and importance of scoping in the context of environmental impact assessment?
“Closed” versus “open” scoping
I am intrigued by the highly variable approaches to scoping practice in environmental impact assessment and the considerable range between “closed” approaches and more ambitious and open exercises. Closed approaches to scoping tend to narrow the range of questions, possibilities and alternatives that may be considered in environmental impact assessment, while limiting or precluding meaningful public input. Of course, the possibility of more open scoping is sometimes precluded beforehand by narrow terms of reference determined by regulators.
When scoping is not done well, it inevitably compromises subsequent steps in the process.
Scientific uncertainty creates problems in many fields of public policy. Often, it is not possible to satisfy the high demands on the information input for standard methods of policy analysis such as risk analysis or cost-benefit analysis. For instance, this seems to be the case for long-term projections of regional trends in extreme weather and their impacts.
However, we cannot wait until science knows the probabilities and expected values for each of the policy options. Decision-makers often have good reason to act although such information is missing. Uncertainty does not diminish the need for policy advice to help them determine which option it would be best to go for.
It seems simple enough to say that community values and aspirations should be central to informing government decisions that affect them. But simple things can turn out to be complex.
In particular, when research to inform land and water policy was guided by what the community valued and aspired to rather than solely technical considerations, a much broader array of desirable outcomes was considered and the limitations of what science can measure and predict were usefully exposed.
How can we improve the often poor interaction and lack of genuine discussions between policy makers, experts, and those affected by policy?
As a social scientist who makes and uses models, an idea from Daniel Dennett’s (2013) book ‘Intuition Pumps and Other Tools for Thinking’ struck a chord with me. Dennett introduces the idea of using lay audiences to aid and improve understanding between experts. Dennett suggests that including lay audiences (which he calls ‘curious nonexperts’) in discussions can entice experts to err on the side of over-explaining their thoughts and positions. When experts are talking only to other experts, Dennett suggests they under-explain, not wanting to insult others or look stupid by going over basic assumptions. This means they can fail to identify areas of disagreement, or to reach constructive consensus, understanding, or conclusions.
For Dennett, the ‘curious nonexperts’ are undergraduate philosophy students, to be included in debates between professors. For me, the book sparked the idea that models could be ‘curious nonexperts’ in policy debates and processes. I prefer and use the term ‘interested amateurs’ over ‘curious nonexperts’, simply because the word ‘amateur’ seems slightly more insulting towards models!
What is deep uncertainty? And how can scenarios help deal with it?
Deep uncertainty refers to ‘unknown unknowns’, which simulation models are fundamentally unsuited to address. Any model is a representation of a system, based on what we know about that system. We can’t model something that nobody knows about—so the capabilities of any model (even a participatory model) are bounded by our collective knowledge.
One of the ways we handle unknown unknowns is by using scenarios. Scenarios are stories about the future, meant to guide our decision-making in the present.
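One way to make the scenario idea concrete is to stress-test each decision option against several plausible futures, without assigning probabilities to any of them. The sketch below is illustrative only; the scenarios, options, and payoff numbers are invented, and the maximin rule is just one of several robustness-oriented decision rules one could apply:

```python
# Outcome of each policy option under each scenario (higher is better).
# All names and values here are hypothetical, for illustration only.
payoffs = {
    "option_a": {"boom": 10, "slow_growth": 4, "crisis": -8},
    "option_b": {"boom": 6,  "slow_growth": 5, "crisis": 1},
}

def worst_case(option):
    """No probabilities are assigned; every scenario is treated as possible."""
    return min(payoffs[option].values())

# A robustness-oriented choice (maximin): prefer the option whose
# worst scenario is least bad, rather than maximizing expected value.
robust = max(payoffs, key=worst_case)
print(robust)  # option_b: it performs acceptably in every scenario
```

The contrast with standard risk analysis is that nothing here requires knowing how likely each future is; the scenarios only have to span the range of futures the parties consider plausible.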
How does a modeler know the ‘optimal’ level of complexity needed in a model when those desiring to gain insights from the use of such a model aren’t sure what information they will eventually need? In other words, what level of model complexity is needed to do a job when the information needs of that job are uncertain and changing?
Simplification is why we model. We wish to abstract the essence of a system we are studying, and estimate its likely performance, without having to deal with all its detail. We know that our simplified models will be wrong. But we develop them because they can be useful. The simpler, and hence more understandable, models are, the more likely they are to be useful, and used, ‘as long as they do the job.’
Ask most 21st century citizens whether they like technology and they will respond with a resounding, “Yes!” Ask them why and you’ll get answers like, “Because it’s cool and technology is fun!” or “Technology systems help us learn and understand things.” Or “Technology helps us communicate with one another, keep up with current events, or share what we are doing.” Look at the day-to-day activities of most people on the planet and you’ll find that they use some form of technology to complete almost every activity that they undertake.
When you think about it, technologies are really just tools. And we humans are tool users of old.
I don’t see the world in pictures. I mean, I see the world in all its beautiful shapes and colors and shadings, but I don’t interpret the world that way. I interpret the world through the stories I create. My interpretations of these stories are my own mental models of how I view the world. What I can do then, to share this mental model, is create a more formalized model, whether it is a simple picture (in my case a very badly drawn one), or a system dynamics model, or an agent-based model. People think of models as images, as representations, as visualizations, as simulations. As tools to represent, to simplify, to teach, and to share. And they are all these things, and we need them to function as these things, but they are also stories, and can be interpreted and shared as such.
When computer technology became available for developing and using graphics interfaces for interactive decision support systems, some of us got excited about the potential of directly involving stakeholders in the modeling and analyses of various water resource systems. Many of us believed that generating pictures showing the impact of whatever design and management decisions or assumptions a user might want to make would give users a better understanding of the system being modeled and how they might improve its performance.
We even got fancy with respect to performing sensitivity analyses and displaying uncertainty. Our displays were clear, understandable, and colorful. Sometimes we even witnessed users believing what they were seeing.