That which we call a rose, by any other name would smell as sweet.
That Shakespeare guy really knew what he was talking about. A rose is what it is, no matter what we call it. A word is simply a cultural agreement about what we call something. And because language is a common thread that binds cultures together, participatory modeling – as a pursuit that strives to incorporate knowledge and perspectives from diverse stakeholders – is a prime candidate for integrating stories into its practice.
To an extent, that’s what every modeling activity does, whether it’s through translating an individual’s story into a fuzzy cognitive map, or into an agent-based model. But I would argue that the drive to quantify everything can sometimes make us lose the richness that a story can provide.
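To make the translation step concrete, here is a minimal sketch of how a stakeholder's story might be encoded as a fuzzy cognitive map. The concepts, causal weights, and update rule below are all invented for illustration – a real participatory exercise would elicit them from the stakeholders themselves.

```python
import numpy as np

# Hypothetical concepts distilled from a stakeholder's story:
# "More rainfall helps my crops, but flooding damages them."
concepts = ["rainfall", "flooding", "crop_yield"]

# Edge weights in [-1, 1] encode the story's causal claims
# (rows = cause, columns = effect); values are illustrative only.
W = np.array([
    [0.0,  0.7,  0.5],   # rainfall -> flooding (+), -> crop_yield (+)
    [0.0,  0.0, -0.8],   # flooding -> crop_yield (-)
    [0.0,  0.0,  0.0],   # crop_yield affects nothing in this tiny map
])

def squash(x):
    """Sigmoid keeps concept activations in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-x))

def run_fcm(W, state, steps=50, tol=1e-6):
    """Iterate the map until activations stabilise."""
    for _ in range(steps):
        new_state = squash(state @ W + state)  # simple additive update
        if np.max(np.abs(new_state - state)) < tol:
            break
        state = new_state
    return state

# Start from a "high rainfall" situation and see where the map settles.
final = run_fcm(W, np.array([1.0, 0.0, 0.0]))
print(dict(zip(concepts, final.round(3))))
```

The point of the exercise is exactly the tension described above: the numbers let us simulate the story, but the sigmoid-and-weights machinery inevitably flattens some of the nuance the original narrative carried.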
In part 1 of our blog posts on why use patterns, we argued for making unstated, tacit knowledge about integrated modelling practices explicit by identifying patterns, which link solutions to specific problems and their context. We emphasised the importance of differentiating between the underlying concept of a pattern and a pattern artefact – the specific form in which the pattern is explicitly described.
How can modellers share the tacit knowledge that accumulates over years of practice?
In this blog post we introduce the concept of patterns and make the case for why patterns are a good candidate for transmitting the ‘know-how’ knowledge about modelling practices. We address the question of how to use patterns in a second blog post.
To make progress on complex real-world problems, transdisciplinary research has come to the forefront. By integrating multiple disciplines as well as the expertise of partners from societal practice, transdisciplinary researchers are able to look at a problem from many angles, with the goal of making both societal and scientific advances.
But how can these different types of expertise be integrated into both a better understanding of the problem and more effective ways of addressing it?
Colleagues and I have collected 43 methods from a number of transdisciplinary research projects dealing with a variety of research topics. We have grouped them into seven classes following an epistemological hierarchy, starting with methods in the narrower sense and progressing to integration instruments.
Can we help the next generation of policy makers, business leaders and citizens to become creative, critical and independent thinkers? Can we make them aware of the nature of the problems they will be confronted with? Can we strengthen their capacity to foster and lead stakeholder processes to address these problems?
By Tuomas J. Lahtinen, Joseph H. A. Guillaume, Raimo P. Hämäläinen
How can we identify and evaluate decision forks in a modelling project: those points where a different decision might lead to a better model?
Although modellers often follow so-called best practices, it is not uncommon for a project to go astray. Sometimes we become so embedded in the work that we do not take time to stop and think through options when decision points are reached.
One way of clarifying thinking about this phenomenon is to think of the path followed. The path is the sequence of steps actually taken in developing a model or in a problem solving case. A modelling process can typically be carried out in different ways, which generate different paths that can lead to different outcomes. That is, there can be path dependence in modelling.
Recently, we have come to understand the importance of human behaviour in modelling and the fact that modellers are subject to biases. Behavioural phenomena naturally affect the problem solving path. For example, the problem solving team can become anchored to one approach and only look for refinements in the model that was initially chosen. Due to confirmation bias, modellers may selectively gather and use evidence in a way that supports their initial beliefs and assumptions. The availability heuristic is at play when modellers focus on phenomena that are easily imaginable or recalled. Moreover, particularly in high-interest cases, strategic behaviour of the project team members can impact the path of the process.
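Path dependence is easy to demonstrate even in a tiny calibration setting: the same data and the same fitting procedure, started from two different initial assumptions (two different paths), can settle on two different models. A minimal sketch, using an invented one-parameter "calibration error" with two local minima:

```python
def loss(x):
    # Invented non-convex calibration error with two local minima,
    # near x ~ -1 and x ~ 2 (illustrative only).
    return (x + 1) ** 2 * (x - 2) ** 2 + 0.5 * x

def gradient(x, h=1e-6):
    """Central finite-difference estimate of the slope."""
    return (loss(x + h) - loss(x - h)) / (2 * h)

def calibrate(x0, lr=0.01, steps=2000):
    """Plain gradient descent: the path is fixed by the starting point."""
    x = x0
    for _ in range(steps):
        x -= lr * gradient(x)
    return x

# Two teams begin from different initial guesses...
a = calibrate(-2.0)
b = calibrate(3.0)
# ...and end at different local optima, despite an identical loss surface.
print(a, b)
```

The anchoring described above works the same way: once a team has descended into one basin, refinements keep it there, and the alternative model is never seen.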
Policy problems are complex and – while sometimes simple solutions can work – complexity tools and complexity thinking have a major part to play in planning effective policy responses. What is ‘complexity’ and what does ‘complexity science’ do? How can agent-based modelling help address the complexity of environment and development policy issues?
At the most obvious level, one can take 'complex' to describe all systems that are not simple – by which we mean systems that can be influenced but not controlled. Complexity can be examined through complexity science and complex system models.
As investigators who engage the public in both modeling and research endeavors we address two major questions: Does citizen science have a place within the participatory modeling research community? And does participatory modeling have a place in the citizen science research community?
Let us start with definitions. Citizen science has been defined in many ways, but we will keep the definition simple. Citizen science refers to endeavors where persons who do not consider themselves scientific experts work with those who do consider themselves experts (around a specific issue) to address an authentic research question.
How can we improve the often poor interaction and lack of genuine discussions between policy makers, experts, and those affected by policy?
As a social scientist who makes and uses models, an idea from Daniel Dennett’s (2013) book ‘Intuition Pumps and Other Tools for Thinking’ struck a chord with me. Dennett introduces the idea of using lay audiences to aid and improve understanding between experts. He suggests that including lay audiences (which he calls ‘curious nonexperts’) in discussions can entice experts to err on the side of over-explaining their thoughts and positions. When experts are talking only to other experts, Dennett suggests, they under-explain, not wanting to insult others or look stupid by going over basic assumptions. This means they can fail to identify areas of disagreement, or to reach constructive consensus, understanding, or conclusions.
For Dennett, the ‘curious nonexperts’ are undergraduate philosophy students, to be included in debates between professors. For me, the book sparked the idea that models could be ‘curious nonexperts’ in policy debates and processes. I prefer and use the term ‘interested amateurs’ over ‘curious nonexperts’, simply because the word ‘amateur’ seems slightly more insulting towards models!
What is deep uncertainty? And how can scenarios help deal with it?
Deep uncertainty refers to ‘unknown unknowns’, which simulation models are fundamentally unsuited to address. Any model is a representation of a system, based on what we know about that system. We can’t model something that nobody knows about—so the capabilities of any model (even a participatory model) are bounded by our collective knowledge.
One of the ways we handle unknown unknowns is by using scenarios. Scenarios are stories about the future, meant to guide our decision-making in the present.
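In modelling terms, scenario analysis usually means running the same model under several internally consistent sets of exogenous assumptions, rather than trying to attach probabilities to any of them. A minimal sketch, with an invented toy water-demand model and three illustrative scenarios:

```python
def water_demand(population, per_capita_use, efficiency_gain):
    """Toy model: annual demand in megalitres (illustrative only)."""
    return population * per_capita_use * (1 - efficiency_gain) / 1000

# Scenarios are coherent stories about the future, not forecasts.
scenarios = {
    "steady_growth":   dict(population=120_000, per_capita_use=300, efficiency_gain=0.05),
    "tech_shift":      dict(population=110_000, per_capita_use=250, efficiency_gain=0.30),
    "boom_and_sprawl": dict(population=160_000, per_capita_use=350, efficiency_gain=0.00),
}

results = {name: water_demand(**params) for name, params in scenarios.items()}
for name, demand in results.items():
    print(f"{name}: {demand:,.0f} ML/year")

# A present-day decision is then stress-tested across the spread of
# outcomes, e.g. "does planned supply cover even boom_and_sprawl?"
```

Note the hedge implicit in the approach: scenarios widen the range of futures a decision is tested against, but they cannot capture genuine unknown unknowns, only futures someone managed to imagine.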
How can you give others your hard-won insights so that their work can be more informed, efficient, and effective? As I’ve gotten older, it is something I think about more.
It is widely recognized that the environment is an integrated but also “open” system. As a result, when working with issues relating to the environment we are faced with the unsatisfying fact that we won’t know “truth”. We develop an understanding that is consistent with what we currently know and what we consider state-of-the-practice methods. But, we can never be sure that more observations or different methods would not result in different insights.
How can co-creation communities use models – simple visual representations and/or sophisticated computer simulations – in ways that promote learning and improvement? Modeling techniques can serve to generate insights and correct misunderstandings. Are they equally useful for fostering new learning and adaptation? Sterman (2006) argues that if new learning is to occur in complex systems then models must be subjected to testing. Model testing must, in turn, yield evidence that not only guides decision-making within the current model, but also feeds back to improve existing models so that subsequent decisions can be based on new learning.