Why model?

By Steven Lade


What do you think about mathematical modelling of ‘wicked’ or complex problems? Formal modelling, such as mathematical modelling or computational modelling, is sometimes seen as reductionist, prescriptive and misleading. Whether it actually is depends on why and how modelling is used.

Here I explore four main reasons for modelling, drawing on the work of Brugnach et al. (2008):

  • Prediction
  • Understanding
  • Exploration
  • Communication.

I start with mental models – the informal representations of the world that we all use as we go about both our personal and professional lives – and then move on to formal models.

Mental models

We are all modellers! We all use mental models every day for a variety of different purposes:

  • To make quantitative predictions about the future. For example, if I throw the ball this fast, where will it land? How much money would my house sell for?
  • To understand things that happened. For example, why did the cake I baked not turn out as expected? Why was Donald Trump elected president of the USA, against many expectations?
  • To explore alternative versions of our worlds. For example, what if I added a room to my house? What is life like for someone living in another country?
  • To communicate. Communication is nothing more than the construction and sharing of mental models via language, and we use it every day. For example, when we talk about love, the weather, justice, our garden, or tax, we use representations of these concepts that are at least partially shared among those involved in the conversation.
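The ball-throwing prediction above is simple enough to formalise. Here is a minimal sketch (the function name and the no-air-drag assumption are mine, purely for illustration) of how a mental model of "where will the ball land?" becomes a formal predictive model:

```python
import math

def landing_distance(speed, angle_deg, height=0.0, g=9.81):
    """Predict the horizontal landing distance of a thrown ball (no air drag)."""
    angle = math.radians(angle_deg)
    vx = speed * math.cos(angle)
    vy = speed * math.sin(angle)
    # Time until the ball returns to ground level: solve h + vy*t - g*t^2/2 = 0
    t = (vy + math.sqrt(vy**2 + 2 * g * height)) / g
    return vx * t

print(round(landing_distance(10, 45), 2))  # ~10.19 m for a 10 m/s throw at 45 degrees
```

Even this toy example makes the model's assumptions (no air resistance, constant gravity) explicit in a way a mental model rarely does.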

Formal models

All these purposes can also be fulfilled by formal models.

Prediction is the model purpose most commonly associated with formal modelling, though in wicked problems prediction should be treated cautiously and with full understanding of the model’s assumptions.

Understanding is the model purpose most commonly pursued in traditional science, where models are used to test hypotheses against observations.

The remaining two purposes, exploration and communication, are the most relevant for wicked problems, yet are arguably the most underappreciated.

Formal models used for exploration are nothing more than reasoning tools that support our own mental modelling capacity. The effects of complex system dynamics features such as multiple interacting feedbacks can be difficult to anticipate and may even be counter-intuitive: that’s why they’re considered ‘complex’.

An example can be seen in research on how different poverty-environment relationships affect which poverty alleviation strategies are likely to be effective (Lade et al., 2017). We showed that in situations where poor people degrade their environment—usually because they have no choice—asset inputs may help break that cycle of poverty. But in situations where poor people maintain their environment, and agricultural intensification leads to increased environmental degradation, asset inputs may be counterproductive and even reinforce poverty, requiring other strategies.
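The threshold behaviour behind results like these can be sketched with a deliberately minimal, single-variable toy model (my own illustration, not the model from Lade et al., 2017): below a tipping threshold, assets decay toward a poverty trap; above it, they grow. A one-off asset input then only breaks the cycle if it is large enough to cross the threshold.

```python
def simulate(a0, boost=0.0, steps=400, dt=0.1, threshold=0.3):
    """Toy poverty-trap model with assets a in [0, 1].

    da/dt = a * (a - threshold) * (1 - a): below the threshold, assets
    decay toward the poverty trap at a = 0; above it, they grow toward
    a = 1. `boost` is a one-off asset input applied at t = 0.
    """
    a = a0 + boost
    for _ in range(steps):
        a += dt * a * (a - threshold) * (1 - a)
    return a

print(round(simulate(0.2), 2))              # no input: stays trapped near 0
print(round(simulate(0.2, boost=0.05), 2))  # small input: still trapped
print(round(simulate(0.2, boost=0.2), 2))   # larger input: escapes toward 1
```

Exploring such a model makes the counter-intuitive point vivid: two interventions that differ only in size can have qualitatively different outcomes.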

Finally, sometimes the process of constructing the formal model can be just as valuable as the model itself. Participatory model construction encourages communication of each participant’s mental models, thereby developing awareness of others’ perspectives and possibly challenging one’s own mental model. In an earlier blog post, Jen Badham and Gabriele Bammer described how jointly designing formal models can help stakeholders draw out differences in their mental models of a complex system. For example, a modelling process could help draw out the different understandings that farmers and government policy-makers have of an agricultural system and the different challenges that they face when interacting with this complex system.

In summary, mathematical models have a valuable place even in complex systems with wicked problems, especially when used for exploration and communication. As with any tool, the key is to be aware of why you’re using them.

Why do you model? Do you have other modelling purposes to share? Or additional examples of the reasons for modelling described above?

Brugnach, M., Pahl-Wostl, C., Lindenschmidt, K. E., Janssen, J. A. E. B., Filatova, T., Mouton, A., Holtz, G., van der Keur, P. and Gaber N. (2008). Complexity and Uncertainty: Rethinking The Modelling Activity. U.S. Environmental Protection Agency Papers, 72. (Online): http://digitalcommons.unl.edu/usepapapers/72

Lade, S. J., Haider, L. J., Engström, G. and Schlüter, M. (2017). Resilience offers escape from trapped thinking on poverty alleviation. Science Advances, 3, 5: e1603043. (Online) (DOI): https://doi.org/10.1126/sciadv.1603043

Biography: Steve Lade is a researcher at the Stockholm Resilience Centre, Stockholm University, Sweden and an Honorary Senior Lecturer at the Fenner School of Environment and Society, Australian National University in Canberra, Australia. He uses complex systems tools to study the resilience and sustainability of human and natural systems including fisheries, poverty traps and the Earth system. He is currently funded by a young researcher mobility grant from the Swedish Research Council Formas.

Using the concept of risk for transdisciplinary assessment

Community member post by Greg Schreiner


Global development aspirations, such as those endorsed within the Sustainable Development Goals, are complex. Sometimes the science is contested, the values are divergent, and the solutions are unclear. How can researchers help stakeholders and policy-makers use credible knowledge for decision-making, which accounts for the full range of trade-off implications?

‘Assessments’ are now commonly used. Following their formal adoption by the Intergovernmental Panel for Climate Change (IPCC) in the early 1990s, they have been used at the science-society-policy interface to tackle global questions relating to biodiversity and ecosystems services, human well-being, ozone depletion, water management, agricultural production, and many more.

Managing deep uncertainty: Exploratory modeling, adaptive plans and joint sense making

Community member post by Jan Kwakkel


How can decision making on complex systems come to grips with irreducible, or deep, uncertainty? Such uncertainty has three sources:

  1. Intrinsic limits to predictability in complex systems.
  2. A variety of stakeholders with different perspectives on what the system is and what problem needs to be solved.
  3. Ongoing dynamic change, which means complex systems can never be completely understood.

Deep uncertainty means that the various parties to a decision do not know or cannot agree on how the system works, how likely various possible future states of the world are, and how important the various outcomes of interest are.

Scoping: Lessons from environmental impact assessment

Community member post by Peter R. Mulvihill


What can we learn about the role and importance of scoping in the context of environmental impact assessment?

“Closed” versus “open” scoping

I am intrigued by the highly variable approaches to scoping practice in environmental impact assessment and the considerable range between “closed” approaches and more ambitious and open exercises. Closed approaches to scoping tend to narrow the range of questions, possibilities and alternatives that may be considered in environmental impact assessment, while limiting or precluding meaningful public input. Of course, the possibility of more open scoping is sometimes precluded beforehand by narrow terms of reference determined by regulators.

When scoping is not done well, it inevitably compromises subsequent steps in the process.

Making predictions under uncertainty

Community member post by Joseph Guillaume


Prediction under uncertainty is typically seen as a daunting task. It conjures up images of clouded crystal balls and mysterious oracles in shadowy temples. In a modelling context, it might raise concerns about conclusions built on doubtful assumptions about the future, or about the difficulty in making sense of the many sources of uncertainty affecting highly complex models.

However, prediction under uncertainty can be made tractable depending on the type of prediction. Here I describe ways of making predictions under uncertainty for testing which conclusion is correct. Suppose, for example, that you want to predict whether objectives will be met. There are two possible conclusions – Yes and No – so prediction in this case involves testing which of these competing conclusions is plausible.
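One minimal sketch of this idea (the objective, the growth-rate model and all numbers here are hypothetical, chosen only for illustration) is to sample a deeply uncertain parameter across a wide plausible range and check in what fraction of sampled futures the conclusion "objectives met" holds:

```python
import random

def objective_met(growth_rate, years=10, start=100.0, target=150.0):
    """Hypothetical objective: does the stock reach the target within `years`?"""
    return start * (1 + growth_rate) ** years >= target

random.seed(1)
# Deep uncertainty about the growth rate: scan a wide range of plausible values
samples = [random.uniform(0.0, 0.08) for _ in range(10_000)]
met = sum(objective_met(r) for r in samples) / len(samples)
print(f"'objectives met' holds in {met:.0%} of sampled futures")
```

If the conclusion holds across (almost) the whole sampled range, it is robust despite the uncertainty; if, as here, both Yes and No occur in substantial fractions of futures, neither conclusion can be ruled out and the uncertainty genuinely matters for the decision.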