Agent-based modelling for knowledge synthesis and decision support

Community member post by Jen Badham

Jen Badham (biography)

The most familiar models are predictive, such as those used to forecast the weather or plan the economy. However, models have many other uses, and different modelling techniques are more or less suitable for particular purposes.

Here I present an example of how a game and a computerised agent-based model have been used for knowledge synthesis and decision support.

The game and model were developed by a team from the Centre de Coopération Internationale en Recherche Agronomique pour le Développement (CIRAD), a French agricultural research organisation with an international development focus. The issue of interest was land use conflict between crop and cattle farming in the Gnith community in Senegal (D’Aquino et al. 2003).

Agent-based modelling is particularly effective where understanding is more important than prediction. This is because agent-based models can represent the real world in a very natural way, making them more accessible than some other types of models.
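
To make the agent-based pattern concrete, here is a minimal sketch in Python with hypothetical rules and parameters (not the CIRAD game or model): herder agents move cattle across a gridded landscape by a simple local rule, and a conflict event is counted whenever cattle enter a cropped cell.

```python
import random

# A toy agent-based sketch with hypothetical rules (not the CIRAD game or
# model): herders move cattle across a gridded landscape by a simple local
# rule; a conflict event is counted whenever cattle enter a cropped cell.

random.seed(1)

GRID = 10  # cells along one side of a square landscape
cropped = {(random.randrange(GRID), random.randrange(GRID)) for _ in range(20)}

class Herder:
    def __init__(self):
        self.pos = (random.randrange(GRID), random.randrange(GRID))

    def step(self):
        # Local rule: move one cell in a random direction, wrapping at edges.
        x, y = self.pos
        dx, dy = random.choice([(-1, 0), (1, 0), (0, -1), (0, 1)])
        self.pos = ((x + dx) % GRID, (y + dy) % GRID)

herders = [Herder() for _ in range(5)]
conflicts = 0
for tick in range(100):
    for h in herders:
        h.step()
        if h.pos in cropped:
            conflicts += 1  # cattle on cropland

print(f"Conflict events over 100 ticks: {conflicts}")
```

Even a toy like this shows why the representation feels natural: stakeholders can recognise themselves in the agents and debate the rules directly, which is what makes the approach useful for knowledge synthesis rather than prediction.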

Complexity and agent-based modelling

Community member post by Richard Taylor and John Forrester

Richard Taylor (biography)

Policy problems are complex and – while sometimes simple solutions can work – complexity tools and complexity thinking have a major part to play in planning effective policy responses. What is ‘complexity’ and what does ‘complexity science’ do? How can agent-based modelling help address the complexity of environment and development policy issues?

Complexity

At the most obvious level, one can take 'complexity' to cover all systems that are not simple, by which we mean systems that can be influenced but not controlled. Complexity can be examined through complexity science and complex system models.
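
As a minimal illustration of 'influenced but not controlled' (a standard textbook example, not taken from the post), the logistic map shows how trajectories of even a one-line system diverge from almost identical starting points:

```python
# A standard textbook illustration (not from the post): the logistic map.
# The system can be influenced by nudging its starting point, but not
# controlled, because tiny differences grow until trajectories diverge.

def logistic(x, r=4.0):
    return r * x * (1 - x)

a, b = 0.2, 0.2 + 1e-9  # two almost identical initial conditions
for step in range(40):
    a, b = logistic(a), logistic(b)

print(f"After 40 steps: {a:.6f} vs {b:.6f}")
```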

Models as ‘interested amateurs’

Community member post by Pete Barbrook-Johnson

Pete Barbrook-Johnson (biography)

How can we improve the often poor interaction and lack of genuine discussions between policy makers, experts, and those affected by policy?

As a social scientist who makes and uses models, an idea from Daniel Dennett’s (2013) book ‘Intuition Pumps and Other Tools for Thinking’ struck a chord with me. Dennett introduces the idea of using lay audiences to aid and improve understanding between experts. Dennett suggests that including lay audiences (which he calls ‘curious nonexperts’) in discussions can entice experts to err on the side of over-explaining their thoughts and positions. When experts are talking only to other experts, Dennett suggests they under-explain, not wanting to insult others or look stupid by going over basic assumptions. This means they can fail to identify areas of disagreement, or to reach consensus, understanding, or conclusions that may be constructive.

For Dennett, the 'curious nonexperts' are undergraduate philosophy students, to be included in debates between professors. For me, the book sparked the idea that models could be 'curious nonexperts' in policy debates and processes. I prefer and use the term 'interested amateurs' over 'curious nonexperts', simply because the word 'amateur' seems slightly more insulting towards models!

The ‘methods section’ in research publications on complex problems – Purpose

Community member post by Gabriele Bammer

Gabriele Bammer (biography)

Do we need a protocol for documenting how research tackling complex social and environmental problems was undertaken?

Usually when I read descriptions of research addressing a problem such as poverty reduction or obesity prevention or mitigation of the environmental impact of a particular development, I find myself frustrated by the lack of information about what was actually done. Some processes may be dealt with in detail, but others are glossed over or ignored completely.

For example, often such research brings together insights from a range of disciplines, but details may be scant on why and how those disciplines were selected, whether and how they interacted and how their contributions to understanding the problem were combined. I am often left wondering about whose job it was to do the synthesis and how they did it: did they use specific methods and were these up to the task? And I am curious about how the researchers assessed their efforts at the end of the project: did they miss a key discipline? would a different perspective from one of the disciplines included have been more useful? did they know what to do with all the information generated?
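
One hypothetical shape such a documentation protocol could take (the field names below are illustrative, not a proposed standard) is a structured record that answers exactly these questions:

```python
from dataclasses import dataclass

# A hypothetical sketch of a machine-readable 'methods record'; the field
# names are illustrative, not a proposed standard.

@dataclass
class MethodsRecord:
    problem: str
    disciplines: list[str]
    selection_rationale: str       # why and how the disciplines were chosen
    synthesis_method: str          # how their contributions were combined
    synthesis_lead: str            # whose job the synthesis was
    post_hoc_assessment: str = ""  # e.g., gaps or missed perspectives

record = MethodsRecord(
    problem="obesity prevention",
    disciplines=["epidemiology", "economics", "urban planning"],
    selection_rationale="disciplines named in the funding call",
    synthesis_method="facilitated workshops plus a shared system map",
    synthesis_lead="project coordinator",
    post_hoc_assessment="a behavioural-science perspective was missed",
)
print(record)
```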

ICTAM: Bringing mental models to numerical models

Community member post by Sondoss Elsawah

Sondoss Elsawah (biography)

How can we capture the highly qualitative, subjective and rich nature of people’s thinking – their mental models – and translate it into formal quantitative data to be used in numerical models?

This cannot be addressed by a single method or software tool. We need multi-method approaches that have the capacity to take us through the learning journey of eliciting and representing people’s mental models, analysing them, and generating algorithms that can be incorporated into numerical models.

More importantly, this methodology should allow us to see, in a transparent way, the progress along this learning journey.
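
One common bridge from elicited mental models to numbers is a fuzzy cognitive map. The sketch below (illustrative concepts and weights, not the ICTAM methodology itself) turns perceived influences from interviews into a small iterated numerical model:

```python
import math

# A sketch of one common bridge from mental models to numbers (illustrative
# concepts and weights, not the ICTAM methodology itself): a fuzzy cognitive
# map, where elicited concepts become nodes, perceived influences become
# signed weights, and iterating the map yields a simple numerical model.

concepts = ["rainfall", "crop yield", "income"]
# weights[i][j]: perceived influence of concept i on concept j, in [-1, 1]
weights = [
    [0.0, 0.8, 0.0],  # rainfall strongly raises crop yield
    [0.0, 0.0, 0.6],  # crop yield raises income
    [0.0, 0.0, 0.0],  # income influences nothing here
]

def squash(x):
    return 1 / (1 + math.exp(-x))  # keep activations in (0, 1)

state = [0.9, 0.5, 0.5]  # scenario: high rainfall
for _ in range(10):
    # Each concept keeps a memory of itself and adds weighted influences.
    state = [squash(state[j] + sum(state[i] * weights[i][j] for i in range(3)))
             for j in range(3)]

for name, value in zip(concepts, state):
    print(f"{name}: {value:.2f}")
```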

Choosing a model: If all you have is a hammer…

Community member post by Jen Badham

Jen Badham (biography)

As a modeller, I often get requests from research or policy colleagues along the lines of 'we want a model of the health system'. It's relatively easy to recognise that 'health system' is too vague and needs explicit discussion about the specific issue to be modelled. It is much less obvious that the term 'model' also needs to be refined. In practice, different modelling methods are more or less appropriate for different questions. So how is the modelling method chosen?
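
As a toy illustration of the point (the pairings below are hypothetical, not a recommendation table from the post), choosing a method can be thought of as matching the kind of question asked to a technique:

```python
# A toy illustration only: hypothetical pairings of question types with
# modelling methods, not a recommendation table from the post.

METHOD_FOR_QUESTION = {
    "project a trend forward": "statistical forecasting",
    "trace entities through a process": "discrete event simulation",
    "study feedback between aggregate stocks": "system dynamics",
    "study interactions of diverse actors": "agent-based modelling",
}

question = "study interactions of diverse actors"
print(f"{question!r} -> {METHOD_FOR_QUESTION[question]}")
```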