Five principles for achieving impact

Community member post by Mark Reed

Mark Reed (biography)

What key actions can help research have impact? Interviews with 32 researchers and stakeholders across 13 environmental management research projects led to the five principles and key issues described below (Reed et al., 2014).

1. Design:

  • Understand what everyone wants. This can help in managing expectations of different stakeholders and project members and identifying potential issues/problems early on.
  • Understand the context of the project. Use local characteristics, traditions, norms and past experiences as a starting point for planning the project.
  • Take your time. Knowledge exchange is time consuming if done properly.
  • Design your knowledge exchange activities carefully. Spend time researching the context, the stakeholders, and possible approaches. Design for flexibility, get feedback, and adapt your plans to suit changing circumstances.
  • The early bird catches the worm. Ideally planning and research into the context and stakeholders should begin prior to project commencement.
  • Get buy-in. Ownership and ongoing commitment can be formal (e.g., monetary investment or contracted time to the project) or informal (e.g., regular engagement via social media).
  • Independence. Ensure that the management of the research is seen as independent and neutral, so you can build trust with stakeholders. This can be achieved through a neutral organization leading the process or an independent facilitator running sessions with stakeholders.
  • Mix up your methods. Plan to use a variety of methods for engaging with stakeholders and the public to suit different people’s preferences.
  • The process is as important as the outcome.
  • Resource your impact. Generating impact takes significant time and resources. Budget for a well-designed process, which includes social events, staff time, professional facilitation, refreshments and (in some cases) financial compensation to cover time and expenses for participants.
  • Use knowledge brokers. Identify individuals who play a significant role in your stakeholder community and may be able to act as champions.
  • Visualise your research. Tools that use maps, illustrations, cartoons, drawings, photos and models are particularly successful.

2. Represent:

  • Involve the right people. Make sure power dynamics between individuals are considered and attention is paid to selecting individuals who have the power to make a difference. If there are people or groups who doubt the value of the process, keep them informed and give them the option of joining in later.
  • Not just the usual suspects. People of different ages, genders, backgrounds and cultures bring different knowledge, concerns and perspectives to the table.
  • Understand and create networks. Understand people’s social networks and spend time creating connections both vertically and horizontally within and between relevant organisations.
  • Personal initiative. Many impacts are based on one individual’s initiative, perseverance and hard work; you need at least one individual who is willing to push the process through and maintain momentum.

3. Engage:

  • Away days. Put time aside at the start of the project for the research team and key stakeholders to get to know one another’s expertise, background and languages.
  • Be enthusiastic. Enthusiasm is infectious and can help maintain momentum and achieve long-term involvement of participants, even when outcomes are delayed or mistakes are made.
  • Find out what motivates people. Motivations can include: academic interest, to learn, fear of missing out, financial gain, professional duty, personal promotion, and to support or promote causes they care about. Be honest with participants about what they will gain through participation.
  • Build capacity for engagement. Include basic training activities to improve knowledge exchange and co-production.
  • Build personal relationships. Impact is all about relationships. Taking time to socialize is important.
  • Build trust.
  • Multiple modes of two-way communication. Whether face-to-face or via social media, use the widest possible spectrum of communication media available to you, so that everyone who is interested in your research can engage with you via their preferred mode.
  • Keep in people’s comfort zones. Have meetings in the local area and in a non-threatening, neutral environment. Choose activities (at least initially) that people are comfortable with.
  • Enjoy! Make sure the process is enjoyable and interesting for everyone involved.
  • Keep it simple. A stakeholder steering group may help in ensuring the language and approach is suitable.
  • Work around people’s commitments. Consult with those you want to work with to match your process to their commitments.
  • Manage power dynamics. Recognize that power dynamics play a role in the process; plan for and manage this appropriately.
  • Record. To ensure a transparent, trustworthy process, make sure that your process is properly recorded.
  • Keep your goals in mind. Reiterate research and impact goals throughout the process and keep to deadlines.
  • Respect cultural context. Consider local attitudes to gender, informal livelihoods, social groupings, speaking out in public and so on.
  • Respect local knowledge. Respect local perceptions, choices, and abilities and involve all types of knowledge when setting goals and planning for impact.
  • Share responsibilities. Share out responsibilities and credit in order to help build relationships, trust in the process and foster ownership for those involved.

4. Early impact:

  • Deliver quick wins. Delivery of practical outcomes early can help build trust and relationships, keeping people engaged for the longer-term.
  • Work for mutual benefit. Spend time finding out what people want from the process and try hard to deliver this.

5. Reflect and sustain:

  • Get participant feedback regularly. Use it to adapt techniques and deal with problems as they arise.
  • Make time for reflection.
  • Learn from others who have achieved impact. Visit other projects that successfully delivered impact and speak to people who have carried out similar work to what you are planning.
  • Continuity of involvement. This is especially important for projects dealing with controversy.
  • Maintain momentum. Review sessions, feedback forms and good facilitation can ensure that momentum is maintained.

These principles and issues are summarised in the following figure:




Do these ideas resonate with your experience? Are there other issues that you have found to be important?

References and more information:

Reed, M. S. (2016). The Research Impact Handbook. Fast Track Impact: Aberdeenshire, United Kingdom.

Reed, M. S., Stringer, L. C., Fazey, I., Evely, A. C. and Kruijsen, J. H. J. (2014). Five principles for the practice of knowledge exchange in environmental management. Journal of Environmental Management. 146: 337–345. Online (DOI): 10.1016/j.jenvman.2014.07.021.

Biography: Mark Reed is a Professor of Social Innovation at Newcastle University UK, in a HEFCE (Higher Education Funding Council for England) funded Chair as part of N8 AgriFood. He is based at the Institute for Agri-Food Research & Innovation and the Centre for Rural Economy in the School of Agriculture, Food and Rural Development, and is a Visiting Professor at University of Leeds and Birmingham City University.

ICTAM: Bringing mental models to numerical models

Community member post by Sondoss Elsawah

Sondoss Elsawah (biography)

How can we capture the highly qualitative, subjective and rich nature of people’s thinking – their mental models – and translate it into formal quantitative data to be used in numerical models?

This cannot be addressed by a single method or software tool. We need multi-method approaches that have the capacity to take us through the learning journey of eliciting and representing people’s mental models, analysing them, and generating algorithms that can be incorporated into numerical models.

More importantly, this methodology should allow us to see in a transparent way the progression on this learning journey. This transparency is important to build stakeholders’ confidence in the modelling process, and promote reflection and learning.

The ICTAM method described below was developed in the context of integrated water assessment, but it is more widely applicable. Integrated water assessment is a field that integrates knowledge from various scientific disciplines (e.g., hydrology, economics, and social science) in order to build an understanding of the complex water problems that arise from the interactions between humans and the environment.

Before describing the method, it is worth addressing the question: Why do we need to bring mental models – the subjective, and often incomplete and flawed, assumptions about the surrounding world, biased by personal (e.g., past) experience and external factors (e.g., media) – to integrated water assessment and other models? The answer is that people’s decisions and actions influence water (and other resource) use directly and indirectly. To change people’s resource use, policies need to understand and target the factors that influence how people make decisions, as well as how their decisions affect the biophysical environment, and the feedback effects on future decisions.

The ICTAM method

ICTAM is a step-wise method for bringing qualitative mental models into formal quantitative simulation models. The ICTAM acronym stands for the key methods used throughout the process: Interviews, Cognitive mapping, Time-sequence Unified Modelling Language (UML), All-encompassing framework, and numerical agent-based Models. The figure below shows the steps and outputs.

The process starts by conducting semi-structured interviews with stakeholders. The purpose is to collect data about how people think, interpret information, and make judgements, with minimal intrusion from the researcher.

In the second step, the researcher develops cognitive maps for individuals based on the data collected through interviews. The structure and content of cognitive maps are validated by sharing them with interviewees and seeking their feedback.

Using map structure and content analysis techniques, the researcher merges the individual cognitive maps into a collective map: a unifying view that encompasses the individual views. This is step 3.
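
In computational terms, a cognitive map is often treated as a weighted directed graph, and step 3 can then be prototyped as a graph merge. The sketch below is illustrative only: the concepts, edge weights, and the simple averaging rule are assumptions for demonstration, not the analysis procedure used in the original study.

```python
# Sketch: merging individual cognitive maps into a collective map.
# Each map is a weighted directed graph stored as a dict:
#   (source_concept, target_concept) -> signed influence strength.
# Concepts and weights below are illustrative placeholders.
from collections import defaultdict

def merge_maps(individual_maps):
    """Average the signed edge weights across several cognitive maps."""
    sums = defaultdict(float)
    counts = defaultdict(int)
    for cmap in individual_maps:
        for edge, weight in cmap.items():
            sums[edge] += weight
            counts[edge] += 1
    return {edge: sums[edge] / counts[edge] for edge in sums}

map_a = {("water price", "irrigation use"): -0.8,
         ("irrigation use", "crop yield"): 0.6}
map_b = {("water price", "irrigation use"): -0.4,
         ("rainfall", "crop yield"): 0.9}

collective = merge_maps([map_a, map_b])
print(round(collective[("water price", "irrigation use")], 2))  # -> -0.6
```

A real application would weight or reconcile conflicting edges using the structural and content analyses described above, rather than a plain average.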

In step 4, the collective map is used to develop a sequence of conceptual decision models. The conceptual models are transition objects between conceptual and numerical modelling: they provide more formal, implementation-oriented descriptions of the decision-making process. This step includes three activities: (1) using the UML time-sequence diagramming technique to abstract all functions required to represent the decisions identified in the collective map, (2) identifying possible models and data required to implement the decision functions, and (3) developing a pseudo-code representation of those parts of the conceptual model to be implemented.
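
The pseudo-code activity in step 4 might yield something like the following sketch of a single decision function. The function name, rules, and threshold values are hypothetical placeholders, not taken from the original work.

```python
# Hypothetical decision function abstracted from a collective cognitive
# map: whether a farmer expands irrigation this season. All thresholds
# are placeholders for values that would be specified later (step 5).

def decide_irrigation(water_allocation, expected_price, risk_tolerance):
    """Return True if the agent chooses to expand irrigation."""
    # Rule 1: no expansion without a sufficient water allocation.
    if water_allocation < 0.5:
        return False
    # Rule 2: expand only if the expected benefit justifies the risk.
    expected_benefit = expected_price * water_allocation
    return expected_benefit > (1.0 - risk_tolerance)

print(decide_irrigation(0.8, 1.5, 0.2))  # -> True
```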

In the final step, the researcher uses the conceptual decision-making models to create a detailed agent-based model that can be executed. The pseudo-code is translated into an actual code implementation. For the inner workings of the model, this step involves using additional quantitative data (e.g., from literature reviews) to specify thresholds and functional forms of certain functions used by ‘agents’ in the agent-based model.
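
This final step can be illustrated with a minimal agent-based simulation loop. Everything here (the agent class, the thresholds, the exogenous water-availability driver) is an illustrative assumption; a real ICTAM application would derive these from the conceptual decision models and the additional quantitative data.

```python
# Minimal agent-based model sketch: agents decide at each time step
# using thresholds that, in ICTAM, would come from the conceptual
# decision models. All numbers are illustrative placeholders.
import random

class FarmerAgent:
    def __init__(self, threshold):
        self.threshold = threshold  # decision threshold from step 4
        self.irrigating = False

    def step(self, water_availability):
        # Agents irrigate only when availability exceeds their threshold.
        self.irrigating = water_availability > self.threshold

def run_model(n_agents=10, n_steps=5, seed=42):
    rng = random.Random(seed)
    agents = [FarmerAgent(threshold=rng.uniform(0.3, 0.7))
              for _ in range(n_agents)]
    history = []
    for _ in range(n_steps):
        availability = rng.uniform(0.0, 1.0)  # exogenous driver
        for agent in agents:
            agent.step(availability)
        history.append(sum(a.irrigating for a in agents))
    return history

print(run_model())  # counts of irrigating agents at each time step
```

In a fuller model the driver would come from a hydrological component, closing the feedback loop between decisions and the biophysical environment described earlier.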

ICTAM provides a logical and transparent progression for linking the mental models of stakeholders (qualitative) to formal simulation models (quantitative).

ICTAM accommodates the complexities of human decision making and behaviour, moving beyond simple treatments of human response as a single parameter and simplistic rational assumptions about human cognition and behaviour.

The process is cyclic. At any step, the researcher can revisit past data analysis, examining any inconsistencies and omissions. Depending on the project’s objective and the degree of stakeholder participation in the process, the researcher can share outputs from each step with stakeholders. This can serve multiple purposes, such as data validation, engaging participants in the modelling process, and using a particular output from the process to achieve a learning and communication outcome (e.g., using cognitive maps to communicate to the group about individual mental models, information gaps, and inconsistencies).

What does ICTAM offer modellers?

  1. It leverages the strengths of mixing methods by bringing together two well-established methods: cognitive mapping and agent-based modelling. Cognitive mapping taps into the richness and diversity of subjective mental models and decision making processes. However, the conceptual nature of cognitive mapping limits its capacity to simulate and visualise the effects of decisions over time. Simulation based approaches such as agent-based modelling overcome this limitation.
  2. It is easy to explain modelling artefacts. The graphical format of the cognitive maps, and the fact that they are built using natural everyday language that stakeholders use, make them easy-to-explain tools to communicate about mental models and complex systems between stakeholders, modellers and software developers.
  3. It aggregates individual mental models into collective views. The network structure allows for capturing and visualising complex interactions between system processes. Analysing the structural properties of the maps, along with the content analysis, allows the researcher to integrate different views into composite maps.
  4. It provides modelling clarity and transparency. The progression from qualitative subjective data, to formal UML decision models, and agent simulation provides transparency and clarity. At any point in the modelling process, the modeller can share outputs with decision makers and revisit previous steps.

I am very interested to hear of other examples of how people deal with the challenges of eliciting and analysing mental models, especially when the objective is to develop numerical policy assessment models. I would also be excited to hear of case studies where ICTAM could be applied and further developed.

To find out more, see:

Elsawah, S., Guillaume, J. H. A., Filatova, T., Rook, J., and Jakeman, A. J. (2015). A methodology for eliciting, representing, and analysing stakeholder knowledge for decision making on complex socio-ecological systems: From cognitive maps to agent-based models. Journal of Environmental Management, 151, 500-516. Online (DOI): 10.1016/j.jenvman.2014.11.028.

Biography: Sondoss Elsawah is a senior lecturer at the University of New South Wales, Canberra. She comes from an operations research background. Her research focuses on the development and use of multi-method approaches to support learning and decision making in complex socio-ecological and socio-technical decision problems. Application areas include natural resource management and defence capability management. Her recent work focuses on designing and conducting laboratory experiments to examine the effectiveness of simulation models in understanding how people make decisions about dynamic decision making problems.

This blog post is one of a series resulting from the first meeting in March 2016 of the Core Modelling Pursuit. This pursuit is part of the theme Building Resources for Complex, Action-Oriented Team Science funded by the National Socio-Environmental Synthesis Center (SESYNC).

A governance compass

Community member post by Tim Gieseke

Tim Gieseke (biography)

How can we better understand governance when dealing with complex social and environmental issues? Here I describe a set of concepts that I have found useful: a governance compass. The aim is to provide guidance for organizations to align partnerships, accountability, equity, ownership and value at the ‘point of service’. The ‘point of service’ varies. For human health, it is the patient. In life-long learning, it is the professional. In agricultural sustainability, it is the landscape.

The governance compass identifies governance actors and their roles; governance styles and how they combine into a footprint; and finally how these combine with tasks into a governance framework. Although the compass has been developed for agricultural issues, it has broader relevance.

Participatory processes and participatory modelling: The sustainable procedure framework

Community member post by Beatrice Hedelin

Beatrice Hedelin (biography)

How can we resolve debates about participatory processes between proponents and sceptics? What role can participatory modelling play in improving participatory processes?

Proponents argue for the merits of participatory processes, which include learning; co-production of knowledge; development of shared understanding of a problem and shared goals; creation of trust; and local power and ownership of a problem.

Sceptics point to evidence of inefficient, time-consuming participatory processes that escalate conflict and mistrust. They also highlight democratic problems; lack of transparency; and powerful actors that benefit in relation to weaker ones such as the unorganized, poor, and uneducated.

Harnessing analogies for creativity and problem solving

Community member post by Christian Schunn

Christian Schunn (biography)

What is an analogy? How can analogies be used to work productively across disciplines in teams?

We know from the pioneering work of Kevin Dunbar (1995), in studying molecular biology labs, that analogies were a key factor in why multidisciplinary labs were much more successful than labs composed of many researchers from the same backgrounds. What is it about analogies that assists multi- and interdisciplinary work?

Where are the stakeholders in implementation science?

Community member post by Allison Metz and Annette Boaz

Allison Metz (biography)

Should implementation science make more room for consultation, collaboration and co-creation with stakeholders? Would finding more active roles for stakeholders in implementation science be a promising approach to increasing the use of research evidence for improvements in policy and services?

The goal of implementation science is to promote the sustainable implementation of research evidence at scale to improve population outcomes, especially in health and human services. Nevertheless, the mobilization of research evidence on the frontlines of health and human services has been quite limited, especially in public agencies serving the vast majority of consumers.