Integration and Implementation Insights

A framework to evaluate the impacts of research on policy and practice

By Laura Meagher and David Edwards


What is meant by impact generation and how can it be facilitated, captured and shared? How can researchers be empowered to think beyond ‘instrumental’ impact and identify other changes generated by their work? How can the cloud of complexity be dispersed so that numerous factors affecting development of impacts can be seen? How can a way be opened for researchers to step back and reflect critically on what happened and what could be improved in the future? How can research teams and stakeholders translate isolated examples of impact and causes of impact into narratives for both learning and dissemination?

We have developed a framework to evaluate research impact in a way that addresses these questions. It has been piloted on 12 case studies led by Forest Research, a government research agency in the UK (Edwards and Meagher 2019), and is likely to be useful to researchers more generally, perhaps especially but not exclusively those in applied fields. To date, the framework has been found to be user-friendly and fit for purpose.

Put simply, the framework addresses three questions:

    1. What changed?
    2. Why/how did change occur?
    3. So what?

What changed?

There are three sub-questions here: what kind of impact, who changed, and how do we know?

We identify five types of impact:

  1. Instrumental: changes to plans, decisions, behaviours, practices, actions, policies
  2. Conceptual: changes to knowledge, awareness, attitudes, emotions
  3. Capacity-building: changes to skills and expertise
  4. Enduring connectivity: changes to the number and quality of relationships and trust
  5. Culture/attitudes towards knowledge exchange and towards research impact itself.

It is worth saying a little more about the last two. Both rest on the assumption that enduring links between researchers and users, and positive attitudes among researchers and stakeholders towards knowledge exchange and research impact more broadly, are conducive to continued collaboration and impact in the future.

As well as recognising types of impact, we need to know who has been influenced. Who changed will typically include one or more of:

  1. Policy-makers: including government agencies and regulatory bodies; local, national and international
  2. Practitioners: public, private, NGOs
  3. Communities: of place or interest, general public
  4. Researchers: within and beyond the project and institution
  5. Other.

The question ‘How do we know?’ requires assessing which indicators and methods should be used, and questions asked, to demonstrate impacts and/or progress towards generation of impacts.

This involves considering how multiple proximate users and end-users were influenced in multiple ways, over different timescales, using a more nuanced language of impact and of the processes and factors that generate it. It requires consideration of:

  1. the long timeframes over which the development and diffusion of impacts occur
  2. ambiguities associated with attributing causality
  3. limitations in the relevance of quantitative metrics.

Why/how did change occur?

We identify eight causal factors that influence the impacts of a research project:

  1. Problem-framing: Level of importance; tractability of the problem; active negotiation of research questions; appropriateness of research design.
  2. Research management: Research culture; integration between disciplines and teams; promotion of research services; planning; strategy.
  3. Inputs: Funding; staff capacity and turnover; legacy of previous work; access to equipment and resources.
  4. Outputs: Quality and usefulness of content; appropriate format.
  5. Dissemination: Targeted and efficient delivery of outputs to users and other audiences.
  6. Engagement: Level and quality of interaction with users and other stakeholders; co-production of knowledge; collaboration during design, dissemination and uptake of outputs.
  7. Users: Influence of knowledge intermediaries, e.g. ‘champions’ and user groups; incentives and reinforcement to encourage uptake.
  8. Context: Societal, political, economic, biophysical, climate and geographical factors.

The last two factors – ‘users’ and ‘context’ – lie outside the control of researchers and can be seen as external, while ‘research management’, ‘outputs’ and ‘dissemination’ are primarily internal factors. The remaining factors – ‘problem-framing’, ‘inputs’ and ‘engagement’ – relate to interactions between the project team and external stakeholders (e.g. potential users and funders), highlighting the significance of cross-cutting interactions.

So what?

The final part of the framework addresses:

  1. What worked? What could (or should) have been done differently?
  2. What could (or should) be done in the future?

These questions explicitly consider what lessons can be learned, encouraging critical reflection that can contribute to decision-making and hence to the improvement of future impact generation. To achieve this, the impacts and causal factors identified in the previous parts of the framework can be assembled into a narrative (such as a case study) that illuminates the often-complex causal relations between them, across multiple stakeholders, as a means to support learning, decision-making and action.

The goal is not a factual, objective statement, but rather a credible ‘story’, if possible constructed with key stakeholders, that helps participants reach consensus on ‘what changed, why, and so what’.

Final words

In recognition of the vast heterogeneity of impact stories, the evaluation framework is deliberately flexible. Because knowledge mobilisation is composed of complex interactive elements and processes, the framework avoids linearity and artificial sequencing.

Instead, the evaluation framework is composed of conceptual ‘building blocks’ that users can draw upon to construct and analyse their own impact narratives, and, if needed, to develop a suite of indicators of impacts and progress towards impacts. The framework is intended primarily for formative evaluation: our focus is thus on understanding, reflection, improvement and communication rather than on external accountability and decisions about allocation of resources. However, summative approaches to evaluation could also employ the same framework, ideally seeking input from multiple stakeholders and data sources.

Overall, the framework provides a means to transform informal deliberations about impact generation into a process that considers the full range of impact types and causal factors, in a format that supports internal learning and external communication.

What do you think? How could you/your team/your organisation try out the framework?

To find out more:
Edwards, D. M. and Meagher, L. R. (2019). A framework to evaluate the impacts of research on policy and practice: A forestry pilot study. Forest Policy and Economics. Online (DOI): https://doi.org/10.1016/j.forpol.2019.101975

Biography: Laura Meagher PhD is the Senior Partner in the Technology Development Group, an Honorary Fellow at the University of Edinburgh and at the James Hutton Institute and an Associate at the Research Unit for Research Utilisation at the University of St Andrews, all in Scotland, UK. She has spent over 30 years working in the US and the UK with and within research and education institutions, along with industry and government, focussing on strategic change. Two foci are complementary: facilitating change and evaluating results of change efforts. As well as promoting interdisciplinarity, she has evaluated interdisciplinary research programmes, provision and capacity-building schemes, and evaluation mechanisms.

Biography: David Edwards PhD is an environmental social scientist with 25 years’ experience in UK, Europe, Africa and South Asia. He is a member of the Senior Management Team at Forest Research, the research agency of the Forestry Commission, where he is Head of the Centre for Ecosystems, Society and Biosecurity. He is based in Scotland, UK. He manages the research programme ‘Integrating research for policy and practice’ which seeks to understand and enhance the impact of forest-related research upon decision-makers and land managers across the public and private sectors. He has developed and applied a range of frameworks, methods and tools to assess the cultural values associated with forests including deliberative processes with environmental artists, forest managers and local communities to create new public discourses around the cultural meanings and values associated with woodlands.
