A framework to evaluate the impacts of research on policy and practice

By Laura Meagher and David Edwards


What is meant by impact generation and how can it be facilitated, captured and shared? How can researchers be empowered to think beyond ‘instrumental’ impact and identify other changes generated by their work? How can the cloud of complexity be dispersed so that numerous factors affecting development of impacts can be seen? How can a way be opened for researchers to step back and reflect critically on what happened and what could be improved in the future? How can research teams and stakeholders translate isolated examples of impact and causes of impact into narratives for both learning and dissemination?


We have developed a framework to evaluate research impact in a way that addresses these questions. It has been piloted on 12 case studies led by Forest Research, a government research agency in the UK (Edwards and Meagher 2019), and is likely to be useful to researchers more generally, perhaps especially but not exclusively those in applied fields. To date the framework has been found to be user-friendly and fit for purpose.

Put simply, the framework addresses three questions:

  1. what changed?
  2. why/how did change occur?
  3. so what?

What changed?

There are three sub-questions here: what kind of impact, who changed, and how do we know?

We identify five types of impact:

  1. Instrumental: changes to plans, decisions, behaviours, practices, actions, policies
  2. Conceptual: changes to knowledge, awareness, attitudes, emotions
  3. Capacity-building: changes to skills and expertise
  4. Enduring connectivity: changes to the number and quality of relationships and trust
  5. Culture/attitudes: changes to attitudes towards knowledge exchange and towards research impact itself.

It is worth saying a little more about the last two. Both rest on the assumption that enduring links between researchers and users, and positive attitudes among researchers and stakeholders towards knowledge exchange and research impact more broadly, are conducive to continued collaboration and impact in the future.

As well as recognising types of impact, we need to know who has been influenced. Who changed will typically include one or more of:

  1. Policy-makers: including government agencies and regulatory bodies; local, national and international
  2. Practitioners: public, private, NGOs
  3. Communities: of place or interest, general public
  4. Researchers: within and beyond the project and institution
  5. Other.

The question ‘How do we know?’ requires assessing which indicators and methods should be used, and questions asked, to demonstrate impacts and/or progress towards generation of impacts.

This involves considering how multiple proximate and end-users were influenced in multiple ways, over different timescales, using a more nuanced language of impact and of the processes and factors that generate it. It requires consideration of:

  1. the long timeframes over which the development and diffusion of impacts occur
  2. ambiguities associated with attributing causality
  3. limitations in the relevance of quantitative metrics.

Why/how did change occur?

We identify eight causal factors that influence the impacts of a research project:

  1. Problem-framing: Level of importance; tractability of the problem; active negotiation of research questions; appropriateness of research design.
  2. Research management: Research culture; integration between disciplines and teams; promotion of research services; planning; strategy.
  3. Inputs: Funding; staff capacity and turnover; legacy of previous work; access to equipment and resources.
  4. Outputs: Quality and usefulness of content; appropriate format.
  5. Dissemination: Targeted and efficient delivery of outputs to users and other audiences.
  6. Engagement: Level and quality of interaction with users and other stakeholders; co-production of knowledge; collaboration during design, dissemination and uptake of outputs.
  7. Users: Influence of knowledge intermediaries, e.g., ‘champions’ and user groups; incentives and reinforcement to encourage uptake.
  8. Context: Societal, political, economic, biophysical, climate and geographical factors.

The last two factors – ‘users’ and ‘context’ – lie outside the control of researchers and can be seen as external, while ‘research management’, ‘outputs’, and ‘dissemination’ are all primarily internal factors. The remaining factors – ‘problem-framing’, ‘inputs’, and ‘engagement’ – relate to interactions between the project team and external stakeholders (e.g., potential users and funders), highlighting the significance of cross-cutting interactions.

So what?

The final part of the framework addresses:

  1. What worked? What could (or should) have been done differently?
  2. What could (or should) be done in the future?

These questions explicitly consider what lessons can be learned, encouraging critical reflection that can contribute to decision-making and hence improvement of future impact generation. To achieve this, the impacts and causal factors identified from the previous parts of the framework can be assembled into a narrative (such as a case study) to illuminate the often-complex causal relations between them, expressed across multiple stakeholders, as a means to support learning, decision-making and action.

The goal is not a factual, objective statement, but rather a credible ‘story’, if possible constructed with key stakeholders, that helps participants reach consensus on ‘what changed, why, and so what’.

Final words

In recognition of the vast heterogeneity of impact stories, the evaluation framework is deliberately flexible. Because knowledge mobilisation is composed of complex interactive elements and processes, the framework avoids linearity and artificial sequencing.

Instead, the evaluation framework is composed of conceptual ‘building blocks’ that users can draw upon to construct and analyse their own impact narratives, and, if needed, to develop a suite of indicators of impacts and progress towards impacts. The framework is intended primarily for formative evaluation: our focus is thus on understanding, reflection, improvement and communication rather than on external accountability and decisions about allocation of resources. However, summative approaches to evaluation could also employ the same framework, ideally seeking input from multiple stakeholders and data sources.

Overall, the framework provides a means to transform informal deliberations about impact generation into a process that considers the full range of impact types and causal factors, in a format that supports internal learning and external communication.

What do you think? How could you/your team/your organisation try out the framework?

To find out more:
Edwards, D. M. and Meagher, L. R. (2019). A framework to evaluate the impacts of research on policy and practice: A forestry pilot study. Forest Policy and Economics. (Online): https://doi.org/10.1016/j.forpol.2019.101975

Biography: Laura Meagher PhD is the Senior Partner in the Technology Development Group, an Honorary Fellow at the University of Edinburgh and at the James Hutton Institute and an Associate at the Research Unit for Research Utilisation at the University of St Andrews, all in Scotland, UK. She has spent over 30 years working in the US and the UK with and within research and education institutions, along with industry and government, focussing on strategic change. Two foci are complementary: facilitating change and evaluating results of change efforts. As well as promoting interdisciplinarity, she has evaluated interdisciplinary research programmes, provision and capacity-building schemes, and evaluation mechanisms.

Biography: David Edwards PhD is an environmental social scientist with 25 years’ experience in UK, Europe, Africa and South Asia. He is a member of the Senior Management Team at Forest Research, the research agency of the Forestry Commission, where he is Head of the Centre for Ecosystems, Society and Biosecurity. He is based in Scotland, UK. He manages the research programme ‘Integrating research for policy and practice’ which seeks to understand and enhance the impact of forest-related research upon decision-makers and land managers across the public and private sectors. He has developed and applied a range of frameworks, methods and tools to assess the cultural values associated with forests including deliberative processes with environmental artists, forest managers and local communities to create new public discourses around the cultural meanings and values associated with woodlands.

18 thoughts on “A framework to evaluate the impacts of research on policy and practice”

  1. Thank you for sharing; this is a topic area that we have also been very interested in from the perspective of an agricultural research organisation in New Zealand. Here is a link to a paper we put together with colleagues in CSIRO Australia (Agriculture and Food): https://doi.org/10.1177/1035719X18823567

    • Many thanks for sharing your paper, Helen. I’m surprised it’s taken me this long to discover the Cynefin framework! My experience with the UK environmental sector is that evaluation can be seen as a tick box exercise, but when formative approaches generate meaningful insights from stakeholders, then we all start getting interested – typically because it helps justify research efforts, but sometimes because it leads to deeper dialogue that can shape researchers’ questions, assumptions, goals, outputs, etc. for the next round of research. Our challenge has been to capture these learning moments and show how they arose – something we believe our ongoing impact case studies at Forest Research are beginning to show.

  2. It’s very nice, thank you for sharing! I will consider the causal factors you list, which are very interesting! I use a similar model myself for our research impact analysis at Elrha. Re: your list of impact types, one question: I’m not totally clear on the rationale of 5 – I normally include changes to attitudes of research users rolled up with analysis of ‘enduring connectivity’, because the positive attitudes are a necessity without which enduring connectivity wouldn’t happen (so it’s an outcome, not an impact, I suppose in my framework!). Interested to know what you see as the benefit of splitting up 4 and 5? Would you see improved attitudes as an impact in their own right if they didn’t result in one of the other 4 types? I’m not averse to it, just wondering about the rationale!

    • Glad you find the causal factors interesting! I do value both the fourth and fifth impact types in their own right and differentiate between them in this way:
      – ‘Enduring connectivity’ is about the building of specific individual to specific individual relationships that last (thus making it more likely that two-way dialogues and influences will continue between these individuals).
      – ‘Culture/attitude change’ means that the actors have become more positively oriented toward the overall process of knowledge exchange – they are more likely to become involved in knowledge exchange situations, with anyone (not only the person they may have worked with or built a relationship with) – thus increasing the potential for impacts to occur in the future – an important ‘ripple effect’. This can mean that a stakeholder may participate in collaborative research projects in the future, or that a stakeholder organisation facilitates or rewards such activity. Equally, in evaluating research programmes I have found that, through their experiences, individual academics were planning to incorporate knowledge exchange in their future work, having been bitten by the bug of working with stakeholders (and we know that, in theory at least, academic cultures are changing to push knowledge exchange).

      • Thank you for the reply! This makes sense. I am about to embark on some impact analysis of our past funded portfolio and might consider incorporating these ideas into the qualitative analysis/case studies as I think they are useful. It would be great to continue the dialogue in the future, happy to connect via email if you are interested.

        • Many thanks, Cordelia, for your helpful point about impact types 4 and 5. Just to add to Laura, it can take our researchers a while to fully understand the fifth type – people start by assuming we mean ‘culture/attitudes towards trees and forests etc.’ rather than towards ‘the process of KE and impact’. (In fact, the former would be a nice ‘conceptual impact’.) I agree both ‘4’ and ‘5’ and perhaps even ‘3’ could be seen as outcomes or steps towards instrumental and conceptual impact. We’ve thought hard about this, and believe that all five should be valued in their own right, and that, while a linear sequence of steps might well unfold between them, it is helpful to see all the concepts in the framework as discrete ‘building blocks’ that can be assembled in any number of ways to express the complexity of knowledge mobilization.

  3. Thank you. A simple and clear framework. I wonder though whether we should focus more on how the stock of research knowledge is having an impact rather than the impact of individual research projects.

    • Many thanks! There are certainly many levels at which impact could be sought. In some sense, the twelve case studies generated by Forest Research start to flesh out a view of impacts of a particular stock of research knowledge. I suppose what we are trying to do with the Framework is offer people a way to pull a thread from out of a complex bit of weaving, to be able to draw connections between a finite bit of research (which could be a project, programme or larger) and particular impacts.

      • It’s a very good point, thanks Sean. Just to add to Laura’s response, the case studies were selected to reflect a full range of research interventions from small, discrete consultancies through to major programmes, e.g. tree breeding, and climate change modelling, that have developed over decades and comprised numerous projects funded in different ways. The idea is that the framework would work for a topic, project, research team, programme or agency – but also a user, e.g. a policy-maker, to evaluate and offer feedback on a number of research outputs they have commissioned.

  4. I think “The question ‘How do we know?’” might bear deeper consideration.

    It is one thing to be able to discern cause-effect relationships in retrospect, but we should not assume that they will apply in future, or that they will be a reliable guide to future efforts.

    Some will rest on self-evident relationships, or at least stable mechanisms (even if these are not easily understood without analysis), which will persist long enough to be useful in future work.

    Others will have emerged from the interactions between the elements described as “… ‘users’ and ‘context’ – [that] lie outside the control of researchers and can be seen as external”. The complex manner in which people interact can throw up unpredictable developments that are not reproducible, possibly turning on a small feature that shifted everyone’s attention at a key time and could just as well have gone another way.

    While it is couched in analytical terms, this broadcast includes some interesting examples of how, in real life and in experiments, success is unpredictable when human behaviour plays a significant role in a system:

    “The formula – the new science of success” https://www.abc.net.au/radionational/programs/scienceshow/the-new-science-of-success/11928548

    A shift of viewpoint that places users and their behaviour at the centre, instead of seeing them as external to the system, alters the way we see change and evaluation.

    My own thought processes on the subject are strongly influenced by the Cynefin framework.

    • Thanks for this and for the link. Absolutely, human interactions are supremely variable! Nonetheless, we do believe in the power of learning; that a reflective mind can help in making sense of the past and preparing for the future (however inevitably surprising).

      • It’s always difficult to get this principle across and I don’t think I have managed to do so here.

        One of the key points is to give up the idea of an external observer’s knowledge being definitive, even when it is based on research.

        This is not really a suitable forum in which to take the matter further but I will drop in a reference to an interview that covers this ground obliquely, often the best way to address it https://www.abc.net.au/news/2020-03-05/how-indigenous-thinking-can-save-the-world/12024218

        A few of the points it covers are: the primacy of context; leaving aside the external observer viewpoint to acknowledge that we are part of the system; accepting the legitimacy of multiple truths (not in the Trumpian sense but allowing valid alternative interpretations); the importance of patterns in making sense of the world.

        The ideas are dotted through the interview, starting to flow about 2m30s in.

    • I think your point about the ‘researcher centric’ tendency of impact evaluation is interesting and important. We’ve followed conventions by ‘starting’ with the researcher and ‘finishing’ with the user, but another approach would be to view the mobilization of knowledge from ‘above’ and give equal weight to influences and impacts affecting all stakeholder groups, not just researchers. I think a similar set of impact types and causal factors could still be used but presented in a different way.

  5. Normally I’m wary of frameworks because the world is full of them and they’re almost always theoretical. But this one ticks a lot of boxes – it’s pragmatic, uses plain language and has actually been used.

    • Thank you for your kind comments. I noticed several points that resonated in your link, including the nice phrase ‘strategic adjustment’.
