Considerations for choosing frameworks to assess research impact

By Elena Louder, Carina Wyborn, Christopher Cvitanovic and Angela T. Bednarek


What should you take into account in selecting among the many frameworks for evaluating research impact?

In our recent paper (Louder et al., 2021) we examined the epistemological foundations and assumptions of several frameworks and drew out their similarities and differences to help improve the evaluation of research impact. In doing so we identified four key principles or ‘rules of thumb’ to help guide the selection of an evaluation framework for application within a specific context.

1. Be clear about underlying assumptions of knowledge production and definitions of impact

Clarifying from the start how research activities are intended to achieve impact is an important precursor to designing an evaluation. Furthermore, defining what you mean by impact is an important first step in selecting indicators to know whether you’ve achieved it.

For example, a research organization should be clear up front whether changes in attitude, problem framing, and/or relationships count as impact. This must involve outlining why certain activities are expected to contribute to impact, and what those impacts might look like. If, for instance, it is assumed that interactions between stakeholders lead to improved relationships, indicators can usefully be developed to evaluate the nature, frequency, quality, etc., of interactions.

This epistemological clarity helps define what counts as impact, and what counts as robust evidence of that impact.

2. Attempt to measure intermediate and process-related impacts

Whether this means expanding the definition of impact, evaluating quality, or assessing ‘contribution to impact,’ select indicators that capture nuanced changes in problem framing, understanding, or mindsets.

Evaluations should at least partially attempt to capture the ‘below the tip of the iceberg’ knowledge co-production activities. This could be done by focusing at least part of an evaluation on measuring perspectives of participants (via interview or survey) regarding changes such as increased capacity, changes in expertise and knowledge, and shifts in how a problem is understood or framed.

Attention to such intermediate impacts is important as they may serve as building blocks for end-of-process outcomes, and also enable the evaluation of ‘progress markers’ along a theory of change to identify whether a project is tracking towards intended outcomes.

3. Balance emergent and expected outcomes

While it is important to be clear on expectations and aspirations, evaluations should include at least some open-ended component that captures unexpected outcomes, both positive and negative.

For example, rather than using rubrics with pre-determined criteria, ask instead: What changed? Who changed? How do you know? Such an open-ended approach allows unexpected outcomes to surface.

4. Balance indicators that capture nuance and those that simplify

Evaluations which assign numerical scores to impact may be extremely useful for project managers and large research organizations. However, aggregated scores can sometimes overshadow conceptual changes in the way a problem is framed, or subtle changes resulting from knowledge co-production. Over-emphasis on simple evaluations can also lead to ‘gaming the indicators,’ and provide perverse incentives to tailor research to meet the indicators.

While indicators that can be quantitatively scored (for a hypothetical example, assigning 1-10 scores on dimensions like suitable context, legitimacy and relevance, project outputs) may be easy to use, especially for comparing different research projects, such an approach might not register why or how changes occurred.
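The trade-off can be illustrated with a small sketch. The dimensions and 1-10 scores below are invented for illustration, not drawn from the paper: two projects can receive identical aggregate scores while having very different profiles, which is exactly the nuance a single number conceals.

```python
# Hypothetical illustration of how aggregation can hide nuance.
# Dimension names and scores are invented, not from Louder et al. (2021).

def aggregate(scores):
    """Mean of the 1-10 dimension scores, rounded to one decimal place."""
    return round(sum(scores.values()) / len(scores), 1)

# Strong on context and relevance, weak on legitimacy and outputs.
project_a = {"suitable context": 9, "legitimacy": 3, "relevance": 9, "outputs": 3}
# Uniformly middling across every dimension.
project_b = {"suitable context": 6, "legitimacy": 6, "relevance": 6, "outputs": 6}

print(aggregate(project_a))  # 6.0
print(aggregate(project_b))  # 6.0 - identical score, very different stories
```

The aggregate score makes the two projects look interchangeable, yet the reasons behind each score, and what a funder might do about them, differ entirely.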

The same is true for the number of indicators – fewer indicators may make evaluation simpler and more convenient, whereas more indicators may deliver more detailed information. This tension must be considered when designing an evaluation.

Closing questions

While these four considerations were derived from a review of frameworks used in the environmental sciences, how well can they be applied in other domains and disciplines? Within other domains and disciplines, are there additional considerations that must be accounted for? Are there other key considerations that you would add based on your experiences of impact evaluation?

Rules of thumb for selecting a framework to evaluate research impact

To find out more see:
Louder, E., Wyborn, C., Cvitanovic, C. and Bednarek, A. T. (2021). A synthesis of the frameworks available to guide evaluations of research impact at the interface of environmental science, policy and practice. Environmental Science and Policy, 116: 258-265. (Online – open access) (DOI): https://doi.org/10.1016/j.envsci.2020.12.006

Biography: Elena Louder is a PhD student in the department of Geography, Development and Environment at the University of Arizona, Tucson, USA. Her research interests include political ecology, the politics of renewable energy development, knowledge co-production, and biodiversity conservation.

Biography: Carina Wyborn PhD is an interdisciplinary social scientist with a background in science and technology studies, and human ecology. She is based at the Institute for Water Futures at The Australian National University in Canberra, where she researches the science, policy, and politics of environmental futures and capacities that enable future-oriented decision making, in the context of uncertainty.

Biography: Chris Cvitanovic PhD is a transdisciplinary marine scientist at The Australian National University in Canberra. His research is focused on improving the relationship between science, policy and practice to enable evidence-informed decision-making for sustainable ocean futures.

Biography: Angela Bednarek PhD directs the Evidence Project at The Pew Charitable Trusts in Washington DC in the USA. The Evidence Project is a cross-cutting initiative aimed at increasing the use of evidence in policy and practice by marshalling funders, practitioners, scholars, and others to demonstrate effective practice and spur systemic changes in research and evidence use infrastructure.

8 thoughts on “Considerations for choosing frameworks to assess research impact”

  1. This is a very interesting conclusion that you draw. I think it would now be interesting to operationalise these considerations for the practice of transdisciplinary research, and especially for funders of transdisciplinary research, who always ask for indicators and impact. Here we need much more dialogue with funders to clarify the diversity of societal impacts of transdisciplinary research, which is solution-oriented and has diverse forms of impacts, outcomes, and results. Thank you for your thoughts.

  2. Excellent paper and post! Thank you for bringing more attention to the needed work of evaluating research impact, which has been especially missing when evaluating interdisciplinary research. One tricky thing about doing this is identifying what is valuable in the research & its impacts. Any recommendations for how to identify, integrate, and/or adjudicate among plural values in these evaluations? It can’t just be “Whatever the project researchers decide is valuable,” right? Because what if they totally disvalue racial equity, democratic engagement, community & individual sovereignty, etc?

    • Thanks Bethany, excellent point. Evaluations should always involve a conversation between those being evaluated, those doing the evaluation, and those funding the research/evaluation about the intended impact of an initiative, how it will be measured, with what indicators, and so on. On that basis, I don’t think it is ever just up to project researchers to decide, and in that regard the answer to your first question is relatively easy – ideas should be exposed, integrated, and discussed at the outset, and if there is a desire to measure more than one thing to capture a plurality of values, then provided there is agreement within the team, that would be fine. Your provocative question is more difficult, because it concerns higher-order normative standards that may or may not be part of the focus/remit of a project in its immediate intention. For example, should a biodiversity conservation research project trying to improve frog habitat in a strict protected area be required to address these issues? Some would argue yes, others would say no. So then it would come down either to the individual motivations of those involved with the research, or to those funding the research, to make it clear that these process-oriented aspects of the research matter irrespective of the topic of the project.

      Does that help? Make sense? These are tricky things to grapple with, particularly in the absence of clear rules or normative guidelines about what is appropriate or not that transcend fields.

  3. Thanks for this – it looks very useful and I look forward to reading it in full. We have recently kicked off a 5-year research programme and, while we set out a very high-level theory of change along with our pathway to impact statement, the programme is fundamentally based on transdisciplinary working and co-production with multiple stakeholders, so impacts (potential changes in policy and practice) and key stakeholders are yet to be identified. We will be starting to develop a framework impact strategy in the coming months. https://wellcomeopenresearch.org/articles/6-30 / https://truud.ac.uk/

    • It’s good to know about your project. We look forward to hearing more as you and your team develop the impact strategy.

  4. Thanks for these insights.

    Two additional thoughts: 1. When research attempts to influence policy, a bit of ‘practical wisdom’ is helpful. See: https://evaluationandcommunicationinpractice.net/evaluating-policy-impact-an-arena-for-practical-wisdom/ and I wonder how to include this in your recommendations.
    2. A second item is the question of what actually constitutes ‘impact’ in complex and dynamic contexts. See: https://evaluationandcommunicationinpractice.net/what-is-impact/ So often the impact cannot be attributed to a single (research) effort.

    • Thank you for these thoughts and further resources. We share your perspective that what counts as ‘impact’ is quite variable and greatly determines how you go about evaluating it; indeed, that’s one of the central themes we wrestle with and explore in the paper. Thanks again for engaging with our work.

