By Niki Ellis, Anne-Maree Dowd, Tamika Heiden and Gabriele Bammer
What does it take for research to be impactful? How should research impact be assessed? How much responsibility for impact should rest with researchers and how much with government, business and/or community partners?
We present five key insights based on our experience in achieving research impact in Australia:
- Planning for impact is essential
- Quality relationships trump all other factors
- Assessment of research contributions should be tailored to the type of research and based on team, not individual, performance
- Researchers alone cannot be responsible for achieving impact
- Be open to continual learning
1. Planning for impact is essential
The benefits of planning for impact, and the time and effort it requires, are often underestimated. This involves:
- addressing key questions, including:
- What is the theory of change, specifically what does the team know and assume about how change happens?
- Who will be the key beneficiaries of the research? Are they in government, business and/or civil society? Which specific departments, organisations or individuals?
- Will the research influence change in policy or in practice or both?
- What are the critical pathways for achieving impact in terms of inputs, activities, outputs and outcomes?
- ensuring that research teams have the capabilities and capacities that they need to deliver according to the theory of change and the identified critical pathways
- collaboratively setting research priorities, and identifying “value,” with everyone concerned with achieving impact, especially researchers, those who will use the research (decision makers), funders and community stakeholders
- understanding ‘absorptive capacity’, i.e., the ability of research teams and the organisations benefiting from the research to assimilate and use new knowledge. This acknowledges that attention, rather than information, is the limiting factor in producing impact
- ensuring that the research leadership is oriented to delivering impact and that there are champions for impact at critical stages in the innovation system
- using effective frameworks and tools, e.g., the Consolidated Framework for Implementation Research (CFIR, https://cfirguide.org/)
- revisiting and adjusting plans at regular (e.g., 6-monthly) intervals, recognising that circumstances will change.
2. Quality relationships trump all other factors
Relationships are important within the research team, as well as between researchers and all external partners.
Within the research team this involves:
- appreciating each team member’s strengths, especially the expertise they can contribute to achieving impact (e.g., they may have long-established relationships with particular stakeholders, be good at visualisation or other forms of communication, or be expert in commercialisation).
With external partners this involves:
- developing respectful relationships that seek to balance the short-term needs of decision makers and the long-term horizon of effective research
- managing power imbalances (e.g., between funders and researchers), so that they do not impede the open exchange of ideas
- working with community stakeholders and others who identify particular problems (such as clusters of illness or environmental pollution) to develop an evidence base to establish the legitimacy and strength of their concerns.
3. Assessment of research contributions should be tailored to the type of research and based on team, not individual, performance
The impact of “blue skies” or basic research requires different assessment from that of applied research. Effectively assessing the impact of “blue skies” research involves:
- encouraging researchers to “hand over their baby” to applied researchers and/or those who will implement the key finding or idea
- assessing how effectively blue-skies researchers stay involved in the application of their finding, including to trouble-shoot problems and adapt it to new circumstances.
Further, effectively assessing research impact requires a cultural shift in research organisations to recognise that:
- the capacity to deliver impact is an organisational asset (like other physical and financial assets)
- teams, not individuals, make impact happen.
Rewards and performance management need to be adjusted accordingly and need to be seen as effective on the ground, not just in policy documents. Assessing teams rather than individuals involves:
- developing reward and performance measures for teams
- evaluating individuals on how they contributed to a team’s efforts, in particular, did they contribute their expertise and skills to the best of their ability?
4. Researchers alone cannot be responsible for achieving impact
At present the primary responsibility for research impact rests with researchers, even though they have little control over many aspects of the innovation system. Instead, everyone in the innovation system, especially funders and decision makers, should be responsible for their role, singly and together, in achieving impact. Researchers, for example, can only control whom they seek to interact with; they cannot be held solely responsible for how those interactions pan out.
While researchers increasingly understand and focus on impact, a similar shift has yet to occur in business, government and civil society. Further, while researchers are building their understanding of how business, government and civil society work, those sectors often still poorly understand the requirements of high-quality research, or even how to use evidence effectively.
It is also important to keep in mind that research impact may sometimes be unpopular with (or even strongly opposed by) government, business and civil society, as researchers also have a role in being critical of current policies and programs, and pushing for improvement. It is essential that this critical role remains part of researchers’ social licence to operate.
5. Be open to continual learning
There is still a lot that is unknown about achieving impact and this requires the ability to build on experience, including failure, to improve understanding about:
- the complexity of change
- how best to build relationships between decision makers and researchers; despite their importance, many unknowns remain, including which level in the system is most effective, e.g., national, state, sector or individual
- the most effective ways for researchers to help raise issues of concern to communities that governments and/or business may prefer to ignore, including how best to respond when ‘dirty’ tactics are used
- how to effectively overcome biases and recognise diversity, including in gender, culture and power
- how to evaluate failures.
What’s your experience?
Do these insights resonate with you? Are there other issues that you would add? Are there areas where your experience differs?
To find out more:
These ideas were presented at the opening panel “Optimising implementation for impact” of the Impact Frameworks and Cultural Change Conference, held online on 25-26 February 2021. The video of the panel is available at https://impactframeworks.info/sessions/optimising-implementation-for-impact/, with more details about the conference, including all the webinar videos and podcasts, at https://impactframeworks.info/.
Biography: Niki Ellis MBBS is an Adjunct Professor in the Department of Epidemiology and Preventive Medicine at Monash University in Melbourne, Australia. She works as a consultant with organisations to strengthen the evidence-base for their policies and practice, as well as to improve their ability to demonstrate impact.
Biography: Anne-Maree Dowd PhD is the Executive Manager for Performance and Evaluation at the CSIRO (Commonwealth Scientific and Industrial Research Organisation). She is based in Brisbane, Australia. She manages investment planning, tracking and impact assessment. Along with scientific research capabilities, she has strategic management, planning and performance expertise.
Biography: Tamika Heiden PhD is the founder of the Research Impact Academy which provides consultancy services to support the creation, capture and communication of research impact. She has worked in health research and research coordination for over 15 years focused on translation and impact.
Biography: Gabriele Bammer PhD is a professor at The Australian National University in Canberra in the Research School of Population Health’s National Centre for Epidemiology and Population Health. She is developing the new discipline of Integration and Implementation Sciences (i2S) to improve research strengths for tackling complex real-world problems through synthesis of disciplinary and stakeholder knowledge, understanding and managing diverse unknowns, and providing integrated research support for policy and practice change.