By Niki Ellis, Anne-Maree Dowd, Tamika Heiden and Gabriele Bammer
What does it take for research to be impactful? How should research impact be assessed? How much responsibility for impact should rest with researchers and how much with government, business and/or community partners?
We present five key insights based on our experience in achieving research impact in Australia:
- Planning for impact is essential
- Quality relationships trump all other factors
- Assessment of research contributions should be tailored to the type of research and based on team, not individual, performance
- Researchers alone cannot be responsible for achieving impact
- Be open to continual learning.
1. Planning for impact is essential
The benefits of planning for impact, and the time and effort required, are underestimated. This involves:
- addressing key questions, including:
- What is the theory of change, specifically what does the team know and assume about how change happens?
- Who will be the key beneficiaries of the research? Are they in government, business and/or civil society? Which specific departments, organisations or individuals?
- Will the research influence change in policy or in practice or both?
- What are the critical pathways for achieving impact in terms of inputs, activities, outputs and outcomes?
- ensuring that research teams have the capabilities and capacities that they need to deliver according to the theory of change and the identified critical pathways
- collaboratively setting research priorities, and identifying “value,” with everyone concerned with achieving impact, especially researchers, those who will use the research (decision makers), funders and community stakeholders
- understanding ‘absorptive capacity’, i.e., the ability of research teams and the organisations benefiting from the research to assimilate and use new knowledge. This acknowledges that attention, rather than information, is the limiting factor in producing impact
- ensuring that the research leadership is oriented to delivering impact and that there are champions for impact at critical stages in the innovation system
- using effective frameworks and tools, e.g., the Consolidated Framework for Implementation Research (CFIR, https://cfirguide.org/)
- revisiting and adjusting plans at regular (e.g., 6-monthly) intervals, recognising that circumstances will change.
2. Quality relationships trump all other factors
Relationships are important within the research team, as well as between researchers and all external partners.
Within the research team this involves:
- appreciating each team member’s strengths, especially the expertise they can contribute to achieving impact (e.g., they may have long-established relationships with particular stakeholders, be good at visualisation or other forms of communication, or be expert in commercialisation).
With external partners this involves:
- developing respectful relationships that seek to balance the short-term needs of decision makers and the long-term horizon of effective research
- managing power imbalances (eg., between funders and researchers), so that they do not impede the open exchange of ideas
- working with community stakeholders and others who identify particular problems (such as clusters of illness or environmental pollution) to develop an evidence base to establish the legitimacy and strength of their concerns.
3. Assessment of research contributions should be tailored to the type of research and based on team, not individual, performance
The impact of “blue skies” or basic research requires different assessment from that of applied research. Effectively assessing the impact of “blue skies” research involves:
- encouraging researchers to “hand over their baby” to applied researchers and/or those who will implement the key finding or idea
- assessing how effectively blue-skies researchers stay involved in the application of their finding, including to trouble-shoot problems and adapt it to new circumstances.
Further, effectively assessing research impact requires a cultural shift in research organisations to recognise that:
- the capacity to deliver impact is an organisational asset (like other physical and financial assets)
- teams, not individuals, make impact happen.
Rewards and performance management need to be adjusted accordingly and need to be seen as effective on the ground, not just in policy documents. Assessing teams rather than individuals involves:
- developing reward and performance measures for teams
- evaluating individuals on how they contributed to a team’s efforts, in particular, did they contribute their expertise and skills to the best of their ability?
4. Researchers alone cannot be responsible for achieving impact
At present the primary responsibility for research impact rests with researchers, despite the fact that they have little control over many aspects of the innovation system. Instead, everyone in the innovation system, especially funders and decision makers, should be responsible for their role, singly and together, in achieving impact. Researchers, for example, can only control who they seek to interact with, and cannot be held solely responsible for how those interactions pan out.
While researchers understand and are becoming increasingly focused on impact, a similar shift has yet to occur in business, government and civil society. Further, while researchers are increasingly building their understanding of how business, government and civil society work, those sectors often still poorly understand the requirements of high quality research or even how to effectively use evidence.
It is also important to keep in mind that research impact may sometimes be unpopular with (or even strongly opposed by) government, business and civil society, as researchers also have a role in being critical of current policies and programs, and pushing for improvement. It is essential that this critical role remains part of researchers’ social licence to operate.
5. Be open to continual learning
There is still a lot that is unknown about achieving impact and this requires the ability to build on experience, including failure, to improve understanding about:
- the complexity of change
- how best to build relationships between decision makers and researchers; despite its importance, there are still many unknowns, including the level in the system at which relationship-building is most effective, e.g., national, state, sector or individual
- the most effective ways for researchers to help raise issues of concern to communities that governments and/or business may prefer to ignore, including how best to respond when ‘dirty’ tactics are used
- how to effectively overcome biases and recognise diversity, including in gender, culture and power
- how to evaluate failures.
What’s your experience?
Do these insights resonate with you? Are there other issues that you would add? Are there areas where your experience differs?
To find out more:
These ideas were presented at the opening panel “Optimising implementation for impact” of the Impact Frameworks and Cultural Change Conference held online from February 25-26th 2021. The video of the panel is available at https://impactframeworks.info/sessions/optimising-implementation-for-impact/, with more details about the conference, including all the webinar videos and podcasts, at https://impactframeworks.info/.
Biography: Niki Ellis MBBS is an Adjunct Professor in the Department of Epidemiology and Preventive Medicine at Monash University in Melbourne, Australia. She works as a consultant with organisations to strengthen the evidence-base for their policies and practice, as well as to improve their ability to demonstrate impact.
Biography: Anne-Maree Dowd PhD is the Executive Manager for Performance and Evaluation at the CSIRO (Commonwealth Scientific and Industrial Research Organisation). She is based in Brisbane, Australia. She manages investment planning, tracking and impact assessment. Along with scientific research capabilities, she has strategic management, planning and performance expertise.
Biography: Tamika Heiden PhD is the founder of the Research Impact Academy which provides consultancy services to support the creation, capture and communication of research impact. She has worked in health research and research coordination for over 15 years focused on translation and impact.
Biography: Gabriele Bammer PhD is a professor at The Australian National University in Canberra in the Research School of Population Health’s National Centre for Epidemiology and Population Health. She is developing the new discipline of Integration and Implementation Sciences (i2S) to improve research strengths for tackling complex real-world problems through synthesis of disciplinary and stakeholder knowledge, understanding and managing diverse unknowns, and providing integrated research support for policy and practice change.
16 thoughts on “Five insights on achieving research impact”
Thank you so much for these wonderful suggestions for achieving research impact. I always like it when a post is written by a team rather than an individual. It makes me realize how important teamwork is in achieving impact.
At the beginning of my quest to make change happen in my institute, I asked myself: which do we need to change first, the mindset of researchers or the vision and mission of leaders and policymakers? It became clear to me after a while that leaders needed to be highly aligned and interested in order to do impactful research that solves complex problems. Therefore, fear was my first strategy. By using common metric tools (e.g., measuring their current performance and competitiveness against benchmarks from organizations that represent world’s best practice), I was able to convince them that they needed to do impactful research. They had to establish new research policies, contact key beneficiaries, align with the country’s strategic plan, encourage the building of multidisciplinary teams through research units with specific goals, centralize the research, build on their current strengths, motivate researchers through incentives, plan for effective time management and interdisciplinary educational programs, and overcome obstacles such as the isolation of disciplines and subdisciplines.
Now we need to identify each team member’s strengths, start with experts in each field and gradually involve those who are interested but don’t yet have the expertise, measure the outcomes of current performance and build on them to expand the research infrastructure, and design an effective knowledge co-production atmosphere by setting up specific goals, inter-/transdisciplinary offices, central labs, research performance progress reports, punishment and reward policies, etc.
Time is still required to achieve real impact, but I hope that we are on the right track. Researchers need interdisciplinary education before interdisciplinary collaboration, to understand the impact philosophy that is built on active listening to community and stakeholder problems, trusting each other, recognising team members’ strengths and weaknesses, and respectful relationships among diverse disciplines. I believe that individuals must advocate for change and start the process, but teams can make the change really happen.
Many thanks for those observations, Tarek. They mesh well with ours. Good luck with your endeavours!
Thanks Gabriele, we are trying to do what you taught us, and every time we face hard challenges, we remember your words that patience is one of the secret ingredients for change.
Thanks again, you are a good teacher.
What an interesting and comprehensive comment. One aspect of this that interests me in particular is how best to organise dialogue between policy makers and researchers.
My experience is that bringing together the long-term views on possibilities from researchers with the shorter-term needs of policy makers results in very good ideas about potential research priorities; however, it requires resources to facilitate this, as we know from theories on collective impact.
Doing this at too low a level in the system, e.g., at project level, will drive stakeholders mad with repeated demands for input; do it at a macro-level, and the conversation may not be specific enough.
Industry-level dialogue with a view to defining research needs seems a good idea to me, but often it ends up being determined by the source of funding. For example, I led a research centre that had funding from government authorities in one Australian state. We had mechanisms for regular dialogue that worked well in generating relevant research priorities, but they would probably have been just as relevant for the authorities in other states and territories as well.
Thanks Niki for sharing your experience of engagement with policymakers and stakeholders. I would be very interested in your mechanism for dialogue-based integration. In my opinion, as you already mentioned, the tools of integration are really important as they facilitate effective communication between all involved parties, but one must be smart enough to use them at different stages of a project and for specific people.
Thanks again for your reply, it is really useful for me
Tarek: In the case of the centre I referred to above, we established formal structures for dialogue, with regular meetings between researchers and key policy makers around our programs. It was quite resource intensive, however. A review of the centre took the view that the mechanisms for stakeholder engagement were effective and that we would grow into them. I think if we had been able to develop the centre into a national one (and there was support for this from the other Australian states), this would have become cost effective. However, the funders decided to keep the centre primarily focused on one state, and I left! We also found that seminars where we presented the work being done through the centre and then discussed the implications were effective. I also think that it is important that research leaders forge relationships with policy leaders in whatever way they can. We were regularly invited to attend Executive meetings and occasionally Board meetings.
In looking at this issue for other organisations I have recommended establishing a broad community of practice for research around a government policy maker, driven by an executive of key stakeholders – researchers, policy makers and practitioners.
I would be interested to hear your views. Paraphrasing a recent textbook on implementation science: we know stakeholder engagement is critical; we do not yet know what best practice is.
Stakeholder engagement is an area of implementation science that will never, in my opinion, have a guideline or magic-bullet solution, because it is highly dependent on human values and interests and on problem priority and framing.
As I mentioned before, prioritizing a problem that is on the agenda of policymakers, and presenting a partial solution in a language they can understand, is key for effective engagement. There will be some emergent complex problems, such as COVID-19, which will take over and have high priority for all parties to focus on.
Your recommendation for establishing a community of practice, while engaging continuously with policymakers and stakeholders, is a very good idea. However, your team must build continuous trust with them through effective communication skills, problem solving, observation and intuition. Sometimes you have to make trade-offs for the greater good; in my case, I engaged with a funder by inviting an esteemed professor in my specialty, who then took over, despite the fact that the idea and proposal were mine, because I was considered a young researcher in the eyes of the funder.
For engaging with stakeholders in industry (for R&D purposes), I advised my community service team: never engage before you are well prepared. Study their products, their local competitors, their possible problems and public opinion about them. Then invite expertise related to the development of their products if you don’t have it, hold several researcher meetings, frame the problem, and identify who is in charge (i.e., the CEO, etc.). Get someone who knows them personally well and knows their attributes, and bring the best, most knowledgeable communicators in your team, people who can speak simple language and have a range of technical and social skills. Present the problem and a convincing partial technical solution to it, amplify the economic value of the solution, and then you will be able to collaborate and empower. These steps take time, but they worked for me, and it is up to my team to gain their continual trust and sustain this collaboration in the future.
Thanks Tarek for this interesting reply. I couldn’t agree more. Whilst models and frameworks are useful, we have to remember that impact requires understanding and working with people. Just yesterday I was listening to a great podcast about stakeholder engagement and empathy, which got me thinking about how much of how we operate has neglected the human element of interaction, understanding, trust and ultimately empathy.
A different skill set is required for these activities, which leads to the questions of who should be doing this work, and whether we should bring human behaviour elements into how we build skills for translation, implementation and ultimately impact.
I love the thinking that is going on, and the conversation generated is so rich with considerations and thought-provoking comments.
Thanks again for your insightful input.
Thanks a lot Tamika for your feedback on my reply. I am so happy that the three of you interacted with my thoughts.
You summarized exactly what I think about how we can achieve research impact: frameworks are really useful in providing systematic steps for doing something, but we have to consider human nature and our diverse mental models about project value, research outcomes, etc., especially for the people who can do something about the problem.
We are diverse in everything, and adding research team members who can understand this diversity and adapt to it is crucial for achieving impact, in order to benefit our community with our research results. That is why I recently started to talk more with colleagues from other disciplines, with backgrounds in the humanities and social sciences, and it is amazing how differently they see the problem; it is really useful to have them on teams doing basic and applied research.
Thanks again, and I hope to read more posts from you all.
I totally agree with Melanie. Actually, the impacts most frequently perceived and reported by clinicians (nurses, pharmacists and physicians), and by patients and parents of young children, in our longitudinal (observational) studies are that this evidence (a) validates what I am thinking (cognitive impact), and (b) confirms that I am doing the right thing (legitimating use). Legitimating use of evidence is a great use of evidence, as it reassures clinicians and people that what they do is still the right thing to do, which is essential given the vast amount of research results published on a daily basis.
Thanks, Pierre, for these additional important points. The risk with such validating and confirming is confirmation bias, i.e., we all have a tendency to cherry-pick what we agree with and ignore what we don’t. Have you looked for this and found any evidence of it in your research?
Thanks for a really useful consolidation of your combined experience with research impact. I think the point about researchers not having sole responsibility for impact is a key and neglected issue.
Following on from Melanie’s comment, I want to emphasise the need to take a critical approach to understanding, planning and assessing impact. This has come up repeatedly in our discussions of research impact and knowledge-to-action at the ANU. Impact has become a catch-all phrase, which tends to be understood in scalar, quantitative terms, i.e., the more the better (like progress, or indeed innovation – see Stirling 2008). It is important to understand impact in more normative terms – research can bring about a variety of changes, in understanding, practice, policy and industry, which in turn have a variety of consequences and secondary effects. These can be positive and negative. If we become focused on increasing impact, without attention to the nature, quality and consequences of impact, we risk instrumentalising research and eroding the role that knowledge producers have in evaluating, and holding to account, the use of knowledge to wield power and make change (or not).
Stirling, A. (2008). “Opening Up” and “Closing Down”: Power, Participation, and Pluralism in the Social Appraisal of Technology. Science, Technology & Human Values, 33(2), 262-294.
Thanks Wendy for your great comment.
The understanding and co-creation of impact pathways is a critical first step, not only in defining what will occur and be delivered by the research component of the pathway, but also in defining what roles and actions will be needed for successful uptake and adoption. It is only through engaging and including all stakeholders across the pathways to impact that you can gain those insights. Together you can then articulate and scenario-plan the consequences captured as impacts. As you state, those can be both positive and negative, while also considering the relevance, reach and scale of the benefits.
In addition, I try to view the process from an “optimisation” perspective rather than only a “growth” view. As you rightly state, increasing impact cannot be our only approach or goal; building into our planning processes how to optimise multiple types of positive impact, in order to achieve the greatest returns on investment, is an alternative to consider. This approach assists not only in addressing issues such as efficiency, effectiveness and breadth in delivering benefits, but should also identify negative factors which could cause challenges, barriers, limitations or negative effects. Once again, co-creating pathways provides a greater capacity to consider and choose options from a variety of views, which can build a more comprehensive and considered plan. The responsibility for an “optimised” pathway is then shared across the plan, with each party holding the others accountable, as it is not the sole role of researchers to be held accountable, nor to hold others to account.
Many thanks for this post. Being open to continual learning is a crucial element to creating a sustainable impact culture. Without continual learning any impact structure will soon become managerial or ‘impact by numbers’. A lot of the lessons from this blog were also discussed by its authors at the recent Impact Frameworks and Cultural Change Conference. Their webinar is available to view here https://impactframeworks.info/sessions/optimising-implementation-for-impact/
Thank you for your thoughts Melanie. You are correct indeed that impact is much broader than practice change. I agree with your comments for the most part; however, I would perhaps diverge from your thinking that impact may not be to change anything. It may be a nuance of language in this case. Typically, impact is assessed as “change” in, or to, something. In rarer cases, research may lead to a decision not to use a particular intervention, treatment, policy and so on, which would then not change anything. KT, on the other hand, does not always lead to a measurable impact.
I agree wholeheartedly that we must provide funding for KT and reward researchers for their KT activities as a standard part of academic performance and evaluations.
Thank you for these interesting insights. Many resonate with me. I would add that I think caution is needed in linking impact uniquely to practice change. Not all impact occurs in the form of practice change. Research evidence can shape knowledge, attitudes and decision-making; there are many forms of impact, or benefit, which is a concept I prefer. The key is to be explicit about what impact your KT is aiming to achieve and then to evaluate against this. The same applies to influence. The impact of new evidence may not be to change anything. Research evidence can be symbolic, conceptual or instrumental. If we structure our systems around instrumental change, we devalue the benefits that can emerge from other types of evidence.
Additionally, while I wholeheartedly agree that researchers must have the capabilities and capacities needed to engage in KT, it is equally important that funders fund KT, whether KT-dissemination and/or KT-implementation, and that universities and research institutions recognize these efforts within performance evaluation and academic assessment.