A tool for transdisciplinary research planning and evaluation

By Brian Belcher, Rachel Claus, Rachel Davel, Stephanie Jones and Daniela Pinto


What are the characteristics of high-quality transdisciplinary research? As research approaches increasingly cross disciplinary boundaries and engage stakeholders in the research process to more effectively address complex problems, traditional academic research assessment criteria are insufficient and may even constrain the development and use of transdisciplinary research. There is a need for appropriate principles and criteria to guide transdisciplinary research practice and evaluation.


Theory of Change in a nutshell

By Heléne Clark


How can you plan to make change happen or evaluate the effectiveness of actions you took? How can you link desired long-term goals with all the conditions that must be in place? How can you map out a step-by-step pathway that highlights your assumptions and expectations?

Theory of Change (ToC) is a graphic and narrative explanation of how and why a change process is expected to happen in a particular context.

At its heart, Theory of Change spells out initiative or program logic. It defines long-term goals and then maps backward to identify the earlier changes (preconditions) thought necessary to achieve those goals.

Theory of Change explains a change process by diagrammatically modeling all the causal linkages in an initiative, i.e., its shorter-term, intermediate, and longer-term outcomes.
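As a rough illustration only (the outcomes and graph structure below are invented for this sketch, not drawn from any actual Theory of Change), the backward-mapping idea can be modeled as a small precondition graph: starting from the long-term goal, walk back through the changes that must be in place first, then read the pathway forward.

```python
# Hypothetical sketch of a Theory of Change as a precondition graph.
# Each outcome maps to the earlier changes (preconditions) it depends on.
# All outcome names here are invented for illustration.
preconditions = {
    "Improved community health": ["Clinics adopt new protocol"],
    "Clinics adopt new protocol": ["Staff trained", "Protocol approved"],
    "Staff trained": [],
    "Protocol approved": [],
}

def map_backward(goal, graph):
    """Return a forward-ordered pathway of outcomes leading to the goal.

    Visits preconditions before the outcomes that depend on them,
    mirroring the 'map backward from the long-term goal' exercise.
    """
    ordered = []

    def visit(outcome):
        for pre in graph.get(outcome, []):
            visit(pre)
        if outcome not in ordered:
            ordered.append(outcome)

    visit(goal)
    return ordered

pathway = map_backward("Improved community health", preconditions)
```

In this toy example the pathway reads from earliest preconditions to the long-term goal, which is the order a step-by-step Theory of Change diagram is typically read.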


Considerations for choosing frameworks to assess research impact

By Elena Louder, Carina Wyborn, Christopher Cvitanovic and Angela T. Bednarek


What should you take into account in selecting among the many frameworks for evaluating research impact?

In our recent paper (Louder et al., 2021) we examined the epistemological foundations and assumptions of several frameworks and drew out their similarities and differences to help improve the evaluation of research impact. In doing so we identified four key principles or ‘rules of thumb’ to help guide the selection of an evaluation framework for application within a specific context.

1. Be clear about underlying assumptions of knowledge production and definitions of impact

Clarifying from the start how research activities are intended to achieve impact is an important precursor to designing an evaluation. Furthermore, defining what you mean by impact is an important first step in selecting indicators to know whether you have achieved it.


Addressing societal challenges: From interdisciplinarity to research portfolios analysis

By Ismael Rafols


How can knowledge integration for addressing societal challenges be mapped, ‘measured’ and assessed?

In this blog post I argue that measuring averages or aggregates of ‘interdisciplinarity’ is not sufficiently focused for evaluating research aimed at societal contributions. Instead, one should take a portfolio approach to analyze knowledge integration as a systemic process over research landscapes; in particular, focusing on the directions, diversity and synergies of research trajectories.

There are two main reasons:


‘Measuring’ interdisciplinarity: from indicators to indicating

By Ismael Rafols


Indicators of interdisciplinarity are increasingly requested. Yet efforts to make aggregate indicators have repeatedly failed due to the diversity and ambiguity of understandings of the notion of interdisciplinarity. What if, instead of universal indicators, a contextualised process of indicating interdisciplinarity was used?

In this blog post I briefly explore the failure of attempts to identify universal indicators and the importance of moving from indicatORS to indicatING. By this I mean: an assessment of specific interdisciplinary projects or programs for indicating where and how interdisciplinarity develops as a process, given the particular understandings relevant for the specific policy goals.

This reflects the notion of directionality in research and innovation, which is gaining traction in policy.


Acknowledging and responding to criticisms of interdisciplinarity / Reconnaître et répondre aux critiques de l’interdisciplinarité

By Romain Sauzet

A French version of this post is available


What are the core arguments that critics of interdisciplinarity employ? Which of these criticisms can help to clarify what interdisciplinarity is and what it isn’t?

While some of the criticisms of interdisciplinarity stem from a general misunderstanding of its purpose or from a bad experience, others seem well-founded. Thus, while some must be rejected, others should be accepted.

I outline five different types of criticisms drawn from three main sources: (1) academic writings (see reference list), (2) an empirical survey on interdisciplinarity (Sauzet 2017), and (3) informal discussions.


Providing a richer assessment of research influence and impact

By Gabriele Bammer


How can we affirm, value and capitalise on the unique strengths that each individual brings to interdisciplinary and transdisciplinary research? In particular, how can we capture diversity across individuals, as well as the richness and distinctness of each individual’s influence and impact?

In the course of writing ten reflective narratives (nine single-authored and one co-authored), eleven of us stumbled on a technique that we think could have broader utility in assessing influence and impact, especially in research but also in education (Bammer et al., 2019).


A framework to evaluate the impacts of research on policy and practice

By Laura Meagher and David Edwards


What is meant by impact generation and how can it be facilitated, captured and shared? How can researchers be empowered to think beyond ‘instrumental’ impact and identify other changes generated by their work? How can the cloud of complexity be dispersed so that numerous factors affecting development of impacts can be seen? How can a way be opened for researchers to step back and reflect critically on what happened and what could be improved in the future? How can research teams and stakeholders translate isolated examples of impact and causes of impact into narratives for both learning and dissemination?


Trust and empowerment inventory for community groups

By Craig Dalton


Community groups are often consulted by researchers, government agencies and industry. The issues may be contentious and the relationship vexed by distrust and poor communication. Could an inventory capture the fundamental sources of community frustration and highlight scope for improvement in respect, transparency, fairness, co-learning, and meeting effectiveness from a community perspective?

The trust and empowerment inventory presented below is based on the main sources of community frustration that I have witnessed over two decades as a public health physician and researcher liaising with communities about environmental health risks; it is likely to have broader relevance.


Three “must have” steps to improve education for collaborative problem solving

By Stephen M. Fiore


Many environmental, social, and public health problems require collaborative problem solving because they are too complex for an individual to work through alone. This requires a research and technical workforce that is better prepared for collaborative problem solving. How can this be supported by educational programs from kindergarten through college? How can we ensure that the next generation of researchers and engineers are able to effectively engage in team science?


Assessing research contribution claims: The “what else test”

By Jess Dart


In situations where multiple factors, in addition to your research, are likely to have caused an observed policy or practice change, how can you measure your contribution? How can you be sure that the changes would not have happened anyway?

In making contribution claims there are three levels of rigour, each requiring more evaluation expertise and resourcing. These are summarised in the table below. The focus in this blog post is on the basic or minimum level of evaluation and specifically on the “what else test.”


Producing evaluation and communication strategies in tandem

By Ricardo Ramírez and Dal Brodhead


How can projects produce evaluation and communication strategies in tandem? Why should they even try? A major benefit of helping projects produce evaluation and communication strategies at the same time is that it helps projects clarify their theories of change; it helps teams be specific and explicit about their actions. Before returning to the benefits, let us begin with how we mentor projects to use this approach.
