Tracking stakeholder engagement and research impact

Community member post by Cathy Day

Cathy Day (biography)

Is there an easy and efficient way to keep track of stakeholder engagement and research impact?

My colleagues and I have developed a system with two components: (1) noting engagement and impact soon after they occur and (2) recording them in a way that enables the information to be extracted for whatever purpose is required. I describe the tracking spreadsheet, the recording process we use and then how the spreadsheet is used for reporting.

Tracking spreadsheet

The Microsoft Excel tracking spreadsheet has two parts: (1) the engagement or impact and (2) the research to which these are related. These are arranged in columns, which can be adapted for the needs of any particular group.

As shown in the extract from the spreadsheet below, the columns we use for engagement and impact are:

  • date of engagement/impact
  • activity
  • details
  • engagement (Yes/No)
  • impact (Yes/No)
  • lead researcher
  • other researchers.

For ‘activity’ we use a one- or two-word description selected from a drop-down list of the following activities:

  • media engagement (writing for or speaking about research)
  • media interest (report by the media on our research, without involving the researcher)
  • department contact (working with a national or state government body)
  • government contact (meeting or working with members of parliament)
  • stakeholder engagement
  • commissioned work
  • appointment (to a statutory or advisory body)
  • keynote address
  • conference presentation.

Minimising choice here lets us search and sort efficiently for particular purposes, such as university reporting requirements.

Extract from a tracking spreadsheet showing engagement (Eng) and impact (Imp) (supplied by Cathy Day)

As can be seen in the extract from the spreadsheet below, the columns we use for research are:

  • theme
  • project or sub-theme
  • paper/research/presentation
  • date of research
  • project identifier (not shown)
  • notes (not shown).

Our research group categorises all our investigations into broad, overarching themes such as ‘Indigenous health’, ‘cardiovascular disease’ or ‘methods’, and these are provided in a drop-down list. The ‘project or sub-theme’ column offers more detailed options such as ‘tobacco’, ‘social inequalities’ and ‘big data’.

Extract from a tracking spreadsheet showing research (supplied by Cathy Day)

Recording engagement and impact

Our entire research group meets fortnightly to keep each other informed of our work, to share ideas and to report to each other on all aspects of progress. These fortnightly group meetings include a standing agenda item on engagement and impact. At this point in the meeting, researchers inform each other of activity within the last fortnight including media coverage of their research, stakeholder engagement, advice provided to federal and state government agencies, collaborations with health-related non-government organisations and advocacy groups, changes in health practice based on their research and meetings with government ministers and members of parliament. This information is briefly noted in the meeting minutes.

A summary of this activity is then entered into the tracking spreadsheet by the research support team. Since this reporting happens on a fortnightly basis, the information is fresh in the researchers’ minds and detail is unlikely to be forgotten.

Reporting on engagement and impact

The columns were developed to match the variety of ways the group is required to report engagement and impact. The spreadsheet can be sorted by date of research or date of impact, as required, and filtered by the project identifier, to meet various reporting needs. For example, it can identify all activities for the group in a calendar year, all activities led by a particular researcher, or all activities associated with a project or published paper.
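As an illustrative sketch only (not the group’s actual workflow), this kind of reporting query can be expressed in a few lines of Python. The field names and rows below are hypothetical stand-ins for the spreadsheet columns described in this post:

```python
from datetime import date

# Hypothetical rows mirroring the tracking spreadsheet's columns
tracker = [
    {"date": date(2019, 2, 4), "activity": "media engagement",
     "engagement": True, "impact": False,
     "lead_researcher": "A. Smith", "project_id": "P01"},
    {"date": date(2019, 3, 12), "activity": "keynote address",
     "engagement": True, "impact": True,
     "lead_researcher": "B. Jones", "project_id": "P02"},
    {"date": date(2020, 1, 20), "activity": "commissioned work",
     "engagement": False, "impact": True,
     "lead_researcher": "A. Smith", "project_id": "P01"},
]

# All activities for the group in a calendar year
in_2019 = [r for r in tracker if r["date"].year == 2019]

# All activities led by a particular researcher
by_smith = [r for r in tracker if r["lead_researcher"] == "A. Smith"]

# All activities associated with one project
for_p01 = [r for r in tracker if r["project_id"] == "P01"]
```

A real spreadsheet export (e.g., a CSV file) could be loaded into the same structure with Python’s csv module; the point is simply that consistent, constrained column values make each reporting need a one-line filter.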

The tracking spreadsheet allows for ambiguity (e.g., imprecision in the date) and flexibility. For example, the two main Australian funding agencies categorise engagement and impact differently: the Australian Research Council defines ‘engagement’ as a subset of ‘impact’, while Australia’s National Health and Medical Research Council considers them to be separate outcomes. The tracking spreadsheet therefore allows an activity to be recorded as engagement, impact or both, and no column is mandatory, to accommodate engagement or impact that doesn’t fall neatly into the categories.


The use of a standardised, centralised repository for recording engagement and impact soon after it occurs has enabled the group to:

  • rapidly answer ad hoc queries about research, such as ‘what has been the impact of the group’s work on smoking by Indigenous Australians?’
  • formally report to various funding agencies
  • help researchers frame their promotion applications.

Reflections on these reports have, in turn, enabled the group to identify strategies for maximising stakeholder engagement and research impact.

What strategies have you found useful for keeping track of stakeholder engagement and/or research impact?

Biography: Cathy Day PhD is Research Manager of the Epidemiology for Policy and Practice Group in the National Centre for Epidemiology and Population Health, Research School of Population Health at The Australian National University.

Cathy Day is a member of blog partner PopulationHealthXchange, which is in the Research School of Population Health at The Australian National University.

Toolboxes as learning aids for dealing with complex problems

Community member post by Stefan Hilser

Stefan Hilser (biography)

How can toolboxes more effectively support those learning to deal with complex societal and environmental problems, especially novices such as PhD students and early career researchers?

In this blog post, I briefly describe four toolboxes and assess them for their potential to assist learning processes. My main aim is to open a discussion about the value of the four toolboxes and how they could better help novices.

Before describing the toolboxes, I outline the learning processes I have in mind, especially the perspective of legitimate peripheral participation.

Learning is not just the internalization of facts or the mere acquisition of skills through ‘learning by doing’. It is a process that is constituted, in a mutually reinforcing way, by the whole person, their activities and the world in which these are situated.

For novices in the academic world, it is about becoming a member of a community of practice through a process that Lave and Wenger (1991) call legitimate peripheral participation. This involves starting with partial involvement in a community of practice, which enables novices to gain familiarity and at the same time move towards fuller forms of participation. Peripheral forms of participation are “a way of gaining access to sources for understanding through growing involvement” (Lave and Wenger 1991: 37).

This learning process is shaped by how the community of practice structures its resources and how access to them and visibility are mediated through artefacts and the language used to describe practice (which is also a practice in itself). One type of artefact that shapes legitimate peripheral participation is toolboxes, four of which I present in this blog post.

The four toolboxes

The toolboxes come from different communities of practice, have different foci and different types and numbers of resources, and have their own particular strengths that provide value for their users. The table below gives a short overview of these four elements for each of the toolboxes.

Overview of four toolboxes (compiled by Stefan Hilser)

How do the toolboxes support learning?

As part of my PhD I am reviewing these four toolboxes and analysing how their structure supports learning through legitimate peripheral participation. I do so from a learner’s perspective and based on my experience with other platforms and tools that are built around the ideas of collaboration and knowledge co-creation (i.e., Stack Overflow, GitHub and Wikipedia).

Strengths of the toolboxes: Something that is often missing from journal articles is “insights into … practice” in a way that is “showing, not just telling” (Friedman, Gray and Ortiz Aragón 2018: 3). In contrast, the toolboxes give access to a rich body of knowledge about methods, experience reports, approaches, tools and more. Each toolbox facilitates navigation of that knowledge through categories, filters and search functions. This helps novices limit the visibility of resources, reducing – to a certain extent – the risk of being overwhelmed by too much information.

Further, the development of tools can serve as a connecting point between theory and methods on the one hand and practical know-how on the other. This explicit focus on research practice can also help cross language barriers among different communities of practice. For the toolboxes described here, for example, methods and cases describing the underlying practice can help bridge the transdisciplinary, team science, and integration and implementation sciences communities of practice.

Room for learning: Despite existing connection points (e.g., similar methods) among the toolboxes and a rich body of knowledge about theory and practice, the current infrastructure of the four toolboxes does not actually connect them. Moreover, the toolboxes do not offer many possibilities for novices to learn through legitimate peripheral participation. While they allow users to navigate the existing content, they do not provide structures that support learners (especially novices) in participating in the toolboxes and in learning about the research approaches and communities of practice they represent. The main hurdles are:

  • Lack of dedicated spaces for questions and discussions that are relevant to the learner; and,
  • Lack of structures that support co-creation and synthesis that would assist the learner.

With regard to dedicated spaces for questions and discussions: The td-net toolbox does not provide scope for discussion. The SciTS toolkit offers a space for comments under each of its resources. The i2S repository does not allow commenting, but the i2Insights blog is a space where users can comment on the different blog entries on tools (methods, concepts and theories), institutionalization, education, or case studies. The td Academy allows for comments under each of the themes, subthemes and methods.

In the three toolboxes that do have space for discussion, the discussions are all tied to a specific resource (e.g., a concept, method or blog entry). This restricts novices’ ability to start their own discussion topics or ask questions, because most discussions are framed by the experts providing the content. Such discussions can intimidate novices rather than encourage them to participate.

In terms of co-creation and synthesis: The td Academy provides a synthesis of key topics (so far only in German), developed as a result of a series of dialogues, but it does not provide structures that support learners in participating in that synthesis. Conversely, the SciTS toolkit offers the possibility of editing existing content, which can be seen as a form of co-creation, but no synthesis of key topics. I consider this form of participation more suitable for experts, as it requires a certain knowledge of the concepts and methods.

What has your experience been?

What are your thoughts on:

  • Using or contributing to these toolboxes?
  • What do you like about the toolboxes?
  • What is missing from the toolboxes?
  • Do you have experience with other toolboxes or platforms that you think these toolboxes could learn from?
  • Would you be interested in a platform that connected these four toolboxes? Would you be interested if such a platform enabled more participation, collaboration and knowledge co-creation? What ideas do you have for making such a platform work?

To find out more about the four toolboxes:

Friedman, V. J., Gray, P. and Ortiz Aragón, A. (2018). From doing to writing action research: A plea to ARJ authors. Action Research, 16, 1: 3-6. Online (DOI):

Lave, J. and Wenger, E. (1991). Situated learning: Legitimate peripheral participation. Cambridge University Press: Cambridge, United Kingdom.

Biography: Stefan Hilser is a PhD student in the research group “Processes of Sustainability Transformation” at Leuphana University, Lueneburg, Germany. He is researching the collaboration processes of the team of which he is also a member and aiming to implement an intervention that supports their learning. This fits with his broader interests in how best to support learning in inter- and transdisciplinary research.

This blog post aims to capture discussion from attendees at Stefan Hilser’s presentation at the Leverage Points conference at Leuphana University in Lueneburg, Germany, February 6-8, 2019. Comments from those unable to attend the conference are, of course, also very welcome.

Metacognition as a prerequisite for interdisciplinary integration

Community member post by Machiel Keestra

Machiel Keestra (biography)

What’s needed to enable the integration of concepts, theories, methods, and results across disciplines? Why is communication among experts important, but not sufficient? Interdisciplinary experts must also meta-cognize: both individually and as a team they must monitor, evaluate and regulate their cognitive processes and mental representations. Without this, expertise will function suboptimally both for individuals and teams. Metacognition is not an easy task, though, and deserves more attention in both training and collaboration processes than it usually gets. Why is metacognition so challenging and how can it be facilitated?

Understanding cognitive processes and representations

Whenever we engage with any cognitive or behavioral task, our brain employs a mental representation or knowledge structure that corresponds to a word, image, or other information pertaining to that task. Experience contributes to further enrichment and structuring of that representation. A beginner’s mental representation of a new word (e.g., ‘space shuttle’) may thus contain just its letters and an image of the object, yet additional experience and information is automatically cognitively integrated with that representation, providing associations with mental representations of shuttle parts, of launching and landing actions, images and so on.

The superior performance of experts relies upon their having assembled many more, and more complex, mental representations pertaining to their field of expertise. Examples include chess masters’ accurate and fast recognition, recall and processing of a large number of complex chess positions; musicians’ complex sequences of movements when sight-reading; sports champions’ fast strategic decisions; and mathematicians’ immediate detection of a mistake in a function.

Importantly, we do not need to learn those mental representations explicitly, nor are we often able to make them explicit once we have learned them. Indeed, we usually acquire and employ them automatically and implicitly in our cognition and behavior: for example, young children don’t explicitly learn grammatical rules yet can use them well.

Problems with expertise

Handy as such automatic and implicit handling may be, it also contributes to ‘brittleness’ and other flaws of individual expertise. In particular, experts:

  • are overconfident in exceptional situations
  • often demonstrate a bias or fixedness towards habitual responses
  • tend to rely upon their expertise even in neighbouring yet different domains
  • often display a lack of creativity compared to beginners.

These are direct consequences of the cognitive processes upon which expertise rests and the knowledge structures or mental representations that are involved in those processes.

Particularly relevant for interdisciplinarity is the disappointing fact that expertise can make it more difficult for us to recognize how the insights of an expert in another domain can add to our knowledge or performance. Making our cognitive processes and representations explicit by metacognizing is a prerequisite for recognizing implicit assumptions and for acknowledging gaps in knowledge and methods that another expert might help to fill, as shown in the first figure below.

An expert engages in meta-cognition about their thinking and knowing. Here, the expert reflects specifically about their learning process (in the cloud) and the set of representations it has yielded. Such reflections also prepare them for the integration of an additional insight from another expert, such as the green square added here (from Keestra 2017: 142).

Team collaboration and metacognition

Adding another level of complexity in the case of team work is the fact that experts automatically develop mental representations related to their team work, in addition to, and connected with, those pertaining to their individual expertise. These representations concern the ‘who, what, why, when, and how’ of the team, containing information about the team itself, its task, process-related information and a representation of its overarching goal.

Team mental representations bring along similar risks to those mentioned above. For example, overlapping representational contents that are shared by all team members are quickly recognized and tend to dominate joint cognition and actions. Analogous to an individual expert’s bias, a team risks slipping into groupthink when it does not engage in team metacognition.

For team members to recognize and make effective use of each other’s non-overlapping representations and skills, in contrast, requires extra time and effort devoted to team metacognition in addition to individual metacognition, as represented in the figure below.

An interdisciplinary team of experts together develops a more comprehensive understanding of a phenomenon – represented by the three-dimensional cube composed of the different elements each of them contributes. Their joint or team metacognition upon their interdisciplinary collaboration facilitates their development of an interdisciplinary integration of their distinct mental representations of the phenomenon (from Keestra 2017: 156).

Working in a team also has metacognitive benefits. Being confronted with the metacognitive self-reflections and the self-regulatory strategies of others and receiving feedback from them can help in recognizing and articulating individual metacognition. In addition, individuals can feel more motivated to metacognize when functioning in teams.

However, to achieve such results, team metacognition must be adequately guided in order to avoid unnecessary confusion, which can be exacerbated when status and cultural differences are insufficiently addressed. Adequate planning of research phase-specific rounds of metacognition is important, and team leaders should formulate relevant prompts or questions to elicit the required individual and team metacognitive reflections and discussions.

Useful prompts

A few prompts are listed here; more can be found in an appendix to Keestra (2017):

  • What added value could research from the humanities/social sciences/sciences have for your research?
  • Why do you think that the task you propose to perform is optimal for solving the problem at stake? Would an alternative route be possible? Do you have doubts about the routes proposed by others?
  • Of all the features of the problem under scrutiny (as perhaps represented in a figure), what does and what does not make sense from your disciplinary perspective? Or what is especially difficult to understand? What would you like to know more about?
  • What goals have you as a team determined and what is the plan for reaching those goals? What are the issues that might arise with the current plan? Do your answers to these questions vary among individual team members?
  • Did new insights emerge, or did a new situation present itself to the team, that make you feel you should revisit some of your previous personal contributions to the team work?


What has your experience been with metacognition challenges in interdisciplinary work? Have you ever seen or experienced groupthink? Are there other prompts that you have found useful? How have you incorporated metacognition in an interdisciplinary team project?

To find out more:
Keestra, M. (2017). Meta-cognition and reflection by interdisciplinary experts: Insights from cognitive science and philosophy. (With an appendix of prompts or questions for metacognition). Issues in Interdisciplinary Studies, 35, 121-169. Online (abstract):

Further reading:
Wiltshire, T. J., Rosch, K., Fiorella, L. and Fiore, S. M. (2014). Training for collaborative problem solving: Improving team process and performance through metacognitive prompting. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 58, 1: 1154-1158. Online (DOI):

Biography: Machiel Keestra PhD is a tenured assistant professor of philosophy at the Institute for Interdisciplinary Studies at the University of Amsterdam, the Netherlands. He teaches philosophy of science and interdisciplinary research in the Natural and Social Sciences bachelor and in the Brain and Cognitive Science master programs. He is a researcher at the Institute for Logic, Language and Computation, focusing on the philosophy of cognitive neuroscience. He is past-president of the international Association for Interdisciplinary Studies (AIS) and co-chairs the upcoming AIS conference on ‘Interdisciplinarity in Global Contexts’, October 24-26, 2019, in Amsterdam.

Trust and empowerment inventory for community groups

Community member post by Craig Dalton

Craig Dalton (biography)

Community groups are often consulted by researchers, government agencies and industry. The issues may be contentious and the relationship vexed by distrust and poor communication. Could an inventory capture the fundamental sources of community frustration and highlight scope for improvement in respect, transparency, fairness, co-learning, and meeting effectiveness from a community perspective?

The trust and empowerment inventory presented below is based on the main sources of community frustration that I have witnessed over two decades as a public health physician and researcher liaising with communities about environmental health risks, and it is likely to have broader relevance. Key issues include not being listened to; not being fully informed; …

Three “must have” steps to improve education for collaborative problem solving

Community member post by Stephen M. Fiore

Stephen M. Fiore (biography)

Many environmental, social, and public health problems require collaborative problem solving because they are too complex for an individual to work through alone. This requires a research and technical workforce that is better prepared for collaborative problem solving. How can this be supported by educational programs from kindergarten through college? How can we ensure that the next generation of researchers and engineers are able to effectively engage in team science?

Drawing from disciplines that study cognition, collaboration, and learning, colleagues and I (Graesser et al., 2018) make three key recommendations to improve research and education with a focus on instruction, opportunities to practice, and assessment. Across these is the need to attend to the core features of teamwork as identified in the broad research literature on groups and teams.

Embracing tension for energy and creativity in interdisciplinary research

Community member post by Liz Clarke and Rebecca Freeth

Liz Clarke (biography)

Tensions inevitably arise in inter- and transdisciplinary research. Dealing with these tensions and resulting conflicts is one of the hardest things to do. We are meant to avoid or get rid of conflict and tension, right? Wrong!

Tension and conflict are not only inevitable; they can be a source of positivity, emergence, creativity and deep learning. By tension we mean the pull between the seemingly contradictory parts of a paradox, such as parts and wholes, stability and chaos, and rationality and creativity. These tensions can foster interpersonal conflict, particularly when team members treat the apparent contradictions as if only one were ‘right’. …