How can we affirm, value and capitalise on the unique strengths that each individual brings to interdisciplinary and transdisciplinary research? In particular, how can we capture diversity across individuals, as well as the richness and distinctness of each individual’s influence and impact?
In the course of writing ten reflective narratives (nine single-authored and one co-authored), eleven of us stumbled on a technique that we think could have broader utility in assessing influence and impact, especially in research but also in education (Bammer et al., 2019).
What is meant by impact generation and how can it be facilitated, captured and shared? How can researchers be empowered to think beyond ‘instrumental’ impact and identify other changes generated by their work? How can the cloud of complexity be dispersed so that the numerous factors affecting the development of impacts can be seen? How can a way be opened for researchers to step back and reflect critically on what happened and what could be improved in the future? How can research teams and stakeholders translate isolated examples of impact and causes of impact into narratives for both learning and dissemination?
In situations where multiple factors, in addition to your research, are likely to have caused an observed policy or practice change, how can you measure your contribution? How can you be sure that the changes would not have happened anyway?
In making contribution claims there are three levels of rigour, each requiring more evaluation expertise and resourcing. These are summarised in the table below. The focus in this blog post is on the basic or minimum level of evaluation and specifically on the “what else test.”
How can projects produce evaluation and communication strategies in tandem? Why should they even try? A major benefit of helping projects produce evaluation and communication strategies at the same time is that it helps projects clarify their theories of change; it helps teams be specific and explicit about their actions. Before returning to the benefits, let us begin with how we mentor projects to use this approach.
By Jane Palmer, Dena Fam, Tanzi Smith and Jenny Kent
How can research writing best be crafted to present transdisciplinarity? How can doctoral candidates effectively communicate to examiners a clear understanding of ‘data’, what it is and how the thesis uses it convincingly?
The authors have all recently completed transdisciplinary doctorates in the field of sustainable futures and use this experience to highlight the challenges of crafting a convincing piece of research writing that also makes claims of transdisciplinarity (Palmer et al., 2018). We propose four strategies for working with data convincingly when undertaking transdisciplinary doctoral research.
1. Make the data visible and argue for the unique or special way in which the data will be used
Some of the comments we received from our examiners reflected a sense that they had been provided with insufficient data, or that what was provided was not convincing as data.
It is important that the nature of data for the purposes of the research is clearly defined, and presented in a way that demonstrates its value in the research process. Richer contextualization of the data can help to make clear its value. This can include drawing attention to the remoteness of the field location, the rare access gained to the participants, and/or the unusual or special qualities of the data that make an original contribution to knowledge.
In these and other cases, it may be important to explain how a particular kind of data can valuably inform an argument qualitatively without reference to minimum quantitative thresholds. This is particularly relevant where a transdisciplinary doctoral candidate is crossing between physical/natural science, humanities and social science disciplines.
2. Be creative and explore the possibilities enabled by a broad interpretation of ‘data’
The advantage conferred on the candidate by taking a transdisciplinary approach needs to be made evident to the examiners, especially where the ‘data’ may appear to have been absorbed into the wider synthesizing narratives that are typical of transdisciplinary writing.
Adopting more creative writing techniques may help the examiner both to see the data, and to see the research as valuable. Transdisciplinary doctoral candidates may, given the complex feat of communication this requires, find it useful to seek training in creative writing or science communication skills.
By Tuomas J. Lahtinen, Joseph H. A. Guillaume and Raimo P. Hämäläinen
How can we identify and evaluate decision forks in a modelling project: those points where a different decision might lead to a better model?
Although modellers often follow so-called best practices, it is not uncommon for a project to go astray. Sometimes we become so embedded in the work that we do not take time to stop and think through the options when decision points are reached.
One way of clarifying this phenomenon is to consider the path followed. The path is the sequence of steps actually taken in developing a model or in a problem-solving case. A modelling process can typically be carried out in different ways, generating different paths that can lead to different outcomes. That is, there can be path dependence in modelling.
Recently, we have come to understand the importance of human behaviour in modelling and the fact that modellers are subject to biases. Behavioural phenomena naturally affect the problem-solving path. For example, the problem-solving team can become anchored to one approach and only look for refinements to the model that was initially chosen. Due to confirmation bias, modellers may selectively gather and use evidence in a way that supports their initial beliefs and assumptions. The availability heuristic is at play when modellers focus on phenomena that are easily imagined or recalled. Moreover, particularly in high-interest cases, strategic behaviour by project team members can affect the path of the process.