What are the core arguments that critics of interdisciplinarity employ? Which of these criticisms can help to clarify what interdisciplinarity is and what it isn’t?
While some criticisms of interdisciplinarity stem from a general misunderstanding of its purpose or from a bad experience, others seem well-founded. Thus, while some criticisms must be rejected, others should be accepted.
I outline five different types of criticisms drawn from three main sources: (1) academic writings (see reference list), (2) an empirical survey on interdisciplinarity (Sauzet 2017), and (3) informal discussions.
How can we affirm, value and capitalise on the unique strengths that each individual brings to interdisciplinary and transdisciplinary research? In particular, how can we capture diversity across individuals, as well as the richness and distinctness of each individual’s influence and impact?
In the course of writing ten reflective narratives (nine single-authored and one co-authored), eleven of us stumbled on a technique that we think could have broader utility in assessing influence and impact, especially in research but also in education (Bammer et al., 2019).
What is meant by impact generation and how can it be facilitated, captured and shared? How can researchers be empowered to think beyond ‘instrumental’ impact and identify other changes generated by their work? How can the cloud of complexity be dispersed so that numerous factors affecting development of impacts can be seen? How can a way be opened for researchers to step back and reflect critically on what happened and what could be improved in the future? How can research teams and stakeholders translate isolated examples of impact and causes of impact into narratives for both learning and dissemination?
In situations where multiple factors, in addition to your research, are likely to have caused an observed policy or practice change, how can you measure your contribution? How can you be sure that the changes would not have happened anyway?
In making contribution claims there are three levels of rigour, each requiring more evaluation expertise and resourcing. These are summarised in the table below. The focus in this blog post is on the basic or minimum level of evaluation and specifically on the “what else test.”
How can projects produce evaluation and communication strategies in tandem? Why should they even try? A major benefit of helping projects produce evaluation and communication strategies at the same time is that it helps projects clarify their theories of change; it helps teams be specific and explicit about their actions. Before returning to the benefits, let us begin with how we mentor projects to use this approach.
By Jane Palmer, Dena Fam, Tanzi Smith and Jenny Kent
How can research writing best be crafted to present transdisciplinarity? How can doctoral candidates effectively communicate to examiners a clear understanding of ‘data’, what it is and how the thesis uses it convincingly?
The authors have all recently completed transdisciplinary doctorates in the field of sustainable futures and use this experience to highlight the challenges of crafting a convincing piece of research writing that also makes claims of transdisciplinarity (Palmer et al., 2018). We propose four strategies for working with data convincingly when undertaking transdisciplinary doctoral research.
1. Make the data visible and argue for the unique or special way in which the data will be used
Some of the comments we received from our examiners reflected a sense that they had been provided with insufficient data, or that what was provided was not convincing as data.
It is important that the nature of data for the purposes of the research is clearly defined, and presented in a way that demonstrates its value in the research process. Richer contextualization of the data can help to make clear its value. This can include drawing attention to the remoteness of the field location, the rare access gained to the participants, and/or the unusual or special qualities of the data that make an original contribution to knowledge.
In these and other cases, it may be important to explain how a particular kind of data can valuably inform an argument qualitatively without reference to minimum quantitative thresholds. This is particularly relevant where a transdisciplinary doctoral candidate is crossing between physical/natural science, humanities and social science disciplines.
2. Be creative and explore the possibilities enabled by a broad interpretation of ‘data’
The advantage conferred on the candidate by taking a transdisciplinary approach needs to be made evident to the examiners, especially where the 'data' may appear to have been absorbed into the wider synthesizing narratives that are typical of transdisciplinary writing.
Adopting more creative writing techniques may help the examiner both to see the data, and to see the research as valuable. Transdisciplinary doctoral candidates may, given the complex feat of communication this requires, find it useful to seek training in creative writing or science communication skills.
By Tuomas J. Lahtinen, Joseph H. A. Guillaume, Raimo P. Hämäläinen
How can we identify and evaluate decision forks in a modelling project: those points where a different decision might lead to a better model?
Although modellers often follow so-called best practices, it is not uncommon for a project to go astray. Sometimes we become so embedded in the work that we do not take time to stop and think through options when decision points are reached.
One way of clarifying thinking about this phenomenon is to think of the path followed. The path is the sequence of steps actually taken in developing a model or in a problem solving case. A modelling process can typically be carried out in different ways, which generate different paths that can lead to different outcomes. That is, there can be path dependence in modelling.
Recently, we have come to understand the importance of human behaviour in modelling and the fact that modellers are subject to biases. Behavioural phenomena naturally affect the problem solving path. For example, the problem solving team can become anchored to one approach and only look for refinements in the model that was initially chosen. Due to confirmation bias, modellers may selectively gather and use evidence in a way that supports their initial beliefs and assumptions. The availability heuristic is at play when modellers focus on phenomena that are easily imagined or recalled. Moreover, particularly in high-interest cases, strategic behaviour of the project team members can affect the path of the process.
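Path dependence of this kind can be made concrete with a toy sketch. The example below (all names, scores, and data are hypothetical, purely for illustration) uses greedy forward selection of model components, where the value of adding a component depends on what has already been chosen. Starting from a different initial decision, the same greedy procedure arrives at a different final model.

```python
def greedy_select(scores, interactions, start, k=2):
    """Greedily pick k components, beginning from `start`.

    The score of adding a component depends on components already
    chosen, so the first decision constrains all later ones.
    """
    chosen = [start]
    while len(chosen) < k:
        best = max(
            (f for f in scores if f not in chosen),
            # base score plus interaction bonuses/penalties with choices so far
            key=lambda f: scores[f]
            + sum(interactions.get((c, f), 0) for c in chosen),
        )
        chosen.append(best)
    return chosen

# Hypothetical candidate components and pairwise interactions
scores = {"rain": 3, "soil": 3, "slope": 2}
interactions = {("rain", "slope"): 2, ("soil", "rain"): -2}

path_a = greedy_select(scores, interactions, start="rain")
path_b = greedy_select(scores, interactions, start="soil")
print(path_a, path_b)  # the two paths end in different models
```

The point is not the arithmetic but the structure: each step is locally defensible, yet the sequence of steps, not just the final evaluation, determines which model the team ends up defending.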
I am a firm believer in looking at interdisciplinary collaboration and knowledge exchange – or impact generation – as processes. If you can see something as a process, you can learn about it. If you can learn about it, you can do it better!
I find that this approach helps people to feel enfranchised, to believe that it is possible for them to open up what might have seemed to be a static black box and achieve understanding of the dynamics of how nouns like ‘interdisciplinarity’ or ‘knowledge exchange’ or ‘research impact’ can actually come to be.
How can you give others your hard-won insights so that their work can be more informed, efficient, and effective? As I've gotten older, it is something I think about more.
It is widely recognized that the environment is an integrated but also “open” system. As a result, when working with issues relating to the environment we are faced with the unsatisfying fact that we won’t know “truth”. We develop an understanding that is consistent with what we currently know and what we consider state-of-the-practice methods. But, we can never be sure that more observations or different methods would not result in different insights.
How do those building and using models decide whether a model should be trusted? While my thinking has evolved through modelling to predict the impacts of land use on losses of nutrients to the environment (such models are central to land use policy development), this under-discussed question applies to any model.
In principle, model development is a straightforward series of steps:
As a modeller, I often get requests from research or policy colleagues along the lines of ‘we want a model of the health system’. It’s relatively easy to recognise that ‘health system’ is too vague and needs explicit discussion about the specific issue to be modelled. It is much less obvious that the term ‘model’ also needs to be refined. In practice, different modelling methods are more or less appropriate for different questions. So how is the modelling method chosen?
What are the results of participatory modeling efforts? What contextual factors, resources and processes contribute to these results? Answering such questions requires the systematic and ongoing evaluation of processes, outputs and outcomes. At present, participatory modeling lacks a framework to guide such evaluation efforts. In this post I offer some initial thoughts on the features of such a framework.
A first step in developing an evaluation framework for participatory modeling is to establish criteria for processes, outputs, and outcomes. Such criteria would answer a basic question about what it means when we say that a participatory modeling process, output, or outcome is good, worthy, or meritorious.