By Ismael Rafols
Indicators of interdisciplinarity are increasingly requested. Yet efforts to make aggregate indicators have repeatedly failed due to the diversity and ambiguity of understandings of the notion of interdisciplinarity. What if, instead of universal indicators, a contextualised process of indicating interdisciplinarity was used?
In this blog post I briefly explore the failure of attempts to identify universal indicators and the importance of moving from indicatORS to indicatING. By this I mean: an assessment of specific interdisciplinary projects or programs that indicates where and how interdisciplinarity develops as a process, given the particular understandings relevant to the specific policy goals.
This reflects the notion of directionality in research and innovation, which is taking hold in policy. Namely, in order to evaluate research, analyses need to go beyond quantity (scalars: unidimensional indicators) and take into account the orientations of the research contents (vectors: indicatING).
The failure of universal indicators of interdisciplinarity
In the last decade there have been multiple attempts to come up with universal indicators based on bibliometric data. They include:
- a 2010 study commissioned by the US National Science Foundation from SRI International (a non-profit research institute) which concluded that it was premature “to identify one or a small set of indicators or measures of interdisciplinary research… in part, because of a lack of understanding of how current attempts to measure conform to the actual process and practice of interdisciplinary research” (National Science Board, 2010, p. 5-35).
- Two independent reports commissioned by the UK research councils in 2015 to compare the overall degree of interdisciplinarity of countries.
- One (Elsevier, 2015) produced the unforeseen result that China and Brazil were more interdisciplinary than the UK or the US – which I interpret as an artefact of unconventional (rather than interdisciplinary) citation patterns of ‘emergent’ countries.
- The other (Adams et al., 2016), with a multiple-methods approach, was interestingly titled: ‘Do We Know What We Are Measuring?’ and concluded that: “…choice of data, methodology and indicators can produce seriously inconsistent results, despite a common set of disciplines and countries. This raises questions about how interdisciplinarity is identified and assessed. It also reveals a disconnect between the research metadata that analysts typically use and the research activity they assume they have analysed. The report highlights issues around the responsible use of ‘metrics’ and the importance of analysts clarifying the link between indicators and policy targets.”
- A quantitative literature review (Wang and Schneider 2020) “corroborate[d] recent claims that the current measurements of interdisciplinarity in science studies are both confusing and unsatisfying” and thus “question[ed] the validity of current measures and argue[d] that we do not need more of the same, but rather something different in order to be able to measure the multidimensional and complex construct of interdisciplinarity.” They also produced a heatmap depicting the correlation across a battery of measures of interdisciplinarity, which showed that there are many measures that are not in agreement.
- A broader review of evaluations of interdisciplinarity by Laursen and colleagues (2020) also found a striking variety of approaches (and indicators) depending on the contexts, purpose and criteria of the assessment. They highlighted a lack of “rigorous evaluative reasoning”, i.e., insufficient clarity on how the criteria behind indicators relate to the intended goals of interdisciplinarity.
These critiques do not mean that one should disregard and mistrust the many studies of interdisciplinarity that use indicators in sensible and useful ways. The critiques point out that the methods are not stable or robust enough, or that they only illuminate a particular aspect. Therefore, they are valuable but only for specific contexts or purposes.
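To make concrete what one of these context-specific measures looks like, here is a minimal sketch of Rao-Stirling diversity, a measure widely used in this literature, which combines the variety of disciplines a publication cites, the balance of attention across them, and their cognitive disparity. The fields, citation shares and distances below are purely illustrative, not real data:

```python
def rao_stirling(p, d):
    """Rao-Stirling diversity: sum over discipline pairs (i != j) of
    p_i * p_j * d_ij, where p_i is the share of cited references in
    discipline i and d_ij is the cognitive distance between i and j."""
    return sum(p[i] * p[j] * d[i][j]
               for i in p for j in p if i != j)

# Hypothetical citation profile of a single publication: shares of
# cited references per field, and symmetric pairwise distances in [0, 1]
# (e.g., derived from how rarely two fields cite each other).
p = {"bio": 0.5, "chem": 0.3, "soc": 0.2}
d = {"bio":  {"bio": 0.0, "chem": 0.2, "soc": 0.9},
     "chem": {"bio": 0.2, "chem": 0.0, "soc": 0.8},
     "soc":  {"bio": 0.9, "chem": 0.8, "soc": 0.0}}

print(round(rao_stirling(p, d), 3))  # prints 0.336
```

Even this simple formula illustrates why results are so sensitive to analytical choices: the score depends entirely on how disciplines are delineated and how the distance matrix is constructed, which is precisely the kind of context-dependence the reviews above highlight.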
In summary, the failed policy reports and the findings of scholarly reviews suggest that universal indicators of interdisciplinarity cannot be meaningfully developed and that, instead, we should switch to radically different analytical approaches. These results are rather humbling for people like myself who worked on methods for ‘measuring’ interdisciplinarity for many years. Yet they are consistent with critiques of conventional scientometrics and efforts towards methods for ‘opening up’ evaluation, as discussed, for example, in “indicators in the wild” (Rafols, 2019).
From indicators to indicating of interdisciplinarity
Does it make sense, then, to try to assess the degree of interdisciplinarity? Yes, it may make sense in so far as the evaluators or policy makers are specific about the purpose, the contexts and the particular understandings of interdisciplinarity that are meaningful in a given project. This means stepping out of the traditional statistical comfort zone and interacting with relevant stakeholders (scientists and knowledge users) about what type of knowledge combinations make valuable contributions – acknowledging that actors may differ in their understandings.
Making a virtue out of necessity, Marres and De Rijcke (2020) highlight that the ambiguity and situated nature of interdisciplinarity allows for “interesting opportunities to redefine, reconstruct, or reinvent the use of indicators,” and propose a participatory, abductive, interactive approach to indicator development. In opening up the processes of measurement in this way, they bring about a leap in framing: from indicatORS (as closed outputs) to indicatING (as an open process).
Marres and De Rijcke’s (2020) proposal may not come as a surprise to project evaluators, who are used to choosing indicators only after situating the evaluation and choosing relevant frames and criteria – i.e., evaluators are, in fact, already used to indicatING. But this approach means that aggregated or averaged measures are unlikely to be meaningful.
The exciting next step is to develop the processes for indicating interdisciplinarity. How can we bring together stakeholder participation and scientometrics to point out where and how interdisciplinary research matters? I would be interested to hear your ideas for ways to do this.
To find out more:
This blog post draws on my presentation at the “Workshop on the implications of ‘convergence’ for how the National Center for Science and Engineering Statistics measures the science and engineering workforce” (https://www.nationalacademies.org/event/10-22-2020/a-workshop-on-the-implications-of-convergence-for-how-the-national-center-for-science-and-engineering-statistics-measures-the-science-and-engineering-workforce). It was organised by the US National Academies of Sciences, Engineering and Medicine in October 2020. It draws on another blog post: “On ‘measuring’ interdisciplinarity: from indicators to indicating” (https://leidenmadtrics.nl/articles/on-measuring-interdisciplinarity-from-indicators-to-indicating) which contains additional references.
Adams, J., Loach, T. and Szomszor, M. (2016). Interdisciplinary research: methodologies for identification and assessment. Do we know what we are measuring? Digital Science. (Online – open access): https://mrc.ukri.org/documents/pdf/interdisciplinarity-research-commentary/
Elsevier. (2015). A review of the UK’s interdisciplinary research using a citation-based approach. Report to the United Kingdom HE funding bodies and MRC, Elsevier. (Online – open access): https://www.elsevier.com/research-intelligence/resource-library/a-review-of-the-uks-interdisciplinary-research-using-a-citation-based-approach
Laursen, B. K., Anderson, K. and Motzer, N. (2020). Systematic review of the literature assessing interdisciplinarity from 2000 to 2019. Interactive visualization, Version 0.2. Producers: D. Quentin and K., L. Hondula, National Center for Socio-Environmental Synthesis, Annapolis, Maryland, United States of America. (Online – open access): https://shiny.sesync.org/apps/evaluation-sankey
Marres, N. and de Rijcke, S. (2020). From indicators to indicating interdisciplinarity: A participatory mapping methodology for research communities in-the-making. Quantitative Science Studies, 1, 3: 1041-1055. (Online – open access) (DOI): https://doi.org/10.1162/qss_a_00062
National Science Board. (2010). Science and Engineering Indicators 2010. National Science Foundation report NSB 10-01, Arlington, Virginia, United States of America (Online – open access): https://www.heri.ucla.edu/PDFs/NSB.pdf
Rafols, I. (2019). S&T indicators in the wild: Contextualization and participation for responsible metrics. Research Evaluation, 28, 1: 7-22. (Online) (DOI): https://doi.org/10.1093/reseval/rvy030
Wang, Q. and Schneider, J. W. (2020). Consistency and validity of interdisciplinarity measures. Quantitative Science Studies, 1, 1: 239-263. (Online – open access) (DOI): https://doi.org/10.1162/qss_a_00011
Biography: Ismael Rafols PhD is a senior researcher at the Centre for Science and Technology Studies (CWTS) at Leiden University in the Netherlands and associate faculty at the Science Policy Research Unit (SPRU) at the University of Sussex in the United Kingdom. He works on science policy, developing novel approaches to science and technology indicators, including measures of interdisciplinarity, and using mixed methods to inform evaluation, foresight and research strategies.