Integration and Implementation Insights

‘Measuring’ interdisciplinarity: from indicators to indicating

By Ismael Rafols


Indicators of interdisciplinarity are increasingly requested. Yet efforts to make aggregate indicators have repeatedly failed due to the diversity and ambiguity of understandings of the notion of interdisciplinarity. What if, instead of universal indicators, a contextualised process of indicating interdisciplinarity were used?

In this blog post I briefly explore the failure of attempts to identify universal indicators and the importance of moving from indicatORS to indicatING. By this I mean: an assessment of specific interdisciplinary projects or programs that indicates where and how interdisciplinarity develops as a process, given the particular understandings relevant to the specific policy goals.

This reflects the notion of directionality in research and innovation, which is gaining traction in policy. Namely, in order to evaluate research, analyses need to go beyond quantity (scalars: unidimensional indicators) and take into account the orientations of the research contents (vectors: indicatING), as illustrated in the sketch below.
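
To make the scalar/vector distinction concrete, here is a minimal sketch (my own illustration, not from the talk or the blog post) using the Rao-Stirling diversity index, a scalar indicator commonly used in the interdisciplinarity literature. The field names, proportions and distances are invented; the point is that two projects with entirely different knowledge orientations can receive exactly the same score.

```python
# Minimal sketch: a scalar diversity indicator cannot distinguish projects
# whose disciplinary orientations (the underlying vectors) differ entirely.
# Rao-Stirling diversity: sum of p_i * p_j * d_ij over pairs of fields,
# where p are the shares of each field and d the distances between them.
# All field names and numbers below are invented for illustration.

def rao_stirling(profile, distance):
    """Collapse a disciplinary profile (a vector) into a single scalar."""
    fields = list(profile)
    return sum(
        profile[a] * profile[b] * distance[frozenset((a, b))]
        for i, a in enumerate(fields)
        for b in fields[i + 1:]
    )

# Two projects with very different orientations...
x = {"medicine": 0.5, "engineering": 0.5}  # health-technology oriented
y = {"ecology": 0.5, "economics": 0.5}     # environment-policy oriented
dist = {
    frozenset(("medicine", "engineering")): 0.6,
    frozenset(("ecology", "economics")): 0.6,
}

# ...yield exactly the same scalar score (0.15 in both cases):
print(rao_stirling(x, dist), rao_stirling(y, dist))
# The scalar records how much combination there is, not which directions
# are combined; that information lives only in the vectors (profiles).
```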

The failure of universal indicators of interdisciplinarity

In the last decade there have been multiple attempts to come up with universal indicators based on bibliometric data. They include: an attempt by the US National Science Board in its Science and Engineering Indicators 2010 report (National Science Board, 2010); a citation-based review of the UK’s interdisciplinary research commissioned by the United Kingdom higher education funding bodies and the Medical Research Council (Elsevier, 2015); and a Digital Science report on methodologies for identifying and assessing interdisciplinary research, whose subtitle asks, tellingly, “Do we know what we are measuring?” (Adams et al., 2016).

Scholarly reviews point in the same direction: Wang and Schneider (2020) found that commonly used interdisciplinarity measures yield inconsistent results, and Laursen et al. (2020) systematically reviewed the wide variety of approaches used to assess interdisciplinarity between 2000 and 2019.

These critiques do not mean that one should disregard or mistrust the many studies of interdisciplinarity that use indicators in sensible and useful ways. Rather, they point out that the methods are not stable or robust enough to be universal, and that each illuminates only a particular aspect of interdisciplinarity. The indicators are therefore valuable, but only for specific contexts or purposes.
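
The inconsistency is easy to demonstrate. Below is a toy example (my own sketch with invented numbers, not drawn from the studies cited) in which two standard measures from this literature, variety (the number of fields combined) and Rao-Stirling diversity, rank the same pair of projects in opposite ways, because each captures a different aspect of diversity.

```python
# Toy illustration of why reviews find interdisciplinarity measures
# inconsistent: two standard measures rank the same projects oppositely.
# All field names, shares and distances are invented.

def variety(profile):
    """Variety: simply the number of distinct fields drawn upon."""
    return len(profile)

def rao_stirling(profile, distance):
    """Rao-Stirling: sum of p_i * p_j * d_ij, which also weighs how
    cognitively distant the combined fields are."""
    fields = list(profile)
    return sum(
        profile[a] * profile[b] * distance[frozenset((a, b))]
        for i, a in enumerate(fields)
        for b in fields[i + 1:]
    )

# Project A combines two distant fields; project B spreads over three
# closely related ones.
a = {"physics": 0.5, "sociology": 0.5}
b = {"physics": 0.34, "chemistry": 0.33, "materials": 0.33}
dist = {
    frozenset(("physics", "sociology")): 0.9,
    frozenset(("physics", "chemistry")): 0.2,
    frozenset(("physics", "materials")): 0.2,
    frozenset(("chemistry", "materials")): 0.2,
}

for name, p in (("A", a), ("B", b)):
    print(name, "variety:", variety(p),
          "rao-stirling:", round(rao_stirling(p, dist), 3))
# Output: A scores 2 on variety but 0.225 on Rao-Stirling;
# B scores 3 on variety but only 0.067 on Rao-Stirling.
# Which project is 'more interdisciplinary' depends on the measure chosen.
```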

In summary, the failed policy reports and the findings of scholarly reviews suggest that universal indicators of interdisciplinarity cannot be meaningfully developed and that radically different analytical approaches are needed instead. These results are rather humbling for people like myself who have worked on methods for ‘measuring’ interdisciplinarity for many years. Yet they are consistent with critiques of conventional scientometrics and with efforts towards methods for ‘opening up’ evaluation, as discussed, for example, in “indicators in the wild” (Rafols, 2019).

From indicators to indicating interdisciplinarity

Does it make sense, then, to try to assess the degree of interdisciplinarity? Yes, it may make sense in so far as the evaluators or policy makers are specific about the purpose, the contexts and the particular understandings of interdisciplinarity that are meaningful in a given project. This means stepping out of the traditional statistical comfort zone and interacting with relevant stakeholders (scientists and knowledge users) about what type of knowledge combinations make valuable contributions – acknowledging that actors may differ in their understandings.

Making a virtue out of necessity, Marres and De Rijcke (2020) highlight that the ambiguity and situated nature of interdisciplinarity allow for “interesting opportunities to redefine, reconstruct, or reinvent the use of indicators,” and propose a participatory, abductive, interactive approach to indicator development. In opening up the processes of measurement in this way, they bring about a leap in framing: from indicatORS (as closed outputs) to indicatING (as an open process).

Marres and De Rijcke’s (2020) proposal may not come as a surprise to project evaluators, who are used to choosing indicators only after situating the evaluation and choosing relevant frames and criteria – i.e., evaluators are, in fact, already used to indicatING. But this approach means that aggregated or averaged measures are unlikely to be meaningful.

The exciting next step is to develop the processes for indicating interdisciplinarity. How can we bring together stakeholder participation and scientometrics to point out where and how interdisciplinary research matters? I would be interested to hear your ideas for ways to do this.

To find out more:

This blog post draws on my presentation at the “Workshop on the implications of ‘convergence’ for how the National Center for Science and Engineering Statistics measures the science and engineering workforce” (https://www.nationalacademies.org/event/10-22-2020/a-workshop-on-the-implications-of-convergence-for-how-the-national-center-for-science-and-engineering-statistics-measures-the-science-and-engineering-workforce), organised by the US National Academies of Sciences, Engineering and Medicine in October 2020. It also draws on another blog post: “On ‘measuring’ interdisciplinarity: from indicators to indicating” (https://leidenmadtrics.nl/articles/on-measuring-interdisciplinarity-from-indicators-to-indicating), which contains additional references.

References:

Adams, J., Loach, T. and Szomszor, M. (2016). Interdisciplinary research: methodologies for identification and assessment. Do we know what we are measuring? Digital Science. (Online – open access): https://digitalscience.figshare.com/articles/report/Digital_Research_Report_Interdisciplinary_Research_-_Methodologies_for_Identification_and_Assessment/4270289

Elsevier. (2015). A review of the UK’s interdisciplinary research using a citation-based approach. Report to the United Kingdom HE funding bodies and MRC, Elsevier. (Online – open access): https://www.ukri.org/publications/review-of-the-uks-interdisciplinary-research-2015/

Laursen, B. K., Anderson, K. and Motzer, N. (2020). Systematic review of the literature assessing interdisciplinarity from 2000 to 2019. Interactive visualization, Version 0.2. Producers: D. Quentin and K. L. Hondula, National Socio-Environmental Synthesis Center, Annapolis, Maryland, United States of America. (Online – open access): https://leadllc.shinyapps.io/evaluation-sankey/

Marres, N. and de Rijcke, S. (2020). From indicators to indicating interdisciplinarity: A participatory mapping methodology for research communities in-the-making. Quantitative Science Studies, 1, 3: 1041-1055. (Online – open access) (DOI): https://doi.org/10.1162/qss_a_00062

National Science Board. (2010). Science and Engineering Indicators 2010. National Science Foundation report NSB 10-01, Arlington, Virginia, United States of America. (Online – open access): https://www.heri.ucla.edu/PDFs/NSB.pdf

Rafols, I. (2019). S&T indicators in the wild: Contextualization and participation for responsible metrics. Research Evaluation, 28, 1: 7-22. (Online) (DOI): https://doi.org/10.1093/reseval/rvy030

Wang, Q. and Schneider, J. W. (2020). Consistency and validity of interdisciplinarity measures. Quantitative Science Studies, 1, 1: 239-263. (Online – open access) (DOI): https://doi.org/10.1162/qss_a_00011

Biography: Ismael Rafols PhD is a senior researcher at the Centre for Science and Technology Studies (CWTS) at Leiden University in the Netherlands and associate faculty at the Science Policy Research Unit (SPRU) at the University of Sussex in the United Kingdom. He works on science policy, developing novel approaches to science and technology indicators, including measures of interdisciplinarity, and using mixed methods to inform evaluation, foresight and research strategies.
