‘Measuring’ interdisciplinarity: from indicators to indicating

By Ismael Rafols


Indicators of interdisciplinarity are increasingly requested. Yet efforts to make aggregate indicators have repeatedly failed due to the diversity and ambiguity of understandings of the notion of interdisciplinarity. What if, instead of universal indicators, a contextualised process of indicating interdisciplinarity was used?

In this blog post I briefly explore the failure of attempts to identify universal indicators and the importance of moving from indicatORS to indicatING. By this I mean: an assessment of specific interdisciplinary projects or programs that indicates where and how interdisciplinarity develops as a process, given the particular understandings relevant to the specific policy goals.

This reflects the notion of directionality in research and innovation, which is gaining hold in policy. Namely, in order to evaluate research, analyses need to go beyond quantity (scalars: unidimensional indicators) and take into account the orientations of the research contents (vectors: indicatING).

The failure of universal indicators of interdisciplinarity

In the last decade there have been multiple attempts to come up with universal indicators based on bibliometric data. They include:

  • a 2010 study commissioned by the US National Science Foundation from SRI International (a non-profit research institute) which concluded that it was premature “to identify one or a small set of indicators or measures of interdisciplinary research… in part, because of a lack of understanding of how current attempts to measure conform to the actual process and practice of interdisciplinary research” (National Science Board, 2010, p. 5-35).
  • two independent reports commissioned by the UK research councils in 2015 to compare the overall degree of interdisciplinarity of countries:
    • One (Elsevier, 2015) produced the unforeseen result that China and Brazil were more interdisciplinary than the UK or the US – which I interpret as an artefact of unconventional (rather than interdisciplinary) citation patterns of ‘emergent’ countries.
    • The other (Adams et al., 2016), with a multiple-methods approach, was interestingly titled: ‘Do We Know What We Are Measuring?’ and concluded that: “…choice of data, methodology and indicators can produce seriously inconsistent results, despite a common set of disciplines and countries. This raises questions about how interdisciplinarity is identified and assessed. It also reveals a disconnect between the research metadata that analysts typically use and the research activity they assume they have analysed. The report highlights issues around the responsible use of ‘metrics’ and the importance of analysts clarifying the link between indicators and policy targets.”
  • A quantitative literature review (Wang and Schneider 2020) “corroborate[d] recent claims that the current measurements of interdisciplinarity in science studies are both confusing and unsatisfying” and thus “question[ed] the validity of current measures and argue[d] that we do not need more of the same, but rather something different in order to be able to measure the multidimensional and complex construct of interdisciplinarity.” They also produced a heatmap depicting the correlation across a battery of measures of interdisciplinarity, which showed that there are many measures that are not in agreement.
  • A broader review of evaluations of interdisciplinarity by Laursen and colleagues (2020) also found a striking variety of approaches (and indicators) depending on the contexts, purposes and criteria of the assessment. They highlighted a lack of “rigorous evaluative reasoning”, i.e., insufficient clarity on how the criteria behind indicators relate to the intended goals of interdisciplinarity.
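The disagreement among measures that Wang and Schneider (2020) document is easy to reproduce. As a minimal illustrative sketch (the disciplinary profiles and the distance matrix below are invented for illustration, not taken from any of the studies above), consider two common operationalisations applied to a unit's distribution of publications over disciplines: Shannon entropy of the disciplinary mix, and Rao-Stirling diversity, which additionally weights the cognitive distance between disciplines:

```python
import numpy as np

def shannon_entropy(p):
    """Shannon entropy of a disciplinary profile p (proportions summing to 1)."""
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

def rao_stirling(p, dist):
    """Rao-Stirling diversity: sum over i, j of p_i * p_j * d_ij,
    where d_ij is the cognitive distance between disciplines (d_ii = 0)."""
    return float(p @ dist @ p)

# Hypothetical distance matrix for three disciplines: 0 and 1 are
# cognitively close (0.1); both are distant from discipline 2 (0.9).
dist = np.array([[0.0, 0.1, 0.9],
                 [0.1, 0.0, 0.9],
                 [0.9, 0.9, 0.0]])

unit_a = np.array([0.5, 0.5, 0.0])  # publications span two close disciplines
unit_b = np.array([0.5, 0.0, 0.5])  # publications span two distant disciplines

# Identical entropy (~0.693 for both), very different Rao-Stirling
# diversity (0.05 vs 0.45): the two measures rank the units differently.
print(shannon_entropy(unit_a), shannon_entropy(unit_b))
print(rao_stirling(unit_a, dist), rao_stirling(unit_b, dist))
```

Which ranking is “right” depends on whether cognitive distance matters for the purpose at hand, which is exactly the kind of contextual judgement that universal indicators gloss over.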

These critiques do not mean that one should disregard or mistrust the many studies of interdisciplinarity that use indicators in sensible and useful ways. Rather, they point out that the methods are not stable or robust enough, or that they illuminate only a particular aspect. Such studies are therefore valuable, but only for specific contexts or purposes.

In summary, the failed policy reports and the findings of scholarly reviews suggest that universal indicators of interdisciplinarity cannot be meaningfully developed and that, instead, we should switch to radically different analytical approaches. These results are rather humbling for people like myself who worked on methods for ‘measuring’ interdisciplinarity for many years. Yet they are consistent with critiques of conventional scientometrics and efforts towards methods for ‘opening up’ evaluation, as discussed, for example, in “indicators in the wild” (Rafols, 2019).

From indicators to indicating interdisciplinarity

Does it make sense, then, to try to assess the degree of interdisciplinarity? Yes, it may make sense in so far as the evaluators or policy makers are specific about the purpose, the contexts and the particular understandings of interdisciplinarity that are meaningful in a given project. This means stepping out of the traditional statistical comfort zone and interacting with relevant stakeholders (scientists and knowledge users) about what type of knowledge combinations make valuable contributions – acknowledging that actors may differ in their understandings.

Making a virtue out of necessity, Marres and De Rijcke (2020) highlight that the ambiguity and situated nature of interdisciplinarity allows for “interesting opportunities to redefine, reconstruct, or reinvent the use of indicators,” and propose a participatory, abductive, interactive approach to indicator development. In opening up the processes of measurement in this way, they bring about a leap in framing: from indicatORS (as closed outputs) to indicatING (as an open process).

Marres and De Rijcke’s (2020) proposal may not come as a surprise to project evaluators, who are used to choosing indicators only after situating the evaluation and choosing relevant frames and criteria – i.e., evaluators are in fact already used to indicatING. But this approach means that aggregated or averaged measures are unlikely to be meaningful.

The exciting next step is to develop the processes for indicating interdisciplinarity. How can we bring together stakeholder participation and scientometrics to point out where and how interdisciplinary research matters? I would be interested to hear your ideas for ways to do this.

To find out more:

This blog post draws on my presentation at the “Workshop on the implications of ‘convergence’ for how the National Center for Science and Engineering Statistics measures the science and engineering workforce” (https://www.nationalacademies.org/event/10-22-2020/a-workshop-on-the-implications-of-convergence-for-how-the-national-center-for-science-and-engineering-statistics-measures-the-science-and-engineering-workforce). It was organised by the US National Academies of Sciences, Engineering and Medicine in October 2020. It draws on another blog post: “On ‘measuring’ interdisciplinarity: from indicators to indicating” (https://leidenmadtrics.nl/articles/on-measuring-interdisciplinarity-from-indicators-to-indicating) which contains additional references.

References:

Adams, J., Loach, T. and Szomszor, M. (2016). Interdisciplinary research: methodologies for identification and assessment. Do we know what we are measuring? Digital Science. (Online – open access): https://mrc.ukri.org/documents/pdf/interdisciplinarity-research-commentary/

Elsevier. (2015). A review of the UK’s interdisciplinary research using a citation-based approach. Report to the United Kingdom HE funding bodies and MRC, Elsevier. (Online – open access): https://www.elsevier.com/research-intelligence/resource-library/a-review-of-the-uks-interdisciplinary-research-using-a-citation-based-approach

Laursen, B. K., Anderson, K. and Motzer, N. (2020). Systematic review of the literature assessing interdisciplinarity from 2000 to 2019. Interactive visualization, Version 0.2. Producers: D. Quentin and K., L. Hondula, National Socio-Environmental Synthesis Center (SESYNC), Annapolis, Maryland, United States of America. (Online – open access): https://shiny.sesync.org/apps/evaluation-sankey

Marres, N. and de Rijcke, S. (2020). From indicators to indicating interdisciplinarity: A participatory mapping methodology for research communities in-the-making. Quantitative Science Studies, 1, 3: 1041-1055. (Online – open access) (DOI): https://doi.org/10.1162/qss_a_00062

National Science Board. (2010). Science and Engineering Indicators 2010. National Science Foundation report NSB 10-01, Arlington, Virginia, United States of America (Online – open access): https://www.heri.ucla.edu/PDFs/NSB.pdf

Rafols, I. (2019). S&T indicators in the wild: Contextualization and participation for responsible metrics. Research Evaluation, 28, 1: 7-22. (Online) (DOI): https://doi.org/10.1093/reseval/rvy030

Wang, Q. and Schneider, J. W. (2020). Consistency and validity of interdisciplinarity measures. Quantitative Science Studies, 1, 1: 239-263. (Online – open access) (DOI): https://doi.org/10.1162/qss_a_00011

Biography: Ismael Rafols PhD is a senior researcher at the Centre for Science and Technology Studies (CWTS) at Leiden University in the Netherlands and associate faculty at the Science Policy Research Unit (SPRU) at the University of Sussex in the United Kingdom. He works on science policy, developing novel approaches to science and technology indicators, including measures of interdisciplinarity and mixed methods for informing evaluation, foresight and research strategies.

19 thoughts on “‘Measuring’ interdisciplinarity: from indicators to indicating”

  1. Colleagues, this is a very interesting discussion! However, it seems to me that behind the polite questions and answers there is considerable tension underlying the problem!

    I think a metaphor will help explain the tension. The situation is caused by the necessity of describing a black cat in a dark room, and by unsuccessful attempts to do so. However, if the light flashes, it may turn out that under the single image of “a black cat”, a small fluffy kitten and a huge Bengal tiger are hiding. The history of the development of interdisciplinarity and transdisciplinarity shows that the role of “the kitten” is assigned to interdisciplinary research that can effectively solve complex problems of science and technology. The role of “the Bengal tiger” is given to a real transdisciplinary approach that is designed to solve the wicked problems of nature and society.

    Recently, Roderick John Lawrence reminded us that wicked problems involve social aspects and are therefore often excluded from the set of scientific problems. In this case, the idea arises of forming two directions of interdisciplinarity: domestic (cats) and wild (tigers). This separation would allow us to form two independent contexts (two context spaces), in which the definitions of interdisciplinarity and transdisciplinarity would be fundamentally different.

    These differences will make it possible to form clear, well-founded and understandable definitions of interdisciplinarity and transdisciplinarity. As a result, such a division would dissolve the problem of identifying universal indicators. In each of these areas, it would be necessary to justify and form groups of local indicators. It may be necessary to define local indicator parameters for low-threshold and high-threshold interdisciplinary and transdisciplinary tools and methods in each such group, and also to determine the local parameters of the synthesis and integration of disciplinary knowledge (the first direction) and of their unification and generalization (the second direction).

    Are you interested in this idea?

    • Certainly interested in this idea! I like the metaphor of domestic cats (academia) and tigers (transdisciplinary spaces). I believe that there is a wide variety of contexts — there are also wild cats and circus tigers (even paper tigers!!). Therefore, there can be a large variety of local indicator parameters. Now, the extent to which they can be generalized (your second direction) is a question to be settled empirically — and I am rather sceptical about the possibilities in this direction, in spite of high policy pressures.

  2. Thanks for the great discussion, everyone. You might also be interested to know that Ismael Rafols has a follow-up blog post “Addressing societal challenges: From interdisciplinarity to research portfolios analysis” which will be published next week, on Thursday Feb 18th (Australian time, late Wednesday in much of the rest of the world).

  3. Hi Ismael, and all,

    Beyond the pleasure of seeing Ismael in print once again, I applaud the move toward including stakeholders as co-equals in conversations about interdisciplinarity. But as Ismael may remember, my thoughts generally jet off in another direction. For I see ontological issues preceding and to some degree precluding scientometrics. I wonder, then, whether our concern should be with interdisciplinarity at all. I raise this question for a number of reasons:

    -I assume the real question at issue is the relevance of knowledge to a wider-than-disciplinary audience. Interdisciplinarity, however, limits its concerns to the interactions of disciplinary communities within the walls of academia.

    -by my lights, the attempt to identify indicators of interdisciplinarity suffers shipwreck on the fact that the notion of a discipline is itself hopelessly vague, as disciplines are themselves the results of varied historical, political, and economic as well as epistemic factors, making attempts to apply metric analysis pretty well hopeless. The real issue is a rather simple one that metrics offers little help to: are questions of ethics, values, and policy brought into science-based discussions in a serious way?

    -finally, this problem with talking to stakeholders: teaching aside, academics understand their job as consisting in the production of new knowledge. Non-academics have a different priority: getting the job at hand done. In my experience, they have limited time or interest in acquiring additional knowledge unless it directly helps them with their task. Thus, whether the question concerns indicators or indicating, their concern will be with identifying information that helps them with their task rather than with interesting, arcane discussions.

    Best wishes, Bob

    • Hi Bob, likewise, a pleasure in engaging, even if in a distant exchange! I agree with you on the need not to take these ‘arcane discussions’ too seriously, and instead to shift efforts towards broader audiences on more directly relevant issues. This post was triggered by a workshop by the US National Center for Science and Engineering Statistics. Given policy pressures for indicators in a mode of justification, the hope is that these processes of indicating what is relevant in ‘interdisciplinary’ research might help create spaces in the directions you mention.

    • Excellent points, Bob, about the difficulties of including stakeholders. However, one of your arguments suggests that stakeholder perspectives/knowledge are less than relevant because interdisciplinarity “limits its concerns to the interactions of disciplinary communities within the walls of academia”. I disagree with your characterisation of “interdisciplinarity”, and my disagreement illustrates the challenges of conceptualisation that Laura Cruz-Castro and Luis Sanz-Menéndez identify as a barrier in their comment.

      Defining what we are trying to measure is part of the problem. While your argument that interdisciplinarity is limited to the collaboration of disciplines in academia might be technically or semantically valid, the term (“interdisciplinary” or “interdisciplinarity”) is often used as a “catch all” term that includes the kind of research that *does* include and assign essential value to stakeholder knowledge (often referred to as “trans-disciplinary”). As such, in those cases where stakeholder knowledge is part of the research process, it seems that including stakeholder knowledge (and criteria for success or quality) in the evaluation of that process would be relevant.

      Your final paragraph does a great job of highlighting the general important differences between academics and “non-academics”. However, some of the growth of interdisciplinary research in the “catch all” sense (including trans-disciplinary research that integrates and values non-academic knowledge) is trying to move away from “either/or” to “both/and” – to actually harness those differences to move in new directions that are productive for both academics and the real world (Ismael’s directionality and indicatING).

      It seems like you are saying that academic and non-academic orientations/aims are irreconcilable (“both/and” is impossible). I would suggest, however, that the characteristics of individual academics or non-academics are often more fluid, and that their aims, knowledge, and qualifications for evaluating inter/trans-disciplinary research can be adequately flexible, rigorous and robust to participate in joint or collaborative evaluation – within an innovative but systematic evaluation framework, as Ismael has prompted us to consider.

      Thank you for stimulating more great thought.

      • Hi Caryn,

        Let me clarify some of my cryptic and perhaps inartful comments.

        Sure, ID is commonly used as a catch-all for ID and TD. But color me suspicious of this fact. TD points to a move beyond solely academic actors, where one’s epistemic efforts are made in concert with one or another part of the public. Transdisciplinarity thus marks a political as well as an epistemic change, for it implies that academics are giving up some of both their authority and autonomy.

        The slippage, then, between the two terms is no accident, for it allows what sounds like a commitment to greater relevance to become just another occasion where academics produce knowledge of no particular use to the wider world. Such slippage shouldn’t be surprising: few people willingly give up either their authority or autonomy.

        The real issue is our (i.e., academics’) commitment to constantly producing new knowledge. We merrily go forward producing new knowledge–that’s what we know how to do, and love doing. But the greater world is interested in getting a job done. And often new knowledge, or knowledge at all, isn’t a help. An example: additional knowledge about climate science isn’t going to help us make the hard decisions that lie before us–all it does is provide an excuse for not making hard choices now.

        So it’s not stakeholders’ perspectives that are less relevant. It’s academics’.

        Best, Bob

        • Hi Bob,

          Thank you for that clarification – very helpful.

          To pull on one thread of your response… I agree that in moving into the domain of political change we (academics) are entering dangerous territory, especially if we presume to evaluate our work within that realm.

          While I still maintain that it is possible to develop evaluation protocols that allow for the context dependence of ID or TD research, your comments remind me to remain humble about the practical and political boundaries of “research” endeavours.

          To recapitulate crudely, we are providers of knowledge but not decision-makers or solvers of problems. Evaluation protocols that assume we have more power than we do or force a surrender of our legitimate expertise (authority and autonomy), will likely fail. As will those that mis-identify the problem our additional knowledge has the capacity to address (e.g., more climate change data doesn’t solve the problem of political inaction).

          Thank you for your time in responding, clarifying, and re-focusing my attention in an important direction.

          With respect,
          Caryn

  4. Hello, Ismael: I am excited you are bending your brilliant mind and years of experience to this contextual view of ID assessment. Near the end, you acknowledge project evaluators have taken this approach for years, and you conclude, “But this approach means that aggregated or averaged measures are unlikely to be meaningful.” What do you mean by this? It kind of sounds like a critique of the entire approach you are advocating. If so, what are your thoughts on overcoming that limitation?

    • Thanks Bethany — the argument is that, given the variety of practices related to interdisciplinarity, any specific operationalisation in terms of general indicators is going to miss important aspects — and this is very problematic because interdisciplinarity is conceptualised differently by various stakeholders. Therefore efforts to aggregate ‘measures of interdisciplinarity’ across projects may be meaningful, but only in a narrow sense: from a very particular vantage point. The key aspect I advocate in this post, following Noortje Marres and Sarah de Rijcke’s proposal, is to be explicit about the context and to focus on the process of measuring each time.

        • If you aggregated many ‘indicating’ processes, you might find that some indicators are shared across some contexts and perspectives. But I am sceptical that you could then assume that these indicators can be generalised, because even within these contexts some stakeholders may disagree about what types of knowledge are relevant. I agree with you, though, that this is something worth exploring empirically — i.e., that it is worth comparing a variety of indicating processes.

            • To be honest, this is beyond the current scope; I see this possibility as rather distant and thus have only vague ideas regarding comparisons. What I have in mind would be to take each “indicating process” as a case study, and then compare a variety of case studies. This is what the ASIRPA team at INRAE (French Agriculture Institute) did in order to get an understanding, and some indicators, of processes leading to societal impact. They started with a few cases and progressively added more. See: https://www6.inrae.fr/asirpa_eng/ASIRPA-project/ASIRPA-s-approach

              • I agree, Ismael, that this is a fertile area for further research and thanks for the reference to the work of the French Agriculture Institute.

                One of the challenges is that we do not have an agreed way of writing up the methods and processes used in interdisciplinary research. I have been working on a framework for this, as my colleague Caryn Anderson kindly pointed out in her comment. I’ve been pleased to see it used by Melissa Robson-Williams and colleagues in New Zealand to analyse their own case studies and that they found benefits in the framework for revealing blind-spots.

                Combining case study analysis and using this to test the Integration and Implementation Sciences (i2S) and other frameworks may be a productive way to proceed.

                References:
                The ‘methods section’ in research publications on complex problems – Purpose by Gabriele Bammer
                https://i2insights.org/2016/10/11/methods-section-purpose/

                Bammer, G. (2013). Disciplining Interdisciplinarity: Integration and Implementation Sciences for Researching Complex Real-World Problems. ANU Press. (Online – open access): http://dx.doi.org/10.22459/DI.01.2013

                Robson-Williams, M., Small, B. and Robson-Williams, R. (2020). Designing transdisciplinary projects for collaborative policy-making: The Integration and Implementation Sciences framework as a tool for reflection. GAIA, 29, 3: 170-175. (Online – open access): https://doi.org/10.14512/gaia.29.3.7

  5. Thank you for bringing to our attention the issue of measuring interdisciplinarity and for suggesting approaches to the development of measures that consider contextualization and directionality. In our own words: taking into account the purpose of developing indicators and subordinating the activity to conceptualization.

    As we all know, to talk about indicators is to refer to measures of the surrounding reality; but rather often, those who propose measures do not make any theoretical connections explicit and assume a true correspondence between the indicators and the real world. In our view, such missing connections and taken-for-granted assumptions should be challenged.

    As “non-experts” in the world of S&T indicators, but with some past explorations applied to the issues of interdisciplinarity (Sanz-Menéndez et al. 2001) and other domains such as preferences for evaluation systems (Sanz-Menéndez & Cruz-Castro 2019) or taxonomies of organizations (Cruz-Castro et al. 2020), we would like to make a few comments and reflections:

    We are aware that measurement has two main potential instances (Cartwright & Runhardt 2014): “assigning a number to an individual unit” (e.g., citations to a country or institution) and “assigning the unit to a specific category” (e.g., characterizing an R&D project proposal as interdisciplinary).

    But in both instances, we find that many of the “problems” or “issues” with previous work relate to the absence of a balanced and interactive relation between the indicator (Merriam-Webster dictionary: “an index hand” or a “pointer”) and the concept that is (or should be) behind it. An indicator is an indicator of something, and that something is always a concept.

    To give more weight to conceptualization, we probably need to revisit some classics and recall, for instance, Paul Lazarsfeld’s (1951, 1962) recommendation that, before moving to the empirical analysis, we need to conceptualize (in your text you use the term “notion”, which somehow seems to downgrade the role of conceptualization). From this perspective, indicators and measurement are just the final phase of a process that requires careful consideration and development of the concepts (see, for example, Goertz 2020).

    In our opinion, what is needed is a more careful, prior consideration of the conceptualization of interdisciplinarity (and categorization) in connection with the measurement approaches and indicator-building processes. In more methodological language, we could think about “the necessary and sufficient conditions” of the concept of interdisciplinarity: which features need to be present to identify a research object as interdisciplinary, and is there any sufficient attribute?

    Let us summarize some arguments, first more practical and then more theoretically oriented, that may add to the discussion. On practical grounds, we recommend revisiting some previous work (for example, NRC 2011; Cartwright & Bradburn 2011) that suggests, more or less, the following steps:

    The starting point is always the selection of a concept; but not all concepts refer to specific features or are single-valued functions (e.g., age); the majority are “Ballungen” (congestion) concepts (Neurath/Wittgenstein) with loose criteria and fuzzy boundaries (Ragin 2000; Hannan 2010). There are absolute concepts and relative ones; we have the impression that “interdisciplinarity” or “research quality” are (at least until now) not “universal” concepts (here we converge with your view).

    Second is the selection of the unit of observation or analysis. In your case of interest, the “project proposal”. Here the issue of the contextualization you suggest is pertinent.

    Third is the practical step (mentioned above) that indicators’ developers always face, namely, the selection of the properties to include in the concept in order to operationalize it.

    Fourth is the determination of the “threshold level” and the decision of how to adjust the level (of interdisciplinarity) across units, times and locations.

    Fifth, and as a final consideration, from the theory of measurement we know that the procedures (the rules for applying the metrical system to produce the measurement results) need to be consistent with the definition of the concept (ontology). In most cases, the procedures are context-specific and have a purpose. Taking into account the “purpose” (the audience) of the indicator is essential, because the relation between concepts and measurement changes depending on the role of “indicating”, among others: “descriptive”, “explanatory” (either as independent or dependent variables), “case selection and scope decision” for causal inference, or a “normative” role in the allocation of resources.
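    As a toy illustration of how the “fuzzy boundaries” of the first step and the “threshold level” of the fourth step might be handled together, here is a minimal sketch in the spirit of Ragin’s fuzzy sets. The membership function, the 0.3 midpoint and the choice of “share of references outside the main discipline” as the underlying property are all invented for illustration, not taken from any of the works cited:

```python
import math

def fuzzy_membership(share_outside, midpoint=0.3, steepness=15.0):
    """Graded (fuzzy-set) membership in the category 'interdisciplinary',
    based on the share of a project's references outside its main discipline.
    Returns a value in [0, 1] instead of a crisp yes/no label; 'midpoint'
    plays the role of the threshold level, softened by 'steepness'."""
    return 1.0 / (1.0 + math.exp(-steepness * (share_outside - midpoint)))

# A crisp threshold at 0.3 would sort these projects into two boxes;
# the fuzzy score instead preserves the gradation near the boundary.
for share in (0.05, 0.25, 0.30, 0.35, 0.60):
    print(f"share outside = {share:.2f} -> membership {fuzzy_membership(share):.2f}")
```

    The point of the sketch is only that the threshold decision and the fuzziness of the concept are design choices that must be argued for in each context, not read off the data.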

    In sum, concepts (e.g., interdisciplinarity, quality…) are used by people to classify the entities, objects and situations that they encounter in their social life; concepts can be regarded as expectations of what kind of properties an arbitrary object that we encounter will have. By category we understand “a set of objects that have been recognized as fitting a concept” (Hannan et al. 2019). Categorizing means assessing objects (which could be individuals, attributes, situations, etc.) in terms of their attributes. When we categorize individuals, objects, situations (or research projects, our addition), we identify objects and provide them with meaning.

    Implicit in your blog post, and in much of the policy discourse of the last decades, we see the idea that interdisciplinarity is a value. Indeed, this may be the case; but, from the point of view of the evaluation process, it is not always advantageous for a research proposal to fall into the category of interdisciplinary. This is precisely because human cognition (reviewers’ too) works with concepts and categories (for instance, disciplines), and the less an object fits with more or less established categories, the more demanding the evaluation of that object becomes, and the less likely it is that evaluators can rely on heuristics or cognitive shortcuts. We believe that this at least deserves some reflection.

    Finally, just three arguments to consider. First, most concepts (interdisciplinarity too) are socially constructed. Second, concepts are not universally shared; individuals or groups may diverge about their meaning. Third, most concepts (especially in our activity, which is socially constructed) are value-laden, and they have important consequences (remember, too, the performative role of measuring). Hannan and colleagues (2019) have suggested a research program to deal with these three aspects, based on recent advances in cognitive science, computational linguistics and Bayesian statistical approaches.

    Best regards.

    Laura Cruz-Castro and Luis Sanz-Menéndez, CSIC, Madrid

    References

    Cartwright, Nancy, and Norman M. Bradburn. 2011. “A Theory of Measurement.” in The Importance of Common Metrics for Advancing Social Science Theory and Research: A Workshop Summary, edited by National Research Council. Complete version at: https://dro.dur.ac.uk/18310/

    Cartwright, Nancy, and Rosa Runhardt. 2014. “Measurement.” Pp. 265–87 in Philosophy of Social Science: A New Introduction, edited by N. Cartwright and E. Montuschi. Oxford University Press.

    Cruz-Castro, Laura, Catalina Martínez, Cristina Peñasco, and Luis Sanz-Menéndez. 2020. “The Classification of Public Research Organizations: Taxonomical Explorations.” Research Evaluation. doi: 10.1093/reseval/rvaa013.

    Goertz, Gary. 2020. Social Science Concepts and Measurement: New and Completely Revised Edition. Princeton, NJ: Princeton University Press.

    Hannan, Michael T. 2010. “Partiality of Memberships in Categories and Audiences.” Annual Review of Sociology 36(1):159–81. doi: 10.1146/annurev-soc-021610-092336.

    Hannan, Michael T., Gael Le Mens, Greta Hsu, Balazs Kovacs, Giacomo Negro, Laszlo Polos, Elizabeth Pontikes, and Amanda J. Sharkey. 2019. Concepts and Categories: Foundations for Sociological and Cultural Analysis. New York: Columbia University Press.

    Lazarsfeld, Paul F. 1962. “Philosophy of Science and Empirical Social Research.” Pp. 463–73 in Logic, Methodology and Philosophy of Science Proceedings of the 1960 International Congress, edited by E. Nagel, P. Suppes, and A. Tarski. Stanford University Press.

    Lazarsfeld, Paul F., and Allen H. Barton. 1951. “Qualitative Measurement in the Social Sciences: Classification, Typologies and Indices.” Pp. 155–92 in The policy sciences: recent developments in scope and method, edited by D. Lerner and H. D. Lasswell. Stanford University Press.

    National Research Council. 2011. The Importance of Common Metrics for Advancing Social Science Theory and Research: A Workshop Summary. Washington, DC: National Academies Press. https://doi.org/10.17226/13034

    Ragin, Charles C. 2000. Fuzzy-Set Social Science. Chicago: University of Chicago Press.

    Sanz-Menéndez, Luis, and Laura Cruz-Castro. 2019. “University Academics’ Preferences for Hiring and Promotion Systems.” European Journal of Higher Education 9(2):153–71. doi: 10.1080/21568235.2018.1515029.

    Sanz-Menéndez, Luis, María Bordons, and M. Angeles Zulueta. 2001. “Interdisciplinarity as a Multidimensional Concept: Its Measure in Three Different Research Areas.” Research Evaluation 10(1):47–58. doi: 10.3152/147154401781777123.

    • Thanks Laura and Luis! Let me share that your 2001 paper with Bordons and Zulueta, developing interdisciplinarity as a multidimensional concept, has been a great source of inspiration. Regarding your comment, I appreciate your way of structuring the development of indicators in five steps. Now, the kernel of what I intended to convey is in your last paragraph. Yes, indicators are socially constructed; yes, different stakeholders view them differently; and yes, they are value-laden and performative. Precisely because indicators are constructed on the basis of particular perspectives, values and contexts, Marres and De Rijcke (2020) propose shifting attention to the process of constructing the indicators, making explicit the specific views and values mobilised, as a participatory process. This move builds on ‘Science and Technology Studies’ work on the pluralisation of expertise (e.g. Stirling on ‘opening up’: https://doi.org/10.1177/0162243907311265).

  6. Thank you, Dr. Rafols, for raising the important issue of measurement and for prompting us to try to think in new and more effective ways about evaluating interdisciplinary research projects.

    The idea of directionality and context-based “indicating” makes so much sense. These are characteristics of interdisciplinary research generally. Learning to “do” good interdisciplinary research often involves getting researchers to re-orient their brains from searching for universal, immutable, free-standing “truths” to exploring unique combinations of “truths” to achieve a goal or purpose.

    Your proposal opens many avenues for thought, and I have a “train wreck in my head” of all the different ways you made me think at once. My first thought was of the framework for describing knowledge integration presented by Gabriele Bammer in various forms since 2005, emerging from a symposium on knowledge integration in natural resource management, sponsored by Land & Water Australia (see references below for detail).

    The 6-question integration framework is simple and resembles the “5 Ws (+H)” questions from journalism:

    1. What is the integration aiming to achieve and who is intended to benefit?
    2. What is being integrated?
    3. Who is doing the integration?
    4. How is the integration being undertaken?
    5. What is the context for the integration?
    6. What is the outcome of the integration?

    These questions are oriented towards describing integration (not evaluating it), but it seems to me that evaluation of inherently context-dependent interdisciplinary research needs to start by clarifying the description, so that a two-part evaluation can then be conducted based on the way the project was framed in the first place (the context) rather than on “universal” standards. Thus I see a 3-part evaluation process:

    A. Establishing the context clearly (which the integration framework questions do),
    B. Evaluating whether the contextual boundaries established by each question are/were appropriate, legitimate, and/or defensible, and
    C. Evaluating the degree to which the project achieved the intent/goals inherent in each question (including accommodation for unanticipated obstacles beyond researcher control).

    Each of the two evaluation steps (B and C) would need its own standards of evaluation, of course. Within the smaller scales of parts B and C it would be easier to include more traditional “scalar” measures. In the aggregate, however, by measuring a project against its own goals (Part A) rather than universal goals, the evaluation could be both systematic and respectful of the unique internal logic of each project – providing “vectors” of directionality and indicating.

    Just one train of thought from the many you inspired. I look forward to continuing this conversation and hearing more ideas from my brilliant and creative colleagues in this i2insights network!

    Cheers,
    Caryn

    REFERENCES

    Bammer, G. (2005) Guiding Principles for Integration in Natural Resource Management (NRM) as a Contribution to Sustainability. Australasian Journal of Environmental Management, 12:sup1, 5-7, DOI: 10.1080/14486563.2005.9725099 (https://www.tandfonline.com/doi/abs/10.1080/14486563.2005.9725099)
    * first publication of the framework (see section 3. Approaches to integration in NRM)

    Bammer, G. (2006) A systematic approach to integration in research. Integration Insights #1, September. (https://i2s.anu.edu.au/wp-content/uploads/2009/10/integration-insight_1-1.pdf)
    * outline of systematic approach, in general

    Bammer, G. (2006) Illustrating a systematic approach to explain integration in research – the case of the World Commission on Dams. Integration Insights #2, October. (https://i2s.anu.edu.au/wp-content/uploads/2009/10/integration-insight_2-1.pdf)
    * example of how to apply the framework/approach

    • Many thanks, Caryn. Your questions are indeed very helpful for thinking about the relevant issues and dimensions to be taken into account when creating indicators (i.e. indicatING as a process). Your framework would be helpful for guiding this ‘indicating’; I hope indeed that a few of us have the chance to work together in these directions.

