Addressing societal challenges: From interdisciplinarity to research portfolios analysis

By Ismael Rafols


How can knowledge integration for addressing societal challenges be mapped, ‘measured’ and assessed?

In this blog post I argue that measuring averages or aggregates of ‘interdisciplinarity’ is not sufficiently focused for evaluating research aimed at societal contributions. Instead, one should take a portfolio approach to analyze knowledge integration as a systemic process over research landscapes; in particular, focusing on the directions, diversity and synergies of research trajectories.

There are two main reasons:

1. Since knowledge integration for societal challenges is a systemic and dynamic process, we need broad and plural perspectives; we should therefore use a battery of analytical tools, as developed for example in research portfolio analysis, rather than a narrow focus on interdisciplinarity.

2. While interdisciplinarity is an important (though not the only) relevant concept in knowledge integration, it is too ambiguous, diverse and contextual to be captured by traditional indicators, as discussed in my previous blog post ‘Measuring’ interdisciplinarity: from indicators to indicating.

Fostering plural innovation pathways in the face of uncertainty and ambiguity

It has long been argued that addressing societal challenges, such as climate change or COVID-19, benefits from the combination of disparate types of knowledge. Societal challenges are ‘wicked’ problems, in the sense that the framings of both the problems and the solutions are complex, disputed and uncertain. Under these conditions of ambiguity and uncertainty, research contributions are likely to come from combinations of diverse types of knowledge (or ways of knowing) – but it is also important to have a plurality of research trajectories, each of them made of different epistemic combinations. In other words, we do not know, or even agree on, in advance which types of expertise are appropriate to tackle a given problem.

Therefore, rather than aiming at fostering a particular ‘melting pot’ of disciplines, research systems should produce a high number of disparate research trajectories – knowing that only some of them will ever be technically successful.

Moreover, different research and innovation pathways are not equally desirable from a public value perspective – directionality matters. Some solutions are more socially preferable than others, depending on their effects on public goods such as equity or environmental sustainability. This means that public investment, while keeping a diverse portfolio of research strategies, should favour those which are perceived as more socially robust and are relatively underfunded by the private sector.

From ‘measuring’ interdisciplinarity to multi-level mapping of knowledge integration

Measurement approaches for research aiming to address societal challenges should reflect this turn towards a systemic perspective on knowledge integration.

‘Solutions’ to societal challenges will not emanate from 1,000 labs with the same combination of disciplines, but from labs of various epistemic combinations and social embeddings.

Therefore, measurement should not focus on an average degree of interdisciplinarity. Instead, it should focus on mapping the directions and diversity of research approaches. To do this, we need statistical descriptions of the vectors and distributions of research trajectories over knowledge landscapes. A framing in terms of research portfolios can help conduct this type of analysis.

Portfolio analysis: exploring directions, diversity and synergies

In a nutshell, the key idea is that, for a given societal issue, the contribution of research should be explored by mapping the relevant types of knowledge over a research landscape. The portfolio or repertoire of a given laboratory, university or territory can then be visualised by projecting (overlaying) their activities onto this research landscape, as illustrated in the figure below for ‘rice research.’

Figure: Comparison of the focus of rice research in India and the US (2000-12). Red areas indicate a high density of publications. From Ciarli and Rafols (2019).
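
To make the overlay idea more concrete, here is a minimal sketch in Python. The research areas, publication counts and unit names are invented placeholders (they are not the data behind the figure above); the point is simply that each unit's activity is normalised into shares over a common set of areas, which can then be compared or drawn as an overlay.

```python
# Minimal sketch (not the author's code): overlaying research portfolios
# on a shared landscape of research areas. All names and counts below are
# illustrative placeholders, not real data.
from collections import Counter

# Hypothetical publication counts per research area for two portfolios.
portfolios = {
    "Unit A": Counter({"agronomy": 420, "plant breeding": 310,
                       "molecular biology": 150, "food science": 90}),
    "Unit B": Counter({"molecular biology": 380, "genomics": 290,
                       "plant breeding": 160, "agronomy": 70}),
}

# The shared landscape is simply the union of areas across portfolios.
landscape = sorted({area for counts in portfolios.values() for area in counts})

def activity_shares(counts):
    """Normalise raw counts so each portfolio sums to 1 (the 'overlay')."""
    total = sum(counts.values())
    return {area: counts.get(area, 0) / total for area in landscape}

for unit, counts in portfolios.items():
    shares = activity_shares(counts)
    dominant = max(shares, key=shares.get)
    print(f"{unit}: dominant area = {dominant} ({shares[dominant]:.0%} of output)")
    for area in landscape:
        print(f"  {area:<18} {shares[area]:.2f}")
```

In practice the landscape would come from a classification or clustering of publications (for example, a global map of science), and the overlays would be drawn on that map rather than printed as tables; the sketch only shows the normalisation step that makes portfolios comparable.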

Three benefits of a portfolio approach are that it:

  • provides information on the main directions that the research on a given topic is taking.
  • highlights the diversity of research efforts, i.e., whether investments are heavily concentrated in a few areas, or distributed across a variety of fields (one way of quantifying such diversity is sketched after this list). In the face of uncertainty and contested views of preferred innovation pathways, one would expect a variety of pathways to be supported. This way, bets are hedged against unexpected scientific results or social reactions to certain approaches.
  • helps think about the synergies, or lack thereof, across research pathways by analysing the interrelations between innovation areas.
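
As a rough illustration of the diversity point above, the sketch below unpacks a portfolio's diversity into variety, balance and distance-weighted spread, in the spirit of Rao-Stirling-style diversity measures often used in this literature. The shares, areas and distances are made-up numbers; this is one plausible operationalisation, not the author's prescribed method.

```python
# Minimal sketch with invented numbers: variety, balance and a
# Rao-Stirling-style diversity score for a research portfolio.
import math

# Hypothetical funding shares across four research areas (sum to 1).
shares = {"A": 0.55, "B": 0.25, "C": 0.15, "D": 0.05}

# Hypothetical cognitive distances between areas (0 = identical, 1 = unrelated).
distance = {
    ("A", "B"): 0.2, ("A", "C"): 0.7, ("A", "D"): 0.9,
    ("B", "C"): 0.6, ("B", "D"): 0.8, ("C", "D"): 0.3,
}

# Variety: how many areas receive any support at all.
variety = sum(1 for p in shares.values() if p > 0)

# Balance: normalised Shannon entropy (1 = perfectly even spread).
entropy = -sum(p * math.log(p) for p in shares.values() if p > 0)
balance = entropy / math.log(variety)

# Distance-weighted diversity: sum of p_i * p_j * d_ij over unordered pairs
# (conventions differ on whether each pair is counted once or twice).
rao_stirling = sum(shares[i] * shares[j] * d for (i, j), d in distance.items())

print(f"variety = {variety}")
print(f"balance = {balance:.2f}")
print(f"distance-weighted diversity = {rao_stirling:.3f}")
```

Which distance matrix and which weighting to use are themselves contestable choices, which is one reason this post argues for rich, plural descriptions of portfolios rather than a single aggregate score.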

In summary, since societal contributions are multifaceted, the analysis of research for societal challenges needs to adopt systemic perspectives, and thus take multidimensional forms such as those conveyed by maps and networks.

Research portfolio analysis offers a battery of tools for conducting plural analyses. While interdisciplinary or transdisciplinary research is paramount at certain points, it is not required across the whole landscape. Therefore, rather than aggregates or averages, we need rich descriptions of knowledge landscapes, including the directions, diversity and synergies of research trajectories.

How would a portfolio approach work for the societal issue that you address? Who are the key stakeholders and their social networks? What is the landscape of relevant types of knowledge? Where and how do diverse forms of knowledge interact and become integrated? What are the dominant and the marginalised trajectories?

To find out more:

This blog post draws on my presentation at the ‘Workshop on the implications of “convergence” for how the National Center for Science and Engineering Statistics measures the science and engineering workforce’ (https://www.nationalacademies.org/event/10-22-2020/a-workshop-on-the-implications-of-convergence-for-how-the-national-center-for-science-and-engineering-statistics-measures-the-science-and-engineering-workforce). It was organised by the US National Academies of Sciences, Engineering and Medicine in October 2020. It is adapted from “Knowledge integration for societal challenges: from interdisciplinarity to research portfolio analysis” (https://leidenmadtrics.nl/articles/knowledge-integration-for-societal-challenges-from-interdisciplinarity-to-research-portfolio-analysis), which also contains additional references.

Reference:
Ciarli, T. and Rafols, I. (2019). The relation between research priorities and societal demands: The case of rice. Research Policy, 48(4): 949-967.

Biography: Ismael Rafols PhD is a senior researcher at the Centre for Science and Technology Studies (CWTS) at Leiden University in the Netherlands and associate faculty at the Science Policy Research Unit (SPRU) at the University of Sussex in the United Kingdom. He works on science policy, developing novel approaches to science and technology indicators, including interdisciplinarity, and using mixed methods to inform evaluation, foresight and research strategies.

8 thoughts on “Addressing societal challenges: From interdisciplinarity to research portfolios analysis”

  1. Ismael Rafols:
    Your post triggered my interest because I am working on what I think are related problems: how to ‘measure’ the merit of contributions to the planning discourse about public policies or solutions to issues that call for decisions or agreements. I am starting from the question of measuring the “weight” of arguments, such as the ‘pro and con’ arguments in that discourse, so as to connect the eventual decisions more transparently with those contributions (which traditional decision tools such as majority voting do not do well).

    Coming from architecture, I have not encountered the tools of ‘portfolio analysis’, and I am wondering how this tool would contribute to the ‘toolkit’ I envision for the design of a ‘planning discourse support platform’ that I am working on. Is there a brief way you might enlighten me about how portfolio analysis would connect to the development and evaluation of planning policies? I make a distinction between ‘evaluation criteria’ that explain the merit of, say, a proposed plan as assessed by individual participants in the discourse, and ‘decision criteria’ that influence or guide the acceptance or rejection decision by the actual decision-maker(s); these may be statistics of individual evaluation judgments (aggregated in different ways and pointing to potentially very different decisions). I hope this makes sense enough for you to briefly explain the role of portfolio analysis in that process?

    Thorbjoern Mann

    • Thanks for your interest. We use the notion of ‘portfolio analysis’ very loosely in this context. The idea in a nutshell is that a given S&T programme (e.g. on mental health) will support 20-30 different projects. You may then want to pick projects based on different approaches: some more psycho-social, some more educational, some more pharmacological. For good implementation, all may well be transdisciplinary, but the potential key contribution is different in each case. You could scale this approach up when looking at a panel of programmes as well.

      I am not sure how this would translate into planning policies. I imagine that you may have different types of planning interventions. Given the uncertainty about which ones may work, you may want to try some combinations that work well together, but not others. The underlying idea in portfolios is that, in the face of uncertainty, it is best to try different things.

      Let me share some references. Linton and Vonortas (2015) (https://cyberleninka.ru/article/n/from-research-project-to-research-portfolio-meeting-scale-and-complexity/viewer) and Wallace and Rafols (2015) made general reviews on portfolios in S&T policy (https://doi.org/10.1007/s11024-015-9271-8). A shorter summary is available in Vonortas and Rafols (2019) (https://repository.fteval.at/423/).

      Linquiti, P. D. (2015). The Public Sector R&D Enterprise: A New Approach to Portfolio Evaluation, Palgrave-Macmillan (https://doi.org/10.1057/9781137542090) offers a more concrete approach for cases where you can estimate the valuation of outcomes, i.e. when there is less uncertainty.

      I hope this helps!

      • Thank you for the reply. I get the sense that Portfolio Evaluation is more of an enhanced description tool than the kind of merit assessment evaluation I have in mind, but it looks very useful for a different purpose. (I have not had the time to explore the links yet).

  2. Ismael, thank you for advocating for a multi-method, multi-measure approach to evaluating knowledge integration. Many evaluators have long advocated for this approach to complex evaluands (e.g., Davidson 2013). Your emphasis on ‘trajectories’ also takes that multidimensionality one step further into the dimension of time. I didn’t see mention of longitudinal mapping, monitoring, & evaluation in your post, but is that what you meant? Can you point us to any exemplars?

    I really appreciate your point that interdisciplinarity (ID) is only one aspect of knowledge integration; the INTEREACH community constantly reminds me that there are many boundaries to span besides disciplines, AND there are additional, important dynamics beyond boundary spanning. Much work ahead to articulate those (hopefully through a participatory process of indicatING, as you advocated previously).

    I heartily agree that a research portfolio-of-indicators would be an improvement, but I wouldn’t want us to walk away from your post believing that we have to start from scratch, or that the only or even the main way people have been evaluating ID has been the scalar, averaging indicator approach you critique. I have some data I can share about what approaches have actually been taken in the last 20 years to evaluate ID.

    As you know, my team created a dataset we presented at the measuring convergence workshop; thank you for citing and linking to it in your previous post. We weren’t able to report all of our findings in that short presentation (Laursen et al. 2020). Our detailed report is in a manuscript still under review. But I can share that the average study in our systematic review used 6.2 different measures of ID, with 64 studies using more than 5 measures. So there are lots of examples out there of a multi-measure approach. (But we don’t know how many were longitudinal).

    To see what general approaches have been taken in the literature, my team is doing a follow up cluster analysis on the dataset. We’ve found a total of 5 clusters, each representing a main ID evaluation approach. These are labeled in the interactive viz you linked previously (https://leadllc.shinyapps.io/evaluation-sankey/). These main approaches, which consider many variables about the approach and not just the number of measures used, do not correlate with whether an approach is multi-measure. Instead, what we see is that it is possible to use a multi-measure approach no matter what one’s overall approach is.

    That said, we found only one main approach that supports the level and kind of measurement diversity you are advocating for in a research portfolio approach: the second-most frequently used approach, which we call “Multidimensional evaluations of individual research.” That approach characteristically used RUBRICS to generate the multiple measures. Rubrics have several advantages for evaluating complex things like research integration, one of which is that they are an inherently mixed method that integrates qualitative and quantitative information in a transparent, coherent way (King et al 2013). Several other approaches used multiple measures, but only this one did so in a transparent, cohesive way that naturally accommodates both qualitative and quantitative indicators and that would straightforwardly support the kind of inferences you’re calling for (and I agree we need them): overall trajectories and character of knowledge integration efforts. Not to say rubrics are the only possible method, but they are a good option with examples already published (e.g., Boix Mansilla et al 2009; Carmichael & LaPierre 2014; Zhang & Shen 2015).

    I will add that there is a large community of scholars who anchor their approach in critical questions as a method, and this community has always maintained that a proper & helpful evaluation of ID will always need multiple critical questions to measure the multiple facets of ID, especially since this group tends to focus on ID processes and specifically integration (e.g., Lyall et al. 2011; Pohl et al. 2010; Strang & McLeish 2015; Tate et al. 2018). My team thinks this approach has lots of promise but has yet to become really helpful because it lacks clear instructions for how to aggregate one’s answers to all the critical questions to make a coherent judgment; the questions tend to be phrased as Yes/No questions that do not produce much information; and the questions almost exclusively ask for qualitative information, ignoring the quantitative aspects of knowledge integration. We think multi-measure QUANTitative approaches, such as multi-level bibliometric studies, also hold similar potential and weaknesses, but I didn’t want to go into that here because you’ve already addressed some of that in your post. I just wanted to lift up the hard work of many of our colleagues and to point out that we don’t have to start from scratch in pursuing a research portfolio-of-measures approach.
    ______
    Boix Mansilla, V., Duraisingh, E. D., Wolfe, C. R., & Haynes, C. (2009). Targeted assessment rubric: An empirically grounded rubric for interdisciplinary writing. The Journal of Higher Education, 80(3), 334-353. https://doi.org/10.1080/00221546.2009.11779016
    Carmichael, T., & LaPierre, Y. (2014). Interdisciplinary Learning Works: The Results of a Comprehensive Assessment of Students and Student Learning Outcomes in an Integrative Learning Community. Issues in Interdisciplinary Studies, 32, 53-78.
    Davidson, E. J. (2013). Actionable Evaluation Basics: Getting Succinct Answers to the Most Important Questions. Real Evaluation Ltd. http://www.amazon.com/gp/search?index=books&linkCode=qs&keywords=9781480102699
    King, J., McKegg, K., Oakden, J., & Wehipeihana, N. (2013). Evaluative rubrics: A method for surfacing values and improving the credibility of evaluation. Journal of MultiDisciplinary Evaluation, 9(21), 11–20.
    Laursen, B. K., Motzer, N., and Anderson, K. (2020, November). Pathways for assessing interdisciplinarity. Workshop on the Implications of Convergence for How NCSES Measures the Science and Engineering Workforce. National Academies of Sciences, Medicine, & Engineering. Virtual. PLEASE NOTE THIS IS A PDF DOWNLOAD OF 8.7MB: https://www.nationalacademies.org/event/10-22-2020/docs/D676F50F0EA435579894EA2C1B697F02C4FD2646574C
    Lyall, C., Tait, J., Meagher, L., Bruce, A., & Marsden, W. (2011). A short guide to evaluating interdisciplinary research. ISSTI Briefing note, (9). https://www.research.ed.ac.uk/en/publications/a-short-guide-to-evaluating-interdisciplinary-research
    Pohl, C., Perrig-Chiello, P., Butz, B., Hadorn, G. H., Joye, D., Lawrence, R., et al. (2010). Questions to evaluate inter- and transdisciplinary research proposals (pp. 1-23). Berne, Switzerland: td-net. https://api.swiss-academies.ch/site/assets/files/14856/td-net_pohl_et_al_2011_questions_to_evaluate_inter-_and_transdisciplinary_research_proposals.pdf
    Strang, V., & McLeish, T. (2015). Evaluating Interdisciplinary Research: a practical guide (pp. 1-21). Durham, NC: Durham University. https://www.dur.ac.uk/resources/ias/publications/StrangandMcLeish.EvaluatingInterdisciplinaryResearch.July2015.pdf
    Tate, E., Decker, V., & Just, C. (2018). Evaluating Collaborative Readiness for Interdisciplinary Flood Research. Risk Analysis, 9(1), 1–8. https://doi.org/10.1111/risa.13249
    Zhang, D., & Shen, J. (2015). Disciplinary foundations for solving interdisciplinary scientific problems. International Journal of Science Education, 37(15), 2555-2576. https://doi.org/10.1080/09500693.2015.1085658

    • Thanks for your thoughtful comment, Bethany. I am afraid there is some misunderstanding; sorry I didn’t manage to be sufficiently clear.

      Your comment is about ‘a multi-method, multi-measure approach to evaluating knowledge integration’, with a focus on the evaluation of a given project. I think that a battery of indicators is indeed the best way to approach the assessment of a particular transdisciplinary effort. This relates more closely to my blog last week: https://i2insights.org/2021/02/09/measuring-interdisciplinarity/. I agree that there is often good practice along these lines; I say in the blog that this is common practice for professional evaluators, though not among bibliometricians.

      But this post is not meant to be about the portfolio-of-indicators of a particular project; it is about how science policy at a larger scale (e.g., a large programme) addresses societal problems. I am afraid that I failed to be clear about this. The portfolio is about the variety of interdisciplinary projects that are pursued. I am told that programmes often aim at supporting a diversity of approaches or subproblems within a societal challenge. However, portfolio analysis often reveals that some research trajectories and innovation pathways receive most of the resources.

      Let me give examples of alternative research trajectories: (i) in mental health, pharmacological research seems to get more resources than research on psychosocial approaches more related to prevention; (ii) in rice research, in rich countries like the US research on transgenic seeds is more prominent than research on agricultural field techniques for improving yield, as in India; (iii) in obesity, research on the relation between obesity and other diseases receives more attention than research on the social determinants of obesity. (These examples are drawn from case studies we have done.)

      In all these cases, the different approaches can be transdisciplinary. A portfolio approach would help to bring to the discussion whether the distribution of resources across these approaches could be ‘better’ balanced.

      • Thanks for clarifying, Ismael. Do funders not conduct portfolio analyses already? Or are you saying they do (or might) but are not looking for the right thing, i.e., a balanced variety of funded projects?

        Glad I understood your point about multi-indicator evaluations. I didn’t realize from the initial post that you were mainly criticizing bibliometric approaches, or perhaps just a subset of people who use them in a scalar way. While you mentioned professional evaluators taking the recommended multi-indicator, contextual approach, I didn’t see mention of the other large group who does so: evaluation researchers who promote rubrics & critical questions. It sounded like you were describing the entire field (except professional evaluators) and the description simply wasn’t accurate. But if you were actually just describing the bibliometric portion of the field, or a common way bibliometrics are used, then your point is very important.

  3. Many thanks for this stimulating blog post. I am particularly intrigued by the statement: “‘Solutions’ to societal challenges will not emanate from 1,000 labs with the same combination of disciplines, but from labs of various epistemic combinations and social embeddings.” It is self-evident when you think about it, but I’ve never really stopped to think about it, especially its implications.

    One implication is that it acknowledges one challenge of complex problems, which is that different people frame any given problem differently. It therefore encourages those leading research to be creative in how they frame the problem, to look for new productive angles and to challenge themselves to go outside their comfort zones in who they invite to collaborate with them.

    A second implication is that we need to better deal with incommensurability between a range of different approaches to any given issue. Rather than seeing incommensurability as a negative and “all too hard,” the lack of intersection or fit between the findings of two or more projects can be a stimulus for further creativity. Darryn Reid’s blog post, scheduled for March 30, is a masterful exposition of this issue.

    Third, it’s a challenge to academic tribalism. We all want to belong, but if a tribe is too narrow in its thinking this can stymie the value of the research. One of the aims of this blog is to bring together the interdisciplinarians, transdisciplinarians, systems thinkers, action researchers, post-normal scientists, implementation scientists etc etc into a broader and more diverse tribe seeking to tackle complex problems. One of the aims of such a bigger tribe then also needs to be to provide peer support and reward for diversity rather than conformity.

    Finally, you have given me a new lens through which to view my own work. It seems to me that the frameworks I’ve been working on could provide a way to start documenting research portfolios; see, e.g., A checklist for documenting knowledge synthesis http://i2insights.org/2018/07/31/knowledge-synthesis-checklist/.

    I look forward to seeing how others react to the ideas you have presented.

    • Thanks Gabriele, your comments are very helpful in re-interpreting the bird’s-eye view / portfolio language used in the blog in terms of researchers conducting transdisciplinary projects: encouragement of diversity of framings, respect for the incommensurability of approaches (and thus humility!), and acceptance that not everybody can belong to an academic tribe, and that that is fine. Nice reflections!

