By Florian Keil, Melina Stein and Flurina Schneider.

Is artificial intelligence, a technology aggressively advertised as the ultimate cure-all, fundamentally incompatible with transdisciplinarity and its decades-old insight that the “wicked” problems of the real world do not lend themselves to one-dimensional solutions? Should transdisciplinary research outright reject a technology that is already undermining efforts to achieve social and environmental justice? Or can artificial intelligence actually support transdisciplinary research when used responsibly?
Using artificial intelligence in transdisciplinary research requires a critical mindset
We believe that, used responsibly, artificial intelligence can indeed support transdisciplinary research. We argue, however, that the decision to use it as a tool must be based on an acute awareness of the risks involved and on a critical mindset.
Such a critical mindset entails, first, a deconstruction of the term “artificial intelligence”. This involves realizing that it is not rigorously defined and has always been little more than a marketing slogan. A critical look at the term also considers the fact that it casually refers to a concept that is not yet well understood scientifically, namely intelligence. Furthermore, the term “artificial” is misleading, since, as Kate Crawford (2021) shows, artificial intelligence relies heavily on human labor and the exploitation of natural resources.
Second, a critical approach to artificial intelligence rejects the notion, prevalent in current discourse, of an artificial intelligence in the singular. It understands that what is currently referred to as “artificial intelligence” or “AI” is actually based on a rather mundane technology called machine learning, which has a wide range of applications. We focus on generative artificial intelligence (also known as GenAI), the technology behind today’s most popular artificial intelligence applications, which, when prompted, produces new content similar to its training data. Framed as a general-purpose technology, generative artificial intelligence is arguably driving expectations of a wave of innovation in science and research.
Third, critically using current artificial intelligence in transdisciplinary research requires an awareness of its fundamental limitations. Large language models, the figureheads of generative artificial intelligence, are not truth machines that understand language and reason their way through problems as humans do. Instead, large language models are highly advanced forms of autocomplete that often produce uncannily sophisticated, but sometimes also laughably flawed, output.
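To make the “advanced autocomplete” characterization concrete, here is a deliberately simple sketch in Python: a bigram model that predicts the next word from counts over a tiny corpus. Real large language models are neural networks trained on vastly more data, but the underlying task, predicting a plausible next token, is the same. The corpus and the function names below are ours, purely for illustration.

```python
# Toy "autocomplete": a bigram model that picks the next word based on
# how often it followed the current word in a tiny corpus. Real large
# language models perform the same basic job (predicting the next token)
# with neural networks and vastly more data.
import random
from collections import defaultdict

corpus = (
    "transdisciplinary research integrates knowledge . "
    "transdisciplinary research engages stakeholders . "
    "research integrates scientific and practical knowledge ."
).split()

# Count which words follow which.
follows = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current].append(nxt)

def autocomplete(word, length=5):
    """Extend a prompt one word at a time by sampling observed successors."""
    out = [word]
    for _ in range(length):
        candidates = follows.get(out[-1])
        if not candidates:
            break
        out.append(random.choice(candidates))
    return " ".join(out)

print(autocomplete("transdisciplinary"))
# e.g.: transdisciplinary research integrates knowledge . research
```

The sketch also hints at why output quality varies: such a model only reproduces statistical patterns in its training data, so prompts that stray from those patterns yield flawed continuations.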
Finally, a critical mindset adopts a human-centered approach to artificial intelligence. This means that artificial intelligence systems should augment or enhance human capabilities rather than replace them. It also implies that the output of generative artificial intelligence applications must be reviewed by competent humans before being used for public purposes. This perspective aligns well with Faye Miller’s recent inspiring i2Insights contribution, Five capacities for human–artificial intelligence collaboration in transdisciplinary research, about the skill sets transdisciplinary researchers need to develop to “orchestrate human–artificial intelligence collaboration.”
Which tasks in transdisciplinary research can be addressed with generative artificial intelligence?
Adopting a critical mindset means understanding which artificial intelligence tools can be used for which tasks in transdisciplinary research, how to use them in ways that leverage their potential while recognizing their inherent limitations, and how to identify and mitigate the risks they pose. Below, we highlight three areas where the use of generative artificial intelligence is particularly relevant to transdisciplinary research: knowledge integration, stakeholder participation, and science communication. For each area, we identify key transdisciplinary research tasks amenable to artificial intelligence.
AI for Knowledge Integration:
– Problem framing (problem identification, problem transformation, hypothesis generation)
– Design of knowledge integration processes
– Literature search
– Literature analysis
– Analysis of unstructured data (texts and images)
– Data integration

AI for Participation:
– Automated transcription
– Design of participation processes
– Engagement of stakeholders
– Communicating with stakeholders
– Visualization in participatory processes
– Citizen science

AI for Science Communication:
– Content generation (text generation, image generation, video generation, speech synthesis)
– Simplifying scientific language
– Knowledge access platforms

Key transdisciplinary tasks amenable to artificial intelligence (AI) (Source: Keil and Stein, 2025, p. 21)
Here we focus on one particular question: Can generative artificial intelligence help with knowledge integration? To answer this question, we distinguish between weak and strong knowledge integration. We define the latter as the cognitive process of combining different representations of knowledge about the world into a unified, coherent framework from which new insights can be generated. In contrast, weak knowledge integration is the process of extracting insights from the mere aggregation of existing knowledge. Summarizing the findings of a set of papers, for example, is weak integration; reinterpreting practitioners’ experiential knowledge within a scientific framework, and vice versa, is strong integration.
Strong knowledge integration is so difficult because the many different representations of knowledge about the world that are relevant to a given problem are often incompatible in terms of epistemology and methodology, as well as in how they are codified and communicated. Integrating them involves an intricate act of reasoning, in which knowledge from one domain is interpreted and expressed in terms of another.
The “act of reasoning” involves several abilities: symbolic, common sense, analogical, and compositional reasoning. The last is particularly important because it involves the ability to recognize and understand novel relationships between known elements. However, evaluations of large language models show that they excel primarily at symbolic reasoning in tasks such as mathematics and coding, while struggling with other forms of reasoning in open-ended domains.
While experimentation with artificial intelligence around knowledge integration is warranted, we believe that the opportunities for using today’s artificial intelligence in transdisciplinary research lie in tasks of weak rather than strong knowledge integration.
Artificial intelligence research and development should learn from transdisciplinary theory and practice
Using artificial intelligence in transdisciplinary research with a critical mindset is an important starting point. However, as transdisciplinary researchers and practitioners, we must not stop here. Given how it is currently produced and deployed, any use of generative artificial intelligence contributes to various forms of severe social and environmental harm. While this realization might lead some to argue that the use of generative artificial intelligence (in transdisciplinary research) is unethical in principle, we believe this conclusion is not inevitable.
On the contrary, we argue that, by productively engaging with artificial intelligence, the transdisciplinary community should help pave the way for its ethical use and development. As transdisciplinary practitioners and institutions, one thing we can do is to use open-source artificial intelligence whenever possible and to advocate for a practice of openness that, crucially, extends to the models’ training data.
Even more importantly, the transdisciplinary community should support initiatives that promote artificial intelligence in the public interest. Specifically, it should map out the argument that artificial intelligence research and development must be transdisciplinary or, otherwise, the benefits and burdens of the technology will continue to be shared unevenly.
What do you think? How have you used artificial intelligence in your work? Are there other advantages and cautions that you would add?
To find out more:
Keil, F. and Stein, M. (2025). Opportunities and Risks of Using AI Technologies in Transdisciplinary Research. ISOE-Materialien Soziale Ökologie (ISOE Materials Social Ecology), 77. Institute for Social-Ecological Research (ISOE): Frankfurt am Main, Germany. (Online, open access) DOI: https://doi.org/10.5281/zenodo.15535709
This report explains why and how generative artificial intelligence can help accomplish transdisciplinary research tasks and provides links to specific tools and resources for further learning. The report focuses on tools that do not require programming skills.
Reference:
Crawford, K. (2021). Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence. Yale University Press: New Haven, USA and London, UK.
Use of Generative Artificial Intelligence (AI) Statement: Generative artificial intelligence was not used in the development of this i2Insights contribution. (For i2Insights policy on generative artificial intelligence please see https://i2insights.org/contributing-to-i2insights/guidelines-for-authors/#artificial-intelligence.)
Biography: Florian Keil PhD is the founder of ai4ki, a start-up that develops custom artificial intelligence solutions for science and science management. He focuses on building human-centered artificial intelligence systems for knowledge integration in transdisciplinary research. He is based in Berlin, Germany.
Biography: Melina Stein MA is a research associate at the Institute for Social-Ecological Research (ISOE) in Frankfurt am Main, Germany. She works on various transdisciplinary projects on the topics of sustainable mobility and biodiversity using expertise in empirical methods of social research.
Biography: Flurina Schneider PhD is scientific director of the Institute for Social-Ecological Research (ISOE) and professor in social ecology and transdisciplinarity at Goethe University, both in Frankfurt, Germany. Her research focuses on sustainability transformations, and the role of knowledge, science policy and transdisciplinary co-production of knowledge.
Thanks for this thought-provoking post! You’ve raised so many important points that I will definitely be reflecting on as part of our ongoing process of learning how to use AI responsibly for TD research. Yes, I agree this really needs to be a collective reflection in the public interest, and I’m also inspired to see your published research in this area, which maps out this argument with evidence, clarifying what can and cannot be done by AI and where human expertise is needed. The part I really wanted to draw attention to was your point that knowledge integration requires an “act of reasoning”, particularly compositional reasoning (the ability to recognize and understand novel relationships between known elements), a form of reasoning that large language models struggle with in open-ended domains. I like how you’ve distinguished between weak and strong integration. I think this is where human expertise in compositional reasoning is essential for strong integration: in my view, no matter how AI evolves, there will always be unique connections that it overlooks and that only experienced human insight can uncover, joining the not-so-obvious dots. And yes, this does have implications for bridging inequities.
Thanks for this most useful exposition. I found that a book describing Nvidia’s development helped me start to understand how artificial intelligence works and it’s also a cracking good yarn. The book is Stephen Witt (2025) The Thinking Machine: Jensen Huang, Nvidia, and the world’s most coveted microchip. Penguin.
Your point about knowing what the specific artificial intelligence being used is trained on is so important. Just imagine if all the world’s research publications were available through an artificial intelligence and how that could transform not only transdisciplinarity but also all research. Do you know if there are any steps towards making this happen?
One could argue that most state-of-the-art large language models already are what you have in mind here, since they’ve probably ‘seen’ most of the scientific literature available on the internet. But if you mean “available” as in “available by scientific standards”, with validated references, exact quotations, and so on, then these models must fall short of your vision, because they’re prone to “hallucinations” and factual errors. Also, since model developers usually keep their training data secret, we have no way of knowing what scientific literature a model has actually digested during training, or which parts of a paper went in (only the abstract, or the full text and even appended data?).
One proven remedy here is to connect a model to a database of scientific literature and have it use relevant context drawn from this database to answer a query. This technique is generally known as Retrieval Augmented Generation, or RAG for short. There are actually a lot of apps out there that can do this, like Elicit or SciSpace (we talk more about these in our publication linked above). But the databases they use tend to focus on STEM subjects, and you might not be able to access the full content of papers. Also, when using these tools—or any RAG system—it’s important to keep in mind that they can reduce but not fully eliminate hallucinations.
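For readers who would like to see the mechanics, below is a minimal sketch of the RAG pattern in Python. The embed() and generate() functions are placeholders of our own invention, standing in for whatever embedding model and language model an application actually calls; the toy keyword-based embedding and the three example “papers” exist only so the snippet runs end to end.

```python
# Minimal sketch of Retrieval Augmented Generation (RAG):
# 1. index documents as vectors, 2. retrieve the most similar ones
# for a query, 3. let a language model answer using that context.
import math

def embed(text: str) -> list[float]:
    # Placeholder: a real system would call an embedding model here.
    # This toy version just counts a few keywords so the example runs.
    keywords = ["climate", "water", "mobility", "biodiversity"]
    return [float(text.lower().count(k)) for k in keywords]

def generate(prompt: str) -> str:
    # Placeholder: a real system would call a large language model here.
    return "[model response grounded in]\n" + prompt

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm if norm else 0.0

# Step 1: embed every document in the literature database once.
documents = [
    "Paper A: water scarcity and climate adaptation in agriculture.",
    "Paper B: sustainable mobility transitions in European cities.",
    "Paper C: biodiversity monitoring with citizen science.",
]
index = [(doc, embed(doc)) for doc in documents]

def answer(query: str, top_k: int = 2) -> str:
    # Step 2: retrieve the documents most similar to the query.
    q = embed(query)
    ranked = sorted(index, key=lambda item: cosine(q, item[1]), reverse=True)
    context = "\n".join(doc for doc, _ in ranked[:top_k])
    # Step 3: ground the model's answer in the retrieved context.
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
    return generate(prompt)

print(answer("How can cities make mobility more sustainable?"))
```

Tools like Elicit or SciSpace wrap this same pattern around large scholarly databases. Because the model answers from retrieved documents rather than from memory alone, the retrieval step reduces hallucinations, but, as noted above, it cannot eliminate them.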
As far as we know, nobody has tried to make your vision a reality on such a large scale. There are many reasons for this, including the desire of some scientific publishers to make money. Moreover, we believe that further scientific breakthroughs in AI are needed to make this huge effort worthwhile. But yes, wouldn’t it be great to have all the world’s research publications available for free and in full, for human researchers and specialized AI applications to make use of?
Many thanks for this detailed and informative reply. It’s an interesting – and maybe also useful – exercise to think through the technical, scientific, cultural, political, practical, commercial, ethical and other considerations relevant to making the academic literature of the world available in such a way. It could include literature published in any language and ideally make the literature available in any language. But given where most of the published literature is produced, would it hamper efforts towards decolonisation? Should superseded papers be excluded? Who decides? How do we get all the publishers to work together? …