Integration and Implementation Insights

A framework for navigating the impact of using artificial intelligence on collaborative research communication

By Faye Miller.


How can research teams recognise when their use of artificial intelligence is affecting their ability to integrate different knowledge and perspectives? How can they navigate the impact of artificial intelligence on their collaborative processes?

When research teams use artificial intelligence in collaborative work, new complexities emerge, especially subtle shifts in communication patterns that can fundamentally alter how teams integrate different perspectives and knowledge forms. Consider an environmental team relying on artificial intelligence summaries across hydrology, ecology, and policy. The team might miss crucial disciplinary nuances, or follow “evidence-based” recommendations that clash with community priorities. Such changes in communication rhythms can compromise decision-making quality and the integration of different viewpoints.

This i2Insights contribution presents a five-stage framework for identifying when the use of artificial intelligence disrupts a team’s communication patterns. While developed primarily for generative artificial intelligence tools (such as large language models), the principles apply broadly to artificial intelligence systems that synthesise or aggregate knowledge.

The framework builds on the integration capacity discussed in my previous i2Insights contribution Five capacities for human–artificial intelligence collaboration in transdisciplinary research.

The framework takes an adaptive approach to encourage ongoing reflection in the research process, adjusting observation practices and adoption of artificial intelligence based on what teams notice, and responding to different kinds of communication challenges as they arise, as indicated in the stages described below.

A five-stage framework for identifying when the use of artificial intelligence disrupts a team’s communication patterns

Stage 1: Map communication patterns before using artificial intelligence

This stage involves documenting your team’s communication patterns before introducing artificial intelligence, noticing how members express uncertainty, identifying who translates between disciplines, and mapping where information comes from, how it moves, and where interpretation happens.

Key questions to ask include:

These areas are important to document, as they become testable hypotheses for later stages.

Stage 2: Observe how artificial intelligence affects collaboration through early testing

This involves limited use of artificial intelligence while watching for emerging patterns. Assigning rotating ‘communication observers’ from different disciplines can help track how artificial intelligence changes team dynamics.

Hold brief structured discussions of 15-20 minutes to check whether collaborative rhythms are still working. These conversations may help teams notice when artificial intelligence converts dynamic knowledge into static, filtered outputs by asking:

For example, a public health team may discover that community advisors feel less included because their knowledge does not enter an artificial intelligence system trained on academic papers alone. To resolve this issue, parallel documentation processes can be created where oral knowledge is recorded and weighted equally.

A key decision point here is: if artificial intelligence limits stakeholder participation (for example, through declining engagement or systematic filtering of contributions), the team should pause its use of artificial intelligence; if the issues are manageable, the team can continue with the proposed changes.

Stage 3: Watch for information flow changes during full use of artificial intelligence

This stage involves watching for patterns where filtered information affects the synthesis of different perspectives. One way to do this is by maintaining a reflective journal, noting when:

In addition, hold monthly reflection sessions, asking:

Stage 4: Analyse the results, especially by checking for missing knowledge

In this stage, a team might compare artificial intelligence-influenced conclusions with what each member would conclude independently. Such analysis can reveal which kinds of knowledge artificial intelligence may have given less (or more) weight to, asking:

Regular validation with those affected by research decisions should become standard practice, not occasional checking. Their recognition (or non-recognition) of themselves in findings reveals which communication channels worked and which may have been bypassed. For example, an agricultural team might find that artificial intelligence consistently recommends capital-intensive solutions because training data come from industrial operations. Validation criteria requiring input from farmers with relevant experience can strengthen recommendations where artificial intelligence and farmer knowledge align, or prompt further investigation where they diverge, to avoid policy bias.

Stage 5: Document and share knowledge creation

This involves documenting and explaining what artificial intelligence did and how it influenced the research process, and recording which questions artificial intelligence generated or did not address, who was consulted, how artificial intelligence shaped uncertainty communication, and what criteria guided decisions.

Useful questions to consider include:

Towards building an adaptive approach

The methodological practices outlined here serve an ethical purpose as well as a quality assurance one, helping teams fulfil their obligation to prevent bias in whose knowledge gets valued and whose voices shape research outcomes.

This framework assumes regular communication, resources for reflection, and a supportive culture. Where these are lacking, address those underlying issues first.

Some practical steps towards building the adaptive approach include:

I’m still learning what works. Every team teaches me something new. What have been your experiences? Which meetings could include these reflections? Who notices when artificial intelligence gives less emphasis to certain knowledge? Where might changes be hardest to spot?

Use of Generative Artificial Intelligence (AI) Statement: Generative AI (Claude Sonnet 4.5) was used in this contribution to conduct initial research scoping and exploration of the literature. The output of the AI was reviewed for accuracy, bias, and appropriateness in relation to all sources reviewed. The AI was not used to develop the framework or arguments. (For i2Insights policy on generative artificial intelligence please see https://i2insights.org/contributing-to-i2insights/guidelines-for-authors/#artificial-intelligence.)

Biography: Faye Miller PhD is a research director, knowledge broker, and career educator. As Founder of Human Constellation Consulting, she collaborates globally with technology companies, universities, and research organisations on social and ethical aspects of science and technology, digital and information literacy, transdisciplinary knowledge integration and shared understanding, all of which are research areas in which she has published. She is currently based in Canberra, Australia.
