By Faye Miller.

How can research teams recognise when their use of artificial intelligence is affecting their ability to integrate different knowledge and perspectives? How can they navigate the impact of artificial intelligence on their collaborative processes?
When research teams use artificial intelligence in collaborative work, new complexities emerge, especially subtle shifts in communication patterns that can fundamentally alter how teams integrate different perspectives and knowledge forms. Consider an environmental team relying on artificial intelligence summaries to integrate work across hydrology, ecology, and policy: the team might miss crucial disciplinary nuances, or follow the tool’s “evidence-based” recommendations even when they clash with community priorities. Such changes in communication rhythms can compromise decision-making quality and the integration of different viewpoints.
This i2Insights contribution presents a five-stage framework for identifying when the use of artificial intelligence disrupts a team’s communication patterns. While developed primarily for generative artificial intelligence tools (such as large language models), the principles apply broadly to artificial intelligence systems that synthesise or aggregate knowledge.
The framework builds on the integration capacity discussed in my previous i2Insights contribution Five capacities for human–artificial intelligence collaboration in transdisciplinary research.
The framework takes an adaptive approach that encourages ongoing reflection throughout the research process: teams adjust their observation practices and their adoption of artificial intelligence based on what they notice, and respond to different kinds of communication challenges as they arise, as indicated in the stages described below.
A five-stage framework for identifying when the use of artificial intelligence disrupts a team’s communication patterns
Stage 1: Map communication patterns before using artificial intelligence
This stage involves documenting your team’s communication patterns before introducing artificial intelligence, noticing how members express uncertainty, identifying who translates between disciplines, and mapping where information comes from, how it moves, and where interpretation happens.
Key questions to ask include:
- Whose knowledge or expertise might become less visible?
- What knowledge forms will artificial intelligence favour?
- Are there power imbalances that artificial intelligence might amplify?
- What can’t we predict?
These areas are important to document, as they become testable hypotheses for later stages.
Stage 2: Observe how artificial intelligence affects collaboration through early testing
This involves limited use of artificial intelligence while watching for emerging patterns. Assigning rotating ‘communication observers’ from different disciplines can help track how artificial intelligence changes team dynamics.
Hold brief structured discussions of 15-20 minutes to check whether collaborative rhythms are still working. By asking the following questions, these conversations may help teams notice when artificial intelligence converts dynamic knowledge into static, filtered outputs:
- Is artificial intelligence helping or limiting understanding?
- Whose voices are amplified or diminished?
- Are community partners being included?
- Can inclusion be improved?
For example, a public health team may discover that community advisors feel less included because their knowledge does not enter an artificial intelligence system trained on academic papers alone. To resolve this issue, parallel documentation processes can be created where oral knowledge is recorded and weighted equally.
A key decision point here: if artificial intelligence limits stakeholder participation (for example, through declining participation or systematic filtering), the team should pause its use of artificial intelligence; if issues are manageable, continue with the proposed changes.
Stage 3: Watch for information flow changes during full use of artificial intelligence
This stage involves watching for patterns where filtered information affects the synthesis of different perspectives. One way to do this is by maintaining a reflective journal, noting when:
- artificial intelligence produces surprising results,
- human knowledge differs from artificial intelligence outputs,
- participation changes,
- there are shifts in what counts as credible.
In addition, hold monthly reflection sessions, asking:
- Could we function without artificial intelligence?
- Are newer researchers relying on it heavily?
- Has shared vocabulary limited perspectives?
- Are we still synthesising across disciplines?
Stage 4: Analyse the results, especially by checking for missing knowledge
In this stage, a team might compare artificial intelligence-influenced conclusions with what each member would conclude independently. Such analysis can reveal what artificial intelligence might give less (or more) weight to. Questions to ask include:
- What insights would we miss with only artificial intelligence?
- What insights would we miss with only human analysis?
- Are we setting aside unexpected artificial intelligence results?
- Has artificial intelligence’s framing shaped the range of possible solutions?
- Would impacted communities recognise themselves in findings?
- What trade-offs exist between artificial intelligence and human analysis?
Regular validation with those affected by research decisions should become standard practice, not occasional checking. Their recognition (or non-recognition) of themselves in findings reveals which communication channels worked and which may have been bypassed. For example, an agricultural team might find that artificial intelligence consistently recommends capital-intensive solutions because training data come from industrial operations. Validation criteria requiring input from farmers with relevant experience can strengthen recommendations where artificial intelligence and farmer knowledge align, or prompt further investigation where they diverge, to avoid policy bias.
Stage 5: Document and share knowledge creation
This involves documenting and explaining what artificial intelligence did and how it influenced the research process: recording which questions artificial intelligence generated or left unaddressed, who was consulted, how artificial intelligence shaped the communication of uncertainty, and what criteria guided decisions.
Useful questions to consider include:
- Can we explain human–artificial intelligence collaboration meaningfully to affected communities?
- Are certainties being overstated?
- Can others easily trace how findings developed?
- Is there a more transparent, open-source alternative?
Towards building an adaptive approach
The methodological practices outlined here serve as both quality assurance and an ethical safeguard: they help teams fulfil their obligation to prevent bias in whose knowledge gets valued and whose voices shape research outcomes.
This framework assumes regular communication, resources for reflection, and a supportive culture. If these don’t exist, address these underlying issues first.
Some practical steps towards building the adaptive approach include:
- Link these five reflection stages to regular meetings. Add 15-minute communication check-ins to existing agendas.
- Make communication reflection as normal as discussing methods. Accept that you can’t predict everything while committing to noticing and responding. Pay attention to how people experience information flow, especially those whose knowledge forms are the least compatible with artificial intelligence.
- Build a team culture where mitigating communication changes is routine, dealing with unknowns is central, and integration across perspectives remains foundational. Share what you learn with other teams to build collective knowledge about the impact of artificial intelligence on collaborative research.
I’m still learning what works. Every team teaches me something new. What have been your experiences? Which meetings could include these reflections? Who notices when artificial intelligence gives less emphasis to certain knowledge? Where might changes be hardest to spot?
Use of Generative Artificial Intelligence (AI) Statement: Generative AI (Claude Sonnet 4.5) was used in this contribution to conduct initial research scoping and exploration of the literature. The output of the AI was reviewed for accuracy, bias, and appropriateness in relation to all sources reviewed. The AI was not used to develop the framework or arguments. (For i2Insights policy on generative artificial intelligence please see https://i2insights.org/contributing-to-i2insights/guidelines-for-authors/#artificial-intelligence.)
Biography: Faye Miller PhD is a research director, knowledge broker, and career educator. As Founder of Human Constellation Consulting, she collaborates globally with technology companies, universities, and research organisations on social and ethical aspects of science and technology, digital and information literacy, transdisciplinary knowledge integration and shared understanding, all of which are research areas in which she has published. She is currently based in Canberra, Australia.
Thanks Faye, an excellent resource and great posts – I appreciate you sharing this. This afternoon I have been completing a first cut response to the questions posed in Stage 1. I think a co-benchmarking activity could be useful with my team. What risks do you think that might introduce? (what could go wrong or right)
Many thanks for your comment, Stuart. A great generative question, and I appreciate that you’re starting to use Stage 1 in practice! Interesting times ahead.
I think co-benchmarking your team’s communication patterns can be very useful. A few things might go right: the process might encourage the very reflective discussions the framework aims to facilitate. More people’s perspectives on the patterns might be useful, as they might notice things a solo observer wouldn’t.
I also think there are a few things to be mindful of. There could be a social desirability effect, that is, people might report how they would like things to be rather than how they are; this might be a particular issue when reporting power balances or whose knowledge is being excluded. The process itself might change how people report things, especially if there are existing tensions or power imbalances within the team. There’s also a risk of the team being unable to deal constructively with the results of the benchmarking, for example, if it reveals existing tensions, such as some voices being excluded even without AI being involved.
One way to mitigate this might be to have people report privately first and then hold a facilitated discussion. That way you might reduce the influence of group dynamics on what people report.
I’m interested to hear how you get on with this. This framework is still very much a work in progress. Every context is different, and it’s always useful to see how and where patterns show up.