Five structural levers to reopen feedback loops that are resistant to external evidence

By Lachlan S. McGill.

When feedback loops have become resistant to external evidence, what are some potential ways of intervening to reopen them?

This i2Insights contribution builds on my previous post, which explains why feedback loops can become resistant to external evidence and how to diagnose such a structural problem.

Here I introduce five ways to intervene in such a closed feedback loop. Each is a structural lever targeting a different aspect of how signals flow, how authority is allocated, and how evaluative standards are defined.

One practical note before beginning. Applying the interventions below often requires institutional authority, coalition building, or regulatory support; isolated actors may not be able to deploy them fully, leaving the problematic dominant structure intact. The five levers describe what structural intervention looks like, but they do not guarantee that it will succeed.

Lever #1. Retag: change what the system is required to recognise

Retagging works when it changes what the system is required to recognise. Examples include treating environmental degradation as a liability rather than an externality, reclassifying community impacts as performance outcomes rather than consultation feedback, and recognising care as a value-producing activity.

Retagging changes what the system must consider within decision processes, not just what it is invited to consider. Simply adding new indicators will not achieve this if the underlying categories remain intact.

Lever #2. Reweight: change who gains or loses institutional influence

Where retagging changes what is counted, reweighting changes who has standing to interpret evidence, define trade-offs and influence decisions. It works when people and perspectives that were previously marginal to evaluation are given real weight in decision making.

Examples include:

  • making funding contingent on multiple forms of value rather than throughput alone, so that researchers working on complex or unconventional problems are not systematically disadvantaged;
  • requiring decision makers to justify trade-offs explicitly across several indicators, which forces those with authority to acknowledge what the dominant metric excludes;
  • introducing formal roles for people whose expertise or experience falls outside the dominant evaluative frame, giving them standing in review processes rather than merely inviting their input.

A common failure is to add new indicators without changing who controls their interpretation. Adjusting metrics will not succeed while decisions still hinge on a single indicator, which happens when new indicators must ultimately be expressed in the system’s primary metric. When social, environmental, or long-term impacts must be translated into financial returns to be admissible, they are structurally discounted by that translation. Under these conditions the feedback loop remains intact, no matter how many new indicators are added.

Genuine reweighting reduces the dominance of any single signal and prevents it from controlling the evaluative frame.

Lever #3. Rewire: change the pathways through which signals travel

Rewiring is about changing the architecture of information flow so that evidence can reach decision forums through pathways the dominant metric does not control.

Examples include independent evidence pathways that feed directly into decision processes, cross-boundary review processes that cannot be overridden by internal gatekeepers, and parallel advisory structures with formal standing. For example, inviting non-academic community members to serve on university ethics committees brings to light risks and assumptions that would otherwise go unexamined. Their presence rewires the decision pathway by inserting external perspectives that the loop cannot absorb and neutralise.

Rewiring increases the system’s capacity to counterbalance dominant signals by introducing evaluative frames from outside the loop.

Lever #4. Reveal: make evaluation criteria explicit

Revealing works when it makes the rules by which data are interpreted visible and open to challenge.

Examples include making evaluation criteria explicit rather than treating them as self-evident, documenting how indicators were chosen and what they exclude, and requiring decision rationales to state which assumptions were applied.

Revealing opens the interpretive frame and makes the system’s logic available for challenge rather than treating it as background.

Lever #5. Regulate: introduce constraints that prevent any single metric from dominating

Regulating works when it introduces structural constraints that prevent any single metric from dominating the evaluative frame.

Examples include caps on metric-linked incentives, mandatory multi-criteria evaluation requirements, and external oversight bodies with binding rather than advisory powers.

Regulating prevents the self-reinforcing loop from accelerating unchecked and creates space for correction to occur.

Using the levers together

The interventions are most powerful in combination. Retagging and revealing together change what the system recognises as value and make that recognition contestable. Reweighting and regulating together change how authority is distributed and constrain runaway signals. Rewiring and revealing together change how evidence enters the evaluative frame and who controls that entry.

Conclusion

The goal is not to eliminate metrics. Metrics are essential for coordination and accountability. The goal is to prevent any single indicator from becoming the sole standard by which alternatives must justify themselves, and to restore feedback sensitivity so that systems can adjust in response to evidence about their own performance and consequences.

What do you think? Do you have examples to share of the successful use of these levers? Are there other levers or considerations that you think would be helpful?

Use of Artificial Intelligence (AI) Statement: Generative AI (Anthropic’s Claude Sonnet 4.6) was used in the drafting and editing of this contribution. All frameworks, arguments, and ideas are the author’s own, developed independently of AI assistance. AI-generated text was reviewed, revised, and approved by the author prior to submission. (For i2Insights policy on artificial intelligence please see https://i2insights.org/contributing-to-i2insights/guidelines-for-authors/#artificial-intelligence.)

Biography: Lachlan S. McGill BA/LLB works as a business analyst and architect in information technology (IT), and is an independent researcher working across systems theory, organisational dynamics, and integration science. His current research applies recursive dynamics, a cross-domain framework for analysing feedback, persistence, and structural change, to problems of institutional lock-in, knowledge integration, and evidence uptake. He is based in Canberra, Australia.
