By Lachlan S. McGill.

Why does better evidence sometimes fail to improve decision making? How can we tell whether this is caused by feedback loops becoming resistant to external evidence?
Understanding how structural patterns become problematic
In most organisations, decisions are embedded in feedback loops that connect indicators, incentives, and authority structures. These loops determine what counts as success, which signals influence decisions, and how performance is evaluated over time.
When feedback loops are well aligned with system goals, they support learning. However, feedback loops can also evolve in ways that reinforce a narrow definition of success. This drift generally begins when a system relies on a small number of indicators to guide decisions. Common examples include financial return on investment, productivity or output measures, growth targets, publication counts or grant income, and compliance indicators.
Problems arise when success is defined in the language of the metric, proposals must translate their value into that language, and review processes privilege arguments that reinforce the metric. The metric effectively acts as a rule-set that determines what counts as credible evidence or a legitimate proposal. It does not merely measure performance; it defines the conditions under which performance can be recognised. When metrics stop measuring performance and start defining what performance is, the system becomes self-justifying.
Once a metric becomes central to evaluation and incentives, a reinforcing cycle can emerge. Actors who perform well on the metric gain authority, resources, and influence within the system. Those resources allow them to shape evaluation processes and institutional priorities. Over time, behaviour adapts to maximise the indicator, decision processes increasingly privilege it, and alternative forms of value become harder to recognise. This reinforcing pattern can be thought of as an advantage loop.
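The advantage loop described above can be illustrated with a toy simulation. This is a hypothetical sketch, not part of the framework itself: the update rule, feedback strength, and starting values are all illustrative assumptions, chosen only to show how a small initial edge on the metric compounds into a growing share of influence.

```python
# Hypothetical sketch of an "advantage loop": actors who score well on a
# metric gain influence, and influence feeds back into future scores.
# The update rule and all numbers are illustrative assumptions.

def advantage_loop(scores, feedback=0.1, rounds=10):
    """Repeatedly allocate influence in proportion to metric scores,
    then let influence boost each actor's next score."""
    scores = list(scores)
    for _ in range(rounds):
        total = sum(scores)
        influence = [s / total for s in scores]   # share of authority
        scores = [s * (1 + feedback * i)          # influence boosts scores
                  for s, i in zip(scores, influence)]
    total = sum(scores)
    return [s / total for s in scores]            # final influence shares

# Two actors start nearly equal; the small initial edge compounds.
shares = advantage_loop([1.05, 1.00])
print(shares)  # the leader's influence share grows beyond its initial edge
```

The point of the sketch is structural, not quantitative: because influence is allocated by the metric and also improves future metric performance, the gap widens without anyone needing to change the rules explicitly.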
Importantly, the system does not necessarily ignore other forms of value. Instead, those forms must be translated into the dominant metric to be taken seriously. Evidence that cannot make that translation tends to lose traction — not because it is wrong, but because it is inadmissible.
When a metric becomes the standard by which evidence must justify itself, the problem is no longer informational. It is structural. Producing more evidence, improving synthesis, or refining communication will not change outcomes if the system cannot admit what the evidence reveals. But how do we know when a system has entered this state? And how do we distinguish a system that is genuinely learning from one where the feedback loops have become resistant to external evidence?
Diagnosis
The Recursive Disclosure Index, which I developed, is a practical diagnostic tool for making that distinction. It assesses four aspects of how a system’s feedback loops are functioning. Each is evaluated through observation and questioning rather than formal measurement.
Recursion: does success reshape the rules?
This dimension examines whether those who perform well under the dominant metric gain disproportionate influence over how success is defined.
The key question is: does success in this system increase the ability to define what counts as success?
When recursion is high, evidence that challenges dominant metrics is unlikely to gain traction regardless of its quality.
Disclosure: what can be questioned?
This dimension concerns whether the assumptions underlying the system’s evaluative standards are visible and open to challenge.
The key question is: are the assumptions that structure decision making open to challenge, or effectively closed?
Low disclosure means efforts to respond to external evidence will struggle even when evidence is strong, because the frame through which evidence is interpreted cannot itself be questioned.
Braking: what slows runaway signals?
This dimension assesses whether effective mechanisms exist to counterbalance dominant feedback signals.
The key question is: what, if anything, can slow or counteract the dominant feedback signals in this system?
Weak braking allows self-reinforcing dynamics to accelerate unchecked.
Buffering: can the system tolerate correction?
This dimension considers whether the system can absorb challenge without becoming destabilised.
The key question is: can this system tolerate meaningful correction, or does challenge itself create instability?
Low buffering often explains why systems resist change even when problems are widely acknowledged.
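For readers who find a concrete artefact helpful, the four dimensions above can be recorded as a simple structured assessment. This is a hypothetical sketch only: the 0–3 scale, the `RDIAssessment` name, and the "closed loop" rule of thumb are my illustrative assumptions, not part of the published framework, which relies on observation and questioning rather than formal measurement.

```python
# Hypothetical sketch of recording a Recursive Disclosure Index assessment.
# The four dimensions come from the article; the 0-3 scale and the
# "closed loop" rule of thumb are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class RDIAssessment:
    recursion: int   # 0 (low) to 3 (high): does success reshape the rules?
    disclosure: int  # 0 (closed) to 3 (open): what can be questioned?
    braking: int     # 0 (weak) to 3 (strong): what slows runaway signals?
    buffering: int   # 0 (fragile) to 3 (robust): can correction be tolerated?

    def looks_closed(self) -> bool:
        """Flag the configuration the article warns about: high recursion
        combined with low disclosure, weak braking, and low buffering."""
        return (self.recursion >= 2 and self.disclosure <= 1
                and self.braking <= 1 and self.buffering <= 1)

# Example: a system where metric success confers rule-setting power and
# underlying assumptions cannot be challenged.
assessment = RDIAssessment(recursion=3, disclosure=0, braking=1, buffering=1)
print(assessment.looks_closed())  # True
```

Encoding the judgement this way does not replace the diagnostic questions; it simply makes explicit which combination of answers signals that the loop may have closed.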
Interpreting the diagnosis
The four dimensions interact. When recursion is high and disclosure is low, systems become increasingly resistant to external evidence. Weak braking allows dominant signals to accelerate. Low buffering makes the system fragile to challenge.
In this configuration, the Recursive Disclosure Index helps make the structural problem explicit. It supports researchers in recognising when the failure to respond to external evidence is structural, and when the appropriate response is not more evidence but a different kind of engagement with the system itself.
For researchers working on complex societal problems, this distinction is consequential.
When feedback loops are healthy, strategies focused on evidence synthesis, knowledge translation, and improved communication are well suited to the task.
When feedback loops have become self-reinforcing, those same strategies are unlikely to succeed. The system is not failing to understand the evidence. It is structured in a way that prevents the evidence from altering decisions.
Conclusion
Diagnosis is a first step. A companion i2Insights contribution introduces five structural ways to intervene in a closed feedback loop, each targeting a different aspect of how signals flow, how authority is allocated, and how evaluative standards are defined.
Do these ideas resonate with you? Have you seen feedback loops that are resistant to external evidence? Do the diagnostic criteria look helpful? Do you have experience using any of them?
Use of Artificial Intelligence (AI) Statement: Generative AI (Anthropic’s Claude Sonnet 4.6) was used in the drafting and editing of this contribution. All frameworks, arguments, and ideas are the author’s own, developed independently of AI assistance. AI-generated text was reviewed, revised, and approved by the author prior to submission. (For i2Insights policy on artificial intelligence please see https://i2insights.org/contributing-to-i2insights/guidelines-for-authors/#artificial-intelligence.)
Biography: Lachlan S. McGill BA/LLB works as a business analyst and architect in information technology (IT), and is an independent researcher working across systems theory, organisational dynamics, and integration science. His current research applies recursive dynamics, a cross-domain framework for analysing feedback, persistence, and structural change, to problems of institutional lock-in, knowledge integration, and evidence uptake. He is based in Canberra, Australia.