Diosh Lequiron · Systems Thinking · 13 min read

Feedback Loop Design: Why Most Organizations Cannot See Their Own Failures

Organizations rarely fail because they do not know — they fail because their information architecture decays. Evidence from PMO turnaround work and Australian agency recovery.

The organizations I have been asked to turn around have almost always known something was wrong. They could feel it — missed dates, shrinking margins, customers going quiet, senior people leaving. What they could not do was locate it. The sensation of failure was real, but the signal was absent. The information architecture that was supposed to tell them what was happening had decayed, or had never existed, or had been built for a different organization and never updated when this one changed.

Across 19 years of program delivery in more than ten countries — founding PMOs at OpenText and Full Potential Solutions, directing multi-million-dollar programs at HPE, and recovering an Australian agency network that had been losing between twenty and sixty percent across multiple offices — the pattern has been consistent. Failure is rarely hidden. It is visible to somebody, somewhere in the system, almost immediately. The question is whether the information gets from the place it is visible to the place a decision can be made about it, on a timeline that still allows for correction.

The information architecture that moves that signal is what I mean by feedback loop design. It is not reporting. It is not dashboards. It is the structural property of an organization that determines what it can see about itself.

This article explains why most feedback loops fail, what structural design actually works, and where the discipline does not apply.


Why Conventional Reporting Fails

Three structural patterns account for most of the organizational blindness I have diagnosed. They appear in enterprise programs, agency operations, and venture portfolios with very similar mechanics.

The Latency Gap. By the time a signal reaches the people who can act on it, the window for low-cost correction has closed. A monthly report surfaces a trend that was visible in week one of the month and was acute by week three. The decision maker sees it in week five. The response, if there is one, lands in week six. The cost of correction has grown by a factor of ten or more over the course of five weeks, not because the underlying problem got that much worse, but because the supporting relationships, commitments, and downstream dependencies all hardened around the failing state while it was invisible.

At one of the multi-million-dollar programs I directed, the standard portfolio reporting cadence was monthly. The program had nine workstreams, each with its own weekly standup that produced a status color. The aggregation pipeline rolled those weekly statuses up through three management layers before landing in the portfolio dashboard. By the time a red status reached the portfolio view, it had been red at the workstream level for between two and four weeks. The reporting architecture was not broken — it was functioning exactly as designed. The design was the problem. The latency between sensing and decision was longer than the cost curve of the failures the architecture was supposed to surface.
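
To make the arithmetic concrete, here is a minimal latency model in Python. The cadences are illustrative, chosen to match the shape of the program described above rather than its actual figures, and the assumption that each management layer rolls up weekly is mine.

```python
# Worst-case delay a signal accumulates moving through stacked reporting
# cadences. A signal born just after a cycle closes waits a full cycle
# at each layer before it can move up. Numbers are illustrative.

CADENCE_DAYS = {
    "workstream standup":  7,   # weekly status color
    "management rollup 1": 7,   # three management layers, each assumed
    "management rollup 2": 7,   # to run on its own weekly cycle
    "management rollup 3": 7,
    "portfolio dashboard": 30,  # monthly portfolio view
}

def worst_case_latency(cadences: dict[str, int]) -> int:
    """Sum of full cycles: the longest a fresh signal can wait in the pipeline."""
    return sum(cadences.values())

print(worst_case_latency(CADENCE_DAYS))  # 58 days from observation to decision view
```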

The Abstraction Loss. Signals travel better when they are concrete. Dashboards travel better when they are abstract. Organizational reporting almost always trades the first for the second — it aggregates specific observations into summary statistics, and the specific detail that made the observation actionable is stripped out in the process. A project manager who writes "integration risk emerging on the payments stream, vendor change in week of sprint nine, no mitigation identified" produces a signal that is operationally useful. By the time that signal becomes "one amber risk" in a portfolio status, the information content is nearly zero. The abstraction preserves the shape of the problem and loses the substance.

I have seen this pattern in every large organization I have worked in. The reporting system is technically accurate at every layer. The accuracy of the aggregate is preserved. The actionability of the detail is destroyed. Decision makers end up with a summary that is internally consistent and externally useless — a color code that cannot tell them what to do next.
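
The loss is easiest to see written down as data. A hypothetical sketch, with field names of my own invention rather than any real system's schema:

```python
from dataclasses import dataclass

@dataclass
class SourceObservation:
    """What the project manager actually knows. Actionable."""
    stream: str
    description: str
    mitigation: str | None  # None means no mitigation identified

@dataclass
class PortfolioStatus:
    """What survives the aggregation layers. Accurate, and useless."""
    amber_risks: int

obs = SourceObservation(
    stream="payments",
    description="integration risk emerging, vendor change in week of sprint nine",
    mitigation=None,
)
rollup = PortfolioStatus(amber_risks=1)  # every field that made obs actionable is gone
```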

The Incentive-Driven Distortion. The most corrosive failure mode is not latency or abstraction. It is that reporting systems, once they become tied to evaluation, stop transmitting signal and start transmitting self-protection. A status color that triggers executive review will be negotiated downward before it is submitted. A project that is late will be re-baselined so that it appears on-track against the new baseline. A capacity utilization number that is below target will be padded with work that does not require the hours it is charged against.

The distortion is usually not malicious. People are responding rationally to the structure they are operating in. If red statuses produce personal consequences and amber statuses do not, statuses that should be red will be amber. If a re-baseline is available as an option, projects that should be flagged will be re-baselined instead. By the time a senior leader is looking at the portfolio, the portfolio has been filtered through multiple layers of people whose personal interests were served by making the numbers look better than they were. The dashboard is technically correct. It is also, in aggregate, a lie.

These three patterns do not respond to more reporting. More frequent status submissions do not close the latency gap, because each submission still travels the same aggregation path; they only spread attention thinner. More granularity worsens the abstraction loss by creating more aggregation layers. More oversight amplifies the incentive-driven distortion by raising the stakes on every submission.

The problem is architectural, not procedural.


The Structural Design

The alternative is not better reporting. It is a different information architecture — one that moves specific signal at operational speed, that couples sensing to action, and that is protected from the incentive distortions that break conventional reporting.

Short Loops Over Clean Dashboards

The first design principle is that short loops beat long ones. A feedback loop that closes in days is structurally different from one that closes in weeks — not because it produces faster reports, but because it changes what it is possible to see.

Short loops catch signal while it is still specific. A team lead who raises a concern in a Monday standup and gets a response in the same week is working with the original context — the particular vendor, the particular estimate, the particular dependency. The same concern raised in a monthly report arrives stripped of that context, and the response, when it comes, is abstract because the context has been lost. Latency does not just slow down response. It degrades the quality of the signal that arrives.

At Full Potential Solutions, when I founded the PMO, the first structural change was not a new reporting template. It was a new cadence — daily operational touchpoints at the workstream level, weekly rollups at the program level, monthly only at the portfolio level. The information still aggregated. It just did so without accumulating a month of latency before the first decision-capable human saw it.
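
Written down as configuration, the tiered cadence looks like this. The three tiers come from the change described above; the representation and the audience labels are a sketch, not FPS tooling:

```python
# Tiered cadence: information still aggregates upward, but the first
# decision-capable review of a signal happens at operational speed.
CADENCE = {
    "workstream": {"review": "daily",   "audience": "team lead"},
    "program":    {"review": "weekly",  "audience": "program director"},
    "portfolio":  {"review": "monthly", "audience": "executive sponsor"},
}

FIRST_LOOK_DAYS = {"daily": 1, "weekly": 7, "monthly": 30}

# The design target is the shortest loop, not the top of the hierarchy:
# the first decision-capable human sees the signal within a day.
first_look = min(FIRST_LOOK_DAYS[tier["review"]] for tier in CADENCE.values())
print(first_look)  # 1 day, versus 58 in the single-cadence model earlier
```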

Signal at Source, Not Summary

The second design principle is that the operationally useful signal lives at the source, not in the summary. Architectures that preserve access to source-level detail — not just the aggregate — outperform architectures that collapse everything into summary statistics at every layer.

This does not mean decision makers should be reading every standup note. It means the aggregation pipeline should be inspectable. When a portfolio status is amber, the decision maker should be able to trace, in a small number of steps, to the specific observation that produced the amber. If the trace is not possible — if the amber is just a number with no path back to a concrete observation — the signal is structurally degraded and the decision made on top of it will be correspondingly weak.

I have applied this discipline across enterprise programs and across the ventures I operate. The structural question is always the same: can I trace any summary metric on any dashboard back to the source observation in fewer than three steps, in fewer than three minutes? If not, the architecture is degrading signal faster than it is producing value.
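
What an inspectable aggregation pipeline means structurally can be sketched in a few lines. The types and names here are hypothetical; any real implementation would differ, but the test is the same: every summary node carries references to its sources, so the walk back to a concrete observation is mechanical.

```python
from dataclasses import dataclass, field

@dataclass
class Observation:
    author: str
    text: str  # the concrete, source-level note

@dataclass
class Metric:
    name: str
    value: str  # e.g. "amber"
    sources: list["Metric | Observation"] = field(default_factory=list)

def trace(metric: Metric, max_steps: int = 3) -> list[Observation]:
    """Walk a summary metric back to its source observations.
    Needing more than max_steps hops means the signal is structurally degraded."""
    found: list[Observation] = []
    frontier: list[Metric] = [metric]
    for _ in range(max_steps):
        next_frontier: list[Metric] = []
        for node in frontier:
            for src in node.sources:
                if isinstance(src, Observation):
                    found.append(src)
                else:
                    next_frontier.append(src)
        frontier = next_frontier
        if not frontier:
            return found
    raise ValueError("trace exceeded max_steps: the aggregation is too deep to inspect")

obs = Observation("PM, payments stream", "vendor change in sprint nine, no mitigation")
amber = Metric("payments risk", "amber", sources=[obs])
portfolio = Metric("portfolio status", "amber", sources=[amber])
print(trace(portfolio))  # reaches the concrete note in two hops
```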

Separate Sensing From Evaluation

The third design principle is the most important and the most often violated. The people who report signal must not be the people whose performance is being evaluated by the signal they report. If the reporter and the reported-on are the same person, the incentive-driven distortion is structural and cannot be trained out.

The fix is architectural. In the Australian agency network, part of the delivery governance framework I put in place was that certain operational signals — utilization, realization, quality defect rates — were sensed and reported by a governance function that did not have the offices in its reporting line. The office managers still saw the numbers. They still had to respond to them. But they were not the source of the numbers, which meant the numbers were not negotiated downward before they were submitted. Once the sensing was decoupled from the evaluation, the portfolio-level picture changed almost immediately — not because the underlying performance changed, but because the information architecture stopped filtering out the bad news before it reached the people who could act on it.

The principle generalizes. In any organization, the sensing function and the evaluated function should be architecturally separate. If they are the same, the reporting system will tell you what it is safe to say, not what is true.

Couple Sensing to Structural Action

The fourth design principle is that sensing without a structural action path degrades into noise. If a signal is produced but nothing is obligated to happen when it arrives, people eventually stop producing it. The information architecture must couple specific signals to specific actions — not to optional reviews that may or may not occur, but to structural responses that are enforced by the system rather than chosen by a human under pressure.

In the governance frameworks I build for the ventures, this takes the form of phase-gate enforcement. A specific signal — a failing test, an empty evidence block, a regression in a core metric — is coupled to a specific structural response — a blocked gate, a required remediation, a mandatory review. The coupling is automatic. The human does not have to decide whether to act on the signal. The architecture decides, and the human decides what to do once the action has already been triggered.
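
A minimal sketch of that coupling, with the signal and response names taken from the paragraph above and everything else assumed. The point of the structure is that the binding happens at design time, not at decision time:

```python
# Each signal type is bound to a structural response when the gate is
# designed. The human never decides whether the response fires; only
# what to do once it has. Names are illustrative, not a real framework.
GATE_RULES = {
    "failing_test":           "block_gate",
    "empty_evidence_block":   "require_remediation",
    "core_metric_regression": "mandatory_review",
}

def on_signal(signal_type: str, gate_state: dict) -> dict:
    """Apply the structural response automatically; no discretionary step."""
    action = GATE_RULES.get(signal_type)
    if action == "block_gate":
        gate_state["open"] = False
    elif action == "require_remediation":
        gate_state["remediation_required"] = True
    elif action == "mandatory_review":
        gate_state["review_scheduled"] = True
    return gate_state

state = on_signal("failing_test", {"open": True})
assert state["open"] is False  # the gate closed; a human now decides the fix, not the block
```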

This is different from workflow automation. It is about ensuring that the sensing layer does not sit disconnected from the response layer. When sensing produces no structural consequence, the sensing atrophies. When sensing reliably produces structural consequence, the sensing is maintained because it actually governs outcomes.


Operational Evidence

Scale. Across 18 ventures operating under HavenWizards 88 Ventures OPC, the shared governance architecture produces feedback at three distinct cadences: daily operational signals within each venture, weekly cross-venture reviews at the portfolio level, and monthly strategic reviews for structural decisions. The architecture is consistent across a 66-module agricultural SaaS serving Filipino cooperatives, a fintech venture, a basketball affiliate content venture, and more than a dozen others. The domains are unrelated. The sensing architecture is the same. What this produces, operationally, is the ability to see an anomaly in one venture within days, and to act on it before the anomaly propagates structurally.

Recovery. The Australian agency network had been losing between twenty and sixty percent across multiple offices for more than a year before the structural intervention. The financial information had been available to senior leadership the whole time. What had been missing was the operational information underneath it — which specific workstreams were unprofitable, which delivery patterns were producing rework, which client engagements were structurally misaligned with the agency's delivery capacity. After I put in place a governance function that sensed and reported those operational signals independently of the offices being measured, the picture clarified within weeks. The interventions that turned those losses into profits of between forty and sixty percent were not new. They were the ones that had been needed for more than a year. The difference was that they were finally visible in time to be applied.

Prevention. For the US health and nutrition brand whose operations I governed, the intervention that turned losses of forty percent into profits of sixty percent was, structurally, a feedback architecture change. The brand had a reporting system that produced accurate monthly numbers. What it did not have was a weekly operational signal tied to the drivers of those numbers — cost per acquisition, fulfillment accuracy, refund rate by cohort. The structural change was to introduce weekly sensing at the driver level, coupled to specific response paths at the operational level. The monthly financial results followed from the weekly operational control. This is the standard pattern: financial reporting is a lagging output of operational sensing, and organizations that have strong financial reporting and weak operational sensing are flying blind with good instruments.
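
In schematic form, the change was from one lagging monthly number to a handful of weekly driver signals, each bound to a response path. The driver names are the ones named above; the thresholds and response paths are invented for illustration:

```python
# Weekly driver-level sensing. The monthly P&L is a lagging output of
# these. Thresholds and responses are illustrative, not the brand's values.
WEEKLY_DRIVERS = {
    "cost_per_acquisition":  {"breach": lambda v: v > 42.00, "response": "pause worst ad sets"},
    "fulfillment_accuracy":  {"breach": lambda v: v < 0.98,  "response": "audit pick/pack line"},
    "refund_rate_by_cohort": {"breach": lambda v: v > 0.05,  "response": "review cohort offer"},
}

def weekly_review(readings: dict[str, float]) -> list[str]:
    """Return the response paths triggered by this week's readings."""
    return [
        rule["response"]
        for name, rule in WEEKLY_DRIVERS.items()
        if name in readings and rule["breach"](readings[name])
    ]

print(weekly_review({"cost_per_acquisition": 55.0, "refund_rate_by_cohort": 0.03}))
# ['pause worst ad sets']
```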

Compounding. Over the first eighteen months of operating the venture portfolio with shared feedback architecture, the number of anomalies caught within the first week of occurrence increased materially, while the number of anomalies discovered only during monthly reviews decreased. The architecture was not finding more problems. It was finding the same problems earlier. That temporal shift — from monthly detection to weekly detection to daily detection — is where the compounding effect lives. Problems caught in the first week cost a small fraction of problems caught in the fourth. The portfolio becomes structurally cheaper to operate even as it becomes more complex, because the sensing layer is catching failures while they are still contained.


Where This Does Not Apply

Structural feedback architecture has costs. It is not the right default for every context, and recognizing where it does not apply matters for using it well.

Very small teams. Below a certain scale, feedback architecture is overhead. A team of three people working in a shared space, in continuous conversation, has a feedback loop that closes in minutes and requires no formal structure. Imposing daily operational reporting on a team that size adds friction without adding signal. The threshold at which structural feedback pays off is roughly where informal conversation can no longer cover the ground — which in my experience is somewhere around eight to twelve people per workstream.

Stable, mature operations. Not every operation needs short loops. A fulfillment operation running at steady state on well-understood processes may be served adequately by monthly reporting, because the cost of ambiguity is low and the cost of the sensing architecture is not. Feedback architecture earns its cost when the environment is changing faster than the reporting cadence can track. In stable environments, it is a tax.

Highly regulated reporting. In some regulated contexts, the reporting cadence and content are fixed by external requirements. The structural principles still apply, but they have to be layered over the regulated reporting rather than replacing it. The internal feedback architecture becomes a secondary system, separate from the regulatory submission, with different cadences and different audiences.

Organizations in active crisis. Paradoxically, during an acute crisis, elaborate feedback architecture can slow response. In a genuine emergency the relevant feedback loop is the one between the operators and the decision maker, and it should be direct and verbal. Structural feedback architecture is a tool for sustaining operational awareness over time. It is not a substitute for immediate, direct communication during incidents.


The Principle

Organizations rarely fail because they do not know what is wrong. They fail because the architecture that is supposed to tell them what is wrong does not do so on a timeline that allows for correction, or does so in abstractions that strip out the actionable detail, or does so through people whose interests are served by softening the signal.

The discipline is to design the architecture that moves signal — not the architecture that produces reports. The two are not the same. Reports are outputs. Architecture is the structural property that determines what the organization can see about itself.

The test is simple: when something goes wrong in your operation today, how long will it take before the people who can act on it know, with enough specificity to do something? If the answer is more than a week, your architecture is telling you what is comfortable, not what is true. That is not a reporting problem. It is a design problem. And it is the one that has to be solved before any of the other operational disciplines will hold.
