
Knowledge as Infrastructure: A KM System for a 200-Person Tech Services Firm

By Diosh Lequiron, Rico Inc, April 2026
Key Outcomes

Knowledge capture integrated into workflow via definition-of-done gate

Onboarding time reduced

Institutional memory survived personnel transitions

Knowledge capture moved from an optional documentation burden to an embedded part of the workflow's definition of done. Onboarding time for new employees decreased because the knowledge base matched the current state of the systems it described. Institutional memory survived personnel transitions that would previously have cost the company months of retraining and rediscovery.

The starting state: Rico Inc, a technology services company with more than two hundred employees, whose critical operational knowledge lived in people's heads rather than in any system the organization could operate against. Prior attempts to fix this — wikis, documentation drives, knowledge-sharing sessions — had each failed for the same underlying reason.

The challenge: redesign knowledge management without adding documentation as overhead on top of already-pressured engineering work, and without depending on individual discipline to hold the system together when deadlines compressed.


Starting Conditions

Rico Inc operated as a technology services company delivering client engagements across multiple practice areas. The company had grown past the size at which tribal knowledge transfer through proximity could work, but had not replaced it with any structure that was load-bearing. The gap was producing a specific and recurring operational cost: every time a senior engineer left the company, months of institutional context left with them, and the projects they had been responsible for experienced a quality drop that persisted until new engineers had rebuilt enough context to operate independently.

The organization had already tried the obvious fixes. A company-wide wiki. A documentation drive. Periodic knowledge-sharing sessions. None stuck. The wiki filled up quickly and then stopped growing. The documentation drive produced a burst of content that was out of date within months. The sessions were well-attended but the content did not persist into operational use. The pattern across all three failed attempts was identical: the intervention produced a one-time artifact, the artifact decayed, and the organization was left with the same gap plus cynicism that any future attempt would produce the same outcome.

The team's own diagnosis was that they needed better tools. This is the most common misdiagnosis in knowledge management engagements. The proposed next step before I arrived was to evaluate a different wiki platform. A better wiki would have reproduced the same failure pattern in a newer interface.

Operational constraint. Engineers were already fully loaded on client-billable work. Any approach that required engineers to set aside dedicated time for documentation outside their normal workflow would have been the same intervention that had already failed. The engagement had to operate under the constraint that no new time could be allocated for documentation as a separate activity.

The interview sweep. I spent two weeks interviewing teams across the organization before proposing any design. The interviews were not about tool preferences. They were about understanding what happened at the moments when knowledge needed to be captured and why it did not get captured. Four patterns emerged consistently. Engineers knew documentation was important but were never given time for it. Existing documentation was outdated within months of creation. There was no feedback loop, so nobody knew whether their documentation was being used. Knowledge-sharing sessions were attended but the content was not retained into practice. These four patterns pointed at an incentive architecture problem, not a tooling problem.


Structural Diagnosis

Three structural problems explained why every prior attempt had failed and why a better version of any of them would have failed again.

The incentive misalignment. The organization incentivized output — shipping features, delivering client engagements, meeting deadlines — and did not incentivize knowledge capture. Documentation was treated as overhead, which meant it was the first thing to be cut when the work got pressured, which was most of the time. Engineers were rational in deprioritizing it. Under the reward system as it existed, an engineer who spent an afternoon on documentation was trading measurable output for unmeasured contribution, and the performance review cycle had nothing to say in defense of the unmeasured contribution. No wiki platform will fix this. A better wiki in the same incentive structure is a better place to not write documentation.

The after-the-fact trap. Every prior intervention assumed documentation happened after the work was done. Write up the architecture decision once the system is stable. Document the incident after it is resolved. Record the client context after the engagement closes. This sequencing guarantees failure for two reasons. First, the engineer has already moved on — the work is already underway on the next project by the time the previous project's documentation is due, and context-switching back is costly enough that it does not happen. Second, the knowledge is most valuable to capture at the moment it is being produced, because that is when the rationale is still accessible. Capturing it later produces a reconstruction of the rationale, not the rationale itself. Reconstructions are lossy. The after-the-fact trap is not a discipline problem. It is a sequencing problem, and no amount of reminding will unscramble the sequence.

The missing feedback loop. Neither the wiki nor the documentation drive had any mechanism for telling the authors whether their documents were being used. An engineer who wrote a careful architecture decision record had no way to know whether anyone read it, whether anyone applied it, or whether it was silently bypassed in favor of rediscovery. In the absence of a feedback loop, documentation feels like shouting into a void. Even a disciplined engineer loses motivation to document when the output appears to vanish. The failure here is not lack of discipline. It is lack of observability. The system that produces the documentation had no instrumentation on whether the documentation was alive.


The Intervention

The redesign was built around three principles, applied in a dependency sequence. Each principle addressed one of the three structural failures, and each had to be in place before the next one could function.

Phase 1: Capture at the Point of Work

What was built: Knowledge capture was moved from a post-hoc activity to a step inside the workflows that were already happening. Code reviews were modified to require architecture decision records for any change that crossed a threshold of architectural significance. Incident responses were modified to produce post-mortem output as a closing step of the incident, not as a separate activity scheduled later. Client handoffs were modified to require a knowledge transfer checklist before the engagement could be marked closed.

Why this came first: The other two principles are downstream of this one. Living documents can only exist if there are documents to keep alive. A usage feedback loop only measures output if there is output. The capture mechanism is the load-bearing wall — until it exists, the other principles are decorations on an empty building.

The mechanism: The workflow modifications are the mechanism, not the guidance. Telling engineers to document their architecture decisions is guidance and had already been tried. Making the code review process require an architecture decision record before the review can be approved is a structural gate. The engineer does not have to remember to document. The workflow does not proceed until the document exists. This is the same principle I apply to governance gates in every other engagement: structural gates survive deadline pressure, and guidance does not.
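The structural gate described above can be sketched in code. This is a minimal illustration, not Rico Inc's actual tooling: the directory names, ADR location, and significance rule are all assumptions chosen for the example.

```python
# Sketch of a pre-merge review gate (hypothetical paths and thresholds):
# block approval when a change crosses an architectural-significance
# threshold without an accompanying architecture decision record (ADR).
from pathlib import PurePosixPath

SIGNIFICANT_DIRS = {"services", "infra", "schemas"}  # assumed significance markers
ADR_DIR = "docs/adr"                                 # assumed ADR location

def touches_significant_code(changed_files: list[str]) -> bool:
    """Treat a change as architecturally significant if it touches a marked directory."""
    return any(PurePosixPath(f).parts[0] in SIGNIFICANT_DIRS for f in changed_files)

def includes_adr(changed_files: list[str]) -> bool:
    """The gate is satisfied only if the same change adds or updates an ADR."""
    return any(f.startswith(ADR_DIR + "/") for f in changed_files)

def review_gate(changed_files: list[str]) -> tuple[bool, str]:
    """Return (approval_allowed, reason); the review workflow calls this before approval."""
    if touches_significant_code(changed_files) and not includes_adr(changed_files):
        return False, "architecturally significant change is missing an ADR"
    return True, "ok"
```

The point of the sketch is that the check runs inside the workflow: the reviewer never has to remember the rule, because approval is mechanically impossible until the ADR is present.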

First-phase outcome: Documentation started accumulating inside the normal flow of work rather than as a separate activity that had to be scheduled. The volume was not the point — the coherence was. Every documented artifact now corresponded to a real decision made at a real moment, with the rationale captured while it was still accessible.

Phase 2: Living Documents Over Static Docs

What was built: Every document now has an owner and a review cadence. Quarterly reviews verify that documentation matches current reality. Outdated documents are flagged rather than silently ignored — a stale document is now a visible problem with a named owner, not a latent liability that nobody owns.

Why this phase depended on Phase 1: Review cadences only work when the underlying documents are substantive enough to be worth reviewing. Quarterly review of the kind of shallow wiki pages the previous attempts had produced would have been a waste of the reviewer's time. With capture-at-the-point-of-work producing real architecture decision records, incident post-mortems, and client handoff artifacts, quarterly review had something to evaluate against.

The mechanism: The ownership assignment is the mechanism. Every document has a named owner, and the owner's quarterly review is a tracked responsibility. When the reviewer encounters a document whose content no longer matches the system it describes, the response is not silent decay. The response is either an update, a hand-off of ownership, or an explicit decommission. Documents move through state transitions instead of rotting quietly. This is the difference between a knowledge base that is alive and a knowledge base that is a museum of previous beliefs.
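The ownership-and-cadence model can be sketched as a periodic sweep. The field names, 90-day cadence, and state labels here are illustrative assumptions, not the firm's actual schema:

```python
# Sketch of the ownership-and-cadence model (hypothetical field names):
# every document carries a named owner and a last-review date, and a
# sweep flags anything past its quarterly cadence instead of letting
# it rot silently.
from dataclasses import dataclass
from datetime import date, timedelta

REVIEW_CADENCE = timedelta(days=90)  # quarterly, per the review policy

@dataclass
class Doc:
    path: str
    owner: str
    last_reviewed: date
    state: str = "active"  # active | flagged | decommissioned

def sweep(docs: list[Doc], today: date) -> list[Doc]:
    """Flag overdue documents; each flagged doc's owner must then
    update it, hand off ownership, or explicitly decommission it."""
    overdue = []
    for d in docs:
        if d.state == "active" and today - d.last_reviewed > REVIEW_CADENCE:
            d.state = "flagged"
            overdue.append(d)
    return overdue
```

A stale document thus becomes a visible state ("flagged") attached to a named person, which is exactly the state-transition behavior the text describes.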

Second-phase outcome: Documentation stopped going stale as a default behavior. The quarterly review pressure kept the documents accurate to the systems they described, which is the property that determines whether a knowledge base is operationally useful or operationally dangerous.

Tradeoff introduced: Quarterly review is ongoing work. The organization is trading the one-time cost of a documentation drive — which had already been paid several times without producing value — for an ongoing governance cost that has to be sustained. This was a deliberate trade. An unsustainable zero-overhead model had been tried and had failed. A sustainable low-overhead model was the only configuration that could work.

Phase 3: The Usage Feedback Loop

What was built: Instrumentation on document access. Which documents are being read, by whom, and when. Which documents are frequently accessed but rarely updated. Which documents have not been accessed at all since creation. The data does not judge the documents — it informs the people who own them about whether their work is being used.

Why this phase depended on Phases 1-2: Access instrumentation on the old-model wiki would have produced depressing data and nothing actionable. With documents captured at the point of work and maintained through quarterly review, the access data becomes signal: a frequently-accessed document is load-bearing infrastructure, a document that has not been accessed in a year is either latent or dead, and the owner now has a decision to make.

The mechanism: The feedback loop converts documentation from an act of faith into an observed behavior. An engineer who writes an architecture decision record can now see that the record is being consulted during later decisions. The reward for writing the document is real, visible, and returns to the author. This is the element the prior attempts had been missing — not the writing but the observation that the writing was being used. Writing for an invisible audience fails reliably. Writing for a visible audience, even a small one, is self-sustaining.
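The classification the feedback loop performs can be sketched as a small function. The thresholds below (10 reads per quarter, 180 days of staleness) are invented for illustration; the real values would be tuned to the organization:

```python
# Sketch of the feedback-loop classifier (hypothetical thresholds):
# combine access counts with last-update age to sort documents into
# the categories the text describes — load-bearing-and-stale (high
# risk), never-read (decommission candidate), or healthy.
from datetime import date

HOT_READS = 10    # assumed threshold for "frequently accessed"
STALE_DAYS = 180  # assumed threshold for "rarely updated"

def classify(reads_last_quarter: int, last_updated: date, today: date) -> str:
    age_days = (today - last_updated).days
    if reads_last_quarter == 0:
        return "decommission-candidate"  # never consulted: latent or dead
    if reads_last_quarter >= HOT_READS and age_days > STALE_DAYS:
        return "high-risk"               # load-bearing and stale
    return "healthy"
```

The output goes to the document's owner, not to a dashboard for management; the data informs the person responsible rather than judging the document.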

Third-phase outcome: Documents that were frequently accessed but rarely updated were flagged as high-risk — load-bearing and stale is the worst possible combination, and surfacing it lets the owner act on it. Documents that were never accessed were flagged as candidates for decommission, freeing the review cadence to focus on the documents that mattered.

Phase 4: Governance — Definition of Done

What was built: Knowledge capture became a non-negotiable component of the definition of done. A feature is not complete until its architecture decisions are documented. An incident is not resolved until its post-mortem is published. A client engagement is not closed until its knowledge transfer is executed. This is not additional documentation. It is the same documentation that Phase 1 was already producing, now formally protected from being dropped under deadline pressure.

Why this phase came last: Declaring knowledge capture part of the definition of done before the capture mechanism existed would have been a hollow edict. Declaring it before the living-document system existed would have required maintenance discipline the organization did not yet have. Declaring it before the feedback loop existed would have been writing into a void. Only once all three prior phases were live could the governance layer protect them from being bypassed.

The mechanism: The definition of done is enforced structurally by the same workflow gates that Phase 1 introduced. The feature ticket cannot be closed without the architecture decision record. The incident cannot be marked resolved without the post-mortem. The client engagement cannot be archived without the knowledge transfer checklist. The workflow gate is the mechanism, not the policy. Policies are negotiated around at deadline. Gates are not.
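The definition-of-done gate reduces to a small lookup that the workflow consults before any close transition. The type names and artifact labels below are illustrative assumptions:

```python
# Sketch of the definition-of-done gate (hypothetical type and artifact
# names): a work item cannot transition to "done" until the knowledge
# artifact required for its type is attached. Structural, not advisory.
REQUIRED_ARTIFACT = {  # assumed mapping from work-item type to artifact
    "feature": "architecture_decision_record",
    "incident": "post_mortem",
    "engagement": "knowledge_transfer_checklist",
}

def can_close(item_type: str, attached_artifacts: set[str]) -> bool:
    """True only when the type's required knowledge artifact is attached;
    item types with no requirement close freely."""
    required = REQUIRED_ARTIFACT.get(item_type)
    return required is None or required in attached_artifacts
```

Because the check sits in the close transition itself, there is no one to negotiate with at deadline: the ticket simply does not close.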

Constraint and tradeoff: The definition-of-done gate creates friction at the exact moment engineers most want to finish a piece of work. This friction is intentional. The alternative — allowing the gate to be bypassed when deadlines are tight — is the same failure mode every prior attempt had collapsed into. The friction has to be accepted as the cost of the knowledge infrastructure holding together. If the gate is ever softened under delivery pressure, the system begins to erode in the same direction it eroded the last three times.


Results

Knowledge capture integrated into workflow. Documentation stopped being a separate activity that had to be scheduled and started being a step inside the activities that were already happening. Architecture decision records, incident post-mortems, and client handoff checklists became routine artifacts produced as part of normal delivery, not as overhead on top of it.

Onboarding time reduced. New employees joined the company and reached operational independence faster than they had under the previous regime. The mechanism was straightforward: the knowledge base now matched the current state of the systems it described, which meant that a new engineer could learn the system from the documentation rather than from informal mentoring by the one senior engineer who remembered why things worked the way they did. The previous regime had forced onboarding to depend on tribal knowledge transfer, which scales linearly with available senior attention. The new regime let onboarding scale with the documentation, which is a more favorable curve.

Institutional memory survived personnel transitions. When a senior engineer left the company, the context they had been holding in their head was already in the system. Their replacement — whether internal or external — could reconstruct the operational picture from the architecture decision records, the post-mortem history, and the client knowledge transfer artifacts. The months of rediscovery that had previously followed any senior departure were no longer the default outcome.

Governance protected the new equilibrium. The definition-of-done gate held through several cycles of deadline pressure. Without the gate, the system would have reverted toward the previous failure mode the first time a project shipped under compressed time. With the gate, the knowledge capture requirement was non-negotiable, which is the property that distinguishes sustainable governance from optimistic policy.

Counterfactual. Without the redesign, Rico Inc would have continued losing operational knowledge at every personnel transition. The accumulating cost was visible in the pattern of projects experiencing quality drops after senior departures — a pattern that would have continued indefinitely and would have worsened as the company grew, because each additional engineer expands the surface area over which tribal knowledge has to flow. At some point in the continued growth curve, the tribal transfer system would have broken entirely, and the company would have been forced into the same redesign under much worse conditions — after a high-profile failure, under regulatory or client pressure, with the engineering team already demoralized by the quality decline. Doing the redesign proactively saved the cost of doing it reactively.


The Diagnostic Pattern

Rico Inc did not have a tools problem. They had tried tools. More tools would not have fixed it. They did not have a discipline problem. The engineers were disciplined about the work they were measured on, which was output, not knowledge capture. The organization had an incentive architecture problem that manifested as a documentation problem, and every previous intervention had attacked the symptom without touching the structure.

The transferable insight is that knowledge management is infrastructure, not content. Infrastructure is built into the workflow and protected by structural gates. Content is produced on the side and decays into irrelevance. The distinction matters because it predicts which interventions will work. Any intervention that treats documentation as a separate activity to be scheduled alongside real work will collapse under deadline pressure. Any intervention that embeds knowledge capture into the workflow, with ownership, review cadence, usage observability, and a definition-of-done gate that cannot be bypassed, will compound instead of decay.

The diagnostic pattern transfers to any organization that has lost critical operational context to personnel transitions. The question to ask is not "what documentation tool should we use?" It is: at what moments is knowledge being produced, what is preventing it from being captured at those moments, and what would a structural gate in the workflow look like that makes capture non-negotiable without adding hours to the engineer's day? Once those three questions are answered, the intervention designs itself. Until they are, each new tool will produce another round of the same decay the previous tools produced.
