Ask a simple question in a leadership meeting.
"What's our cash position?"
You will not get one answer. You will get a small democracy of answers. Someone opens the bank app and reads the balance. Someone else pulls cash from the accounting system, which includes uncleared items and excludes pending ones. A third person has a spreadsheet that adjusts for "known" obligations — a list that exists in their head and isn't exportable. Someone mentions the credit line and says that if you count availability, "we're fine."
Everyone nods as if this is normal.
It is normal. But it rests on a premise that doesn't hold: that "cash" means the same thing to everyone in the room, and that the right answer is the one from the most authoritative data source. For the last thirty years, the response to this premise was the SSOT — the Single Source of Truth. Build one authoritative system, pipe all reports from it, end the debates.
The SSOT was a coherent idea. It is also increasingly insufficient for the way modern businesses actually operate.
What the SSOT Got Right — and Where It Broke
The SSOT idea captured something important: uncontrolled data proliferation creates problems that no analytical sophistication can overcome. If different teams work from different numbers, the resulting disagreements aren't analytical — they're political. Each team's version reflects its perspective and incentives. The SSOT was the right response in an era of few data sources, slow decision cycles, and clear system hierarchies.
That era is over. A modern business generates data from dozens of systems that all have legitimate claims to authority within their domain. The CRM is authoritative for pipeline. The HRIS is authoritative for headcount. The billing system is authoritative for invoices. The bank feed is authoritative for cash movement. The accounting system is authoritative for the accounting treatment of transactions — but it is not the authoritative source for the operational events that generated those transactions.
None of these is subordinate to any other. The ERP can't tell you whether the promises made in the CRM were kept — that requires delivery data. The CRM can't tell you what the financial treatment of its contracts should be — that requires accounting judgment. The bank feed can't tell you what cash movements mean — that requires the context of committed obligations, accrued receivables, and forward projections.
The SSOT was a useful fiction when businesses could consolidate into one system. It became dysfunctional when businesses accumulated more authoritative sources than any single system could absorb. The honest description of the modern data environment is not "one source of truth" but "many authoritative records, requiring a layer of context to interpret."
The Missing Layer
A System of Record stores facts. It answers: what happened?
A customer was invoiced. A transaction was posted. A contract was signed. These are facts. They're authoritative within their domain. They can be reconciled and audited.
But leadership doesn't run a business by asking "what happened?" They run it by asking: what does what happened mean? Is this margin movement structural or temporary? Does the pipeline number reflect actual deal quality or the optimism of the last quarter-end push? Is cash tight because the business is stressed or because timing is temporarily unfavorable?
These questions require interpretation — and interpretation requires something Systems of Record were never designed to provide: a shared understanding of what the terms mean, how the metrics are computed, how the different records relate to each other, and what thresholds distinguish normal variation from meaningful signal.
This is the System of Context: the governance layer that sits above the authoritative records and converts them from raw data into decision-ready information.
Building Context, Not Just Storing Facts
The System of Context is not a single tool or platform. It's an operational layer — a set of linked governance practices that together produce the interpretive capability no individual System of Record can provide. And unlike a data architecture, it isn't installed in one project; it develops through a sequence in which each component creates the conditions for the next.
It starts with the controlled vocabulary: the shared glossary the organization has agreed on. Not terms everyone recognizes — terms everyone uses identically, with explicit rules for what counts and what doesn't. Revenue means this, not that. A customer is defined this way. Gross margin includes these costs and excludes those. The vocabulary is the prerequisite because every subsequent component depends on shared language. Defining metrics before agreeing on vocabulary is building on sand — the formula may be precise but the terms in it are ambiguous.
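A controlled vocabulary can be as lightweight as a machine-readable glossary that states, for each term, what counts, what doesn't, and who arbitrates. A minimal sketch in Python — the `Term` structure, the example definition, and the owner name are all illustrative assumptions, not a prescribed schema:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Term:
    name: str          # the word the organization agrees to use
    definition: str    # what it means, in one sentence
    includes: tuple    # what explicitly counts
    excludes: tuple    # what explicitly does not count
    owner: str         # who arbitrates disputes about this term

# Hypothetical glossary entry; the accounting specifics are illustrative.
GLOSSARY = {
    "revenue": Term(
        name="revenue",
        definition="Recognized revenue per the accounting policy.",
        includes=("recognized subscription fees", "recognized services"),
        excludes=("bookings", "billings", "deferred revenue"),
        owner="Controller",
    ),
}

def lookup(term: str) -> Term:
    """Fail loudly on undefined terms instead of letting ambiguity through."""
    if term not in GLOSSARY:
        raise KeyError(f"'{term}' is not in the controlled vocabulary")
    return GLOSSARY[term]
```

The point of the explicit `excludes` list is that most definitional disputes are about what a term does *not* cover; writing the exclusions down is what ends the debate.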
With the vocabulary established, the metrics dictionary becomes possible: a specification of every KPI the organization tracks, with formula, source, owner, freshness standard, and decision link. The dictionary is what makes metrics reproducible — the same metric, computed by different people in different systems, yields the same result because they're all following the same specification. Without the vocabulary, the dictionary would inherit the ambiguity of undefined terms. With it, the dictionary becomes a genuine contract between the organization and its own data.
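The five fields the text names — formula, source, owner, freshness standard, decision link — map directly onto a structured record. A hedged sketch, with hypothetical field values; the formula string is written in governed vocabulary terms so it inherits the glossary's definitions:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class MetricSpec:
    name: str
    formula: str        # expressed in controlled-vocabulary terms
    source: str         # the authoritative system it is computed from
    owner: str
    freshness: str      # how stale it may be before it is flagged
    decision_link: str  # the decision this metric informs

# Illustrative entry; the threshold and cadence are invented examples.
METRICS = {
    "gross_margin": MetricSpec(
        name="gross_margin",
        formula="(revenue - cost_of_goods_sold) / revenue",
        source="ERP",
        owner="FP&A",
        freshness="monthly, by business day 5",
        decision_link="pricing review when margin falls below target",
    ),
}
```

Because the spec is data rather than prose, two people in two systems can both compute from it — which is what makes the metric reproducible.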
The dictionary enables lineage: documentation of where each piece of data originated, how it was transformed, and how it arrived at the dashboard or report where it appears. Lineage makes trust verifiable — when a number looks right, you can confirm it came from where it should have. When a number looks wrong, you can trace backward to where the error entered. Lineage depends on the dictionary because tracing data movement requires knowing what the data represents at each stage of its journey.
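Lineage can be recorded as an ordered chain of hops: each hop names the stage the value passed through and the transformation applied there. A minimal sketch — the stage names and transforms below are hypothetical, and real deployments typically use a dedicated lineage standard rather than a hand-rolled dict:

```python
# Each metric's lineage: ordered from originating record to final display.
LINEAGE = {
    "dashboard.gross_margin": [
        {"stage": "ERP.gl_entries", "transform": "raw postings"},
        {"stage": "warehouse.fct_margin", "transform": "aggregate by month"},
        {"stage": "dashboard.gross_margin", "transform": "display, 1 decimal"},
    ],
}

def trace(metric: str) -> list[str]:
    """Walk backward from the dashboard to the originating record,
    which is the direction you debug in when a number looks wrong."""
    return [hop["stage"] for hop in reversed(LINEAGE[metric])]
```

The backward walk is the operational payoff the text describes: when a number looks wrong, `trace` tells you exactly which stages to inspect, in the order you should inspect them.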
And lineage makes possible the decision framework: the explicit specification of what the organization does when its key metrics cross defined thresholds. This is the component most commonly absent — the one that converts data from something the organization reports on into something the organization acts on. The decision framework sits at the top of the stack because it depends on all three layers beneath it: it needs trustworthy signals (lineage), computed consistently (dictionary), using shared definitions (vocabulary).
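A decision framework in this sense is a small set of explicit rules: metric, threshold, and the action the organization has pre-committed to. A sketch with invented thresholds and actions — the substance of a real framework is the negotiation that produces these numbers, not the code:

```python
# Pre-agreed threshold rules; metric names, thresholds, and actions
# are hypothetical examples.
RULES = [
    {"metric": "cash_runway_months", "below": 6.0,
     "action": "escalate to CFO and pause discretionary spend"},
    {"metric": "gross_margin", "below": 0.55,
     "action": "trigger pricing review"},
]

def triggered_actions(readings: dict[str, float]) -> list[str]:
    """Return the pre-agreed actions for every breached threshold.
    Missing readings never trigger: absence of data is a separate problem."""
    return [r["action"] for r in RULES
            if readings.get(r["metric"], float("inf")) < r["below"]]
```

This is the component that converts reporting into action: the response to a breached threshold is decided before the breach, not argued about after it.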
Together, these four components provide what the SSOT was always supposed to deliver but structurally couldn't: not one place where data lives, but one language in which data is interpreted and one set of rules governing how it produces decisions.
Why This Matters Now More Than It Used To
The limitations of the SSOT model would be manageable if the stakes were static. They aren't.
AI has changed what data governance failures cost. An AI system operating within a governed context layer — where terms are defined, definitions are machine-readable, and the metrics dictionary is available as reference — produces outputs that are not just factually accurate but contextually appropriate. It knows what "revenue" means in this business. It knows the threshold at which cash position requires escalation.
Without the context layer, the same AI system produces outputs that are mathematically coherent and contextually meaningless — because it processed "revenue" from two systems that define it differently and produced an analysis of a composite entity that doesn't correspond to any real business quantity. The output will look professional. It will be wrong in a way that's harder to detect than the manual version.
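The failure mode is easy to make concrete. A toy sketch with invented figures: one system reports "revenue" as signed bookings, another as recognized revenue, and an ungoverned pipeline blends them into a composite that corresponds to nothing:

```python
# Two systems, two legitimate but different definitions of "revenue".
# The figures are invented for illustration.
crm_revenue_bookings = 1_200_000    # signed contract value this quarter
erp_revenue_recognized = 850_000    # recognized per accounting policy

# An ungoverned pipeline that averages them produces a number that is
# arithmetically valid and semantically meaningless: it is neither
# bookings nor recognized revenue, and no decision maps to it.
composite = (crm_revenue_bookings + erp_revenue_recognized) / 2
```

Nothing in the arithmetic flags the error — which is exactly why the output "looks professional" and is harder to catch than a manual mistake.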
The gap between AI applied to governed data and AI applied to ungoverned data is not a technology gap. It's a governance gap. And the governance is the System of Context — the same layer that makes the human executive's decisions trustworthy also makes the AI's outputs trustworthy. The investment is the same. The leverage has multiplied.
The Practical Path
Building a System of Context doesn't require boiling the ocean. It requires a sequence that matches the organization's current constraint.
Start with the vocabulary. Pick the ten terms that cause the most confusion — the ones that produce different numbers when different people compute them. Define each one. Write it down. Assign an owner. Enforce it. This step produces immediate relief: the definition meetings that consumed hours become unnecessary because the terms are settled.
Install record integrity. Establish which system is authoritative for each domain of data — one answer per domain, documented and shared. This doesn't require a new platform. It requires a decision about what's already there: the CRM is the system of record for pipeline. The ERP is the system of record for financial transactions. The bank feed is the system of record for cash movement. Write it down. Publish it. Stop the debates.
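The "one answer per domain, documented and shared" rule amounts to a published mapping. A minimal sketch, with the same assignments the text uses as examples; the domain and system names are illustrative:

```python
# One authoritative system per domain, written down and enforced.
SYSTEM_OF_RECORD = {
    "pipeline": "CRM",
    "financial_transactions": "ERP",
    "cash_movement": "bank_feed",
    "headcount": "HRIS",
    "invoices": "billing",
}

def authority_for(domain: str) -> str:
    """One answer per domain; an undeclared domain is a governance gap,
    not a question to be re-litigated in each meeting."""
    if domain not in SYSTEM_OF_RECORD:
        raise KeyError(f"No system of record declared for '{domain}'")
    return SYSTEM_OF_RECORD[domain]
```

The value is less in the lookup than in the publication: once the mapping exists, "which number is right?" becomes "which domain is this?" — a question with a documented answer.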
Build the dictionary. Starting from the governed vocabulary and the established systems of record, specify the critical metrics: formula, source, owner, freshness, decision link. Start with the metrics that drive the most consequential decisions — the ones that appear in leadership reviews and inform pricing, hiring, and investment choices.
Expand in steady sprints. Each sprint adds coverage — more terms, more metrics, more governed connections between systems. The sequence matters because the temptation in every governance initiative is comprehensive coverage before foundational coverage. Comprehensive governance of poorly defined records produces a larger version of the original problem. Foundational governance of the critical few produces immediate improvement in decision quality that justifies continued investment.
The goal is not one database. It is a decision infrastructure that makes the business legible — to the people running it today, to the investors who may evaluate it tomorrow, and to the AI systems that are increasingly augmenting the decisions made by both.
The End of One True Number explores the evolution from SSOT to layered data governance. Related: The Metrics Contract, The Dashboard That Lies, The Integration Tax.