Governance failures rarely appear at the point of approval.
They become visible only after the system has been deployed, integrated, and relied on.
By that point, the decision has already been operationalized. What surfaces is the cost of the decision.
THE PROBLEM
At the moment of approval, AI investments are typically evaluated on capability, cost, implementation timeline, and expected benefit. The decision is framed as a technology investment.
But the system that gets deployed does not behave like one.
It produces outputs continuously. It interacts with real users. It introduces exposure that is not observable at approval — regulatory, reputational, and operational.
The decision is made under one set of assumptions and executed under another.
The governance structure, designed for the approval moment, does not evolve with the system. That is where risk accumulates.
THE FRAMEWORK
The Governance Signal Stack
The gap between what governance structures are designed to catch and what AI systems actually expose operates across three distinct layers:
Layer 1: Information Signal
Does the board receive information structured around governance risk exposure — or around technology progress and adoption metrics? Most board AI reporting describes deployment milestones, capability updates, and adoption rates. That information tells the board whether the AI program is moving forward. It does not tell the board whether the AI capital is adequately governed.
Layer 2: Approval Architecture
Are AI investments approved through a framework that classifies them by risk type — or through standard IT governance and CapEx criteria that were not built to surface AI-specific risks? Model drift, regulatory invalidation, ethical liability, vendor dependency at scale — these are not standard implementation risks. When an AI investment travels through a standard approval process, it often arrives at approval looking clean because the criteria being applied do not ask the right questions.
Layer 3: Escalation Threshold
Are there defined triggers that escalate AI governance issues based on risk type — not just financial size? Most frameworks escalate when an investment exceeds a dollar threshold. AI governance failures do not produce a financial signal until the exposure has already compounded. A model drifting for months, a regulatory posture becoming non-compliant, an ethical liability building in a deployed system — none of these cross a financial threshold until the lawsuit arrives, or the regulator does.
An organization that scores poorly on all three layers is not ungoverned. It is mis-governed — operating governance processes designed for a different class of investment.
WHAT THE DATA SHOWS
This is not a theoretical gap. It is visible in public disclosures.
An ISS-Corporate analysis of S&P 500 proxy statements found that AI governance disclosure grew by more than 84% between 2023 and 2024. But the substance of those disclosures tells a different story: only 11% of S&P 500 companies disclosed explicit board or committee oversight of AI as of 2024.
The majority of disclosures that do exist describe director AI expertise, deployment milestones, or technology capability — not governance architecture, risk exposure frameworks, or accountability structures for AI outputs.
More companies are talking about AI in proxy statements. Very few are disclosing how they actually govern it.
That is the information signal gap made visible at scale.
A CASE THAT MADE THE GAP VISIBLE
In early 2024, a civil tribunal held Air Canada liable for information provided by its AI chatbot to a customer.
The system generated an incorrect response about the airline's bereavement fare policy. The customer relied on it. Air Canada argued that the chatbot was a separate entity and the company was not responsible for what it said.
The BC Civil Resolution Tribunal rejected that position.
The ruling made something explicit that the Governance Signal Stack is designed to surface before it reaches a courtroom:
The system was already operating. The exposure already existed. The accountability structure for its outputs had not been defined.
All three layers of the Signal Stack were present in this failure:
Information Signal: Leadership received deployment metrics for the chatbot. Not liability exposure data. Not a structured assessment of what the system could claim on the company’s behalf.
Approval Architecture: The system was deployed without a formalized accountability structure for its outputs — a risk category that standard IT governance was not designed to surface.
Escalation Threshold: The governance failure existed the entire time. It only became visible when it created a financial liability.
This is not about the size of the company or the system. It is about the absence of a defined accountability structure for outputs. That gap exists in enterprises of every size. The tribunal made it visible here first.
Different system. Same governance structure.
Same outcome.
WHY THIS PATTERN PERSISTS
This is not specific to chatbots. The pattern appears wherever:
Outputs are continuous — the system makes claims or decisions on the company’s behalf without human review of each instance.
Behavior changes over time — the system’s operating reality diverges from the assumptions present at approval.
Exposure depends on real-world interaction — the risk profile is determined by what users do with the system, not by what the system was designed to do.
Traditional governance structures were designed to approve and review discrete investments. They were not designed to continuously monitor systems whose risk profiles evolve after deployment.
The system enters production with governance aligned to the approval moment, not to the operating reality.
MONDAY QUESTIONS
Three questions to bring to your next governance meeting or capital review.
1. Does the AI reporting your leadership receives contain structured information about governance risk exposure — or only technology progress metrics? If you cannot point to a document that describes the gap between required and actual governance, you have an information signal gap.
2. Does your capital approval process assess AI-specific risk types — model drift, regulatory exposure, ethical liability, vendor dependency — or does it apply standard criteria that were not designed to surface those risks?
3. Are there formal escalation triggers in your governance framework based on risk type, not just investment size? If not, your board is not positioned to engage on AI governance until after the financial or legal threshold is crossed — and as Air Canada showed, by then the governance failure has already run its course.
WHAT’S NEXT
Next week’s episode is a full AI governance autopsy: IBM Watson Health — the most thoroughly analyzed AI investment failure on public record. What the board received. What the governance process produced. What a better architecture would have looked like.
If this analysis was useful, forward it to someone who shapes what their board receives about AI.
Governance designed for the approval moment does not survive contact with the operating reality.
Strategic Risk Lab
AI Strategy · Risk · Governance · Capital Allocation
YouTube · Newsletter · strategicrisklab.com
