If governance feels broken, it is time to redesign it, not remove it.
If you’ve ever sat through a “data governance council” meeting where everyone agreed that definitions matter and then immediately went back to shipping whatever, congratulations: you’ve witnessed governance theater.
Camden Willeford’s provocation — “data governance is dead, we will now call it AI readiness” — lands because it describes what’s actually happening: humans used to patch ambiguity with tribal knowledge; AI systems can’t. They don’t “remember where the bodies are buried.” They average contradictions into confident nonsense. [R1]
But here’s the upgrade we’d make to the headline:
Governance isn’t dead. It’s moving from meetings into systems — and AI forces the move. [R1][R2]
Because in the agent era, inconsistent definitions aren’t just annoying — they’re risk multipliers.
Why AI exposes the cracks faster than BI ever did
Traditional BI tolerated ambiguity because humans provided judgment and context. AI does the opposite: it scales outputs and automates decisions, which means it scales your inconsistencies too. [R1][R3]
When the “truth” is split across Salesforce, product events, and warehouse SQL glue, your model will still answer questions — just not the questions you think you asked. [R1][R2]
And when that model can take actions (send emails, create tickets, update records), you’ve graduated from “metric disagreement” to “workflow incident.” [R2]
The helpful reframe: treat metrics like APIs (contracts, not vibes)
Willeford’s strongest idea is to treat definitions like APIs: versioned, explicit, and reviewed before you ship changes. [R1]
That’s the right direction. But “AI readiness” requires more than semantic cleanup.
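As a sketch of what “definitions like APIs” could mean in practice, here is a hypothetical metric contract: a versioned definition with a named owner, where changing the computation without bumping the major version counts as a contract break. The `MetricContract` class and the `active_customer` metric are illustrative assumptions, not a real tool’s API.

```python
from dataclasses import dataclass, field

# Hypothetical sketch: a metric definition treated as a versioned contract.
@dataclass(frozen=True)
class MetricContract:
    name: str
    version: str          # semantic version; a breaking definition change bumps major
    sql: str              # the single canonical computation
    owner: str            # named human accountable for the definition
    reviewed_by: list = field(default_factory=list)

    def is_breaking_change(self, other: "MetricContract") -> bool:
        """A changed computation under the same major version is a contract break."""
        same_major = self.version.split(".")[0] == other.version.split(".")[0]
        return same_major and self.sql != other.sql

v1 = MetricContract("active_customer", "1.0.0",
                    "SELECT id FROM customers WHERE last_event > now() - interval '30 days'",
                    owner="data-platform")
v2 = MetricContract("active_customer", "1.1.0",
                    "SELECT id FROM customers WHERE last_event > now() - interval '90 days'",
                    owner="data-platform")
assert v1.is_breaking_change(v2)  # same major, different computation: reject in review
```

The point of the sketch: a review gate can mechanically flag “same major version, different computation” instead of relying on someone noticing in a meeting.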
To make it real, you need three additional pillars that most “AI readiness” writeups skip:
- Identity + access governance for agents
- Change control for definitions / prompts / tools
- Evidence (lineage, audit trails, tests) for regulators & incident response
Let’s make those concrete.
1) Identity + access governance for agents (your new favorite “employee” is a service account)
If an agent can call internal APIs, it’s a privileged actor. If you don’t manage its identity like you manage your humans’, you’ve created a shadow superuser with none of the HR paperwork.
Minimum viable agent IAM looks like:
- Named owner + purpose for every agent (no “misc-agent-prod” mysteries)
- Least privilege roles: only the endpoints the workflow requires (read-only by default) [R4]
- Scoped credentials per agent, per environment (dev/stage/prod separation)
- Rotation + revocation as a routine, not an emergency
- Just-in-time elevation for high-impact actions (approve, then execute) [R4]
This is the boring stuff that prevents exciting headlines.
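The checklist above can be reduced to a few lines of policy logic. This is an illustrative sketch, not a real IAM product’s API: `AgentIdentity`, the scope names, and the approval set are all assumptions for the example.

```python
from dataclasses import dataclass, field

# Illustrative sketch of least-privilege checks for an agent identity.
@dataclass
class AgentIdentity:
    name: str
    owner: str                                   # named human owner, per agent
    environment: str                             # dev / stage / prod: separate credentials
    scopes: set = field(default_factory=set)     # least privilege: empty by default
    approvals: set = field(default_factory=set)  # just-in-time elevation grants

    def can(self, action: str) -> bool:
        if action in self.scopes:
            return True
        # High-impact actions outside standing scopes need an explicit, recorded approval
        return action in self.approvals

agent = AgentIdentity("billing-summarizer", owner="jane@corp", environment="prod",
                      scopes={"invoices:read"})
assert agent.can("invoices:read")
assert not agent.can("invoices:write")     # denied until a human approves
agent.approvals.add("invoices:write")      # approve, then execute
assert agent.can("invoices:write")
```

Note the default posture: an agent with no scopes can do nothing, and write access exists only as a time-boxed approval, not a standing grant.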
2) Change control for definitions, prompts, and tools (because “prompt edits” are production changes)
In the agent era, a prompt is configuration and configuration is production.
If you let prompts drift and tools expand informally, you’ll recreate the warehouse problem — except now it’s not just “SQL in the warehouse,” it’s “instructions in the agent.”
A lightweight but real change-control loop:
- Version everything: definitions, prompts, tool schemas, routing logic [R3]
- PR-based review for changes that affect customer or financial outcomes
- Test gates: “did this prompt change alter tool usage or outputs?” [R3]
- Rollback path: keep the last known-good bundle ready
- Approval gates for irreversible actions (writes, sends, deletes) [R2]
If your organization has change management for infra but not for agent behavior, you’re securing the door while leaving the window open.
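A minimal version of that loop fits in a dozen lines. The bundle contents, the content-addressed versioning, and the gate rule below are assumptions chosen for illustration; the idea is simply that any edit to prompts or tools produces a new version, and a change that widens tool access fails the gate and falls back to the last known-good bundle.

```python
import hashlib
import json

def bundle_version(bundle: dict) -> str:
    """Content-addressed version: any edit to prompts, tools, or routing changes it."""
    return hashlib.sha256(json.dumps(bundle, sort_keys=True).encode()).hexdigest()[:12]

def passes_gate(old: dict, new: dict) -> bool:
    """Test gate (illustrative rule): expanding tool access requires explicit review."""
    return set(new["tools"]) <= set(old["tools"])

known_good = {"prompt": "Summarize the ticket politely.",
              "tools": ["tickets:read"], "routing": "default"}
candidate = {"prompt": "Summarize the ticket politely.",
             "tools": ["tickets:read", "tickets:write"], "routing": "default"}

assert bundle_version(known_good) != bundle_version(candidate)  # tool edit = new version

# Rollback path: a gated failure deploys the last known-good bundle, not the candidate
deployed = candidate if passes_gate(known_good, candidate) else known_good
assert deployed == known_good
```

In a real pipeline the gate would run behavioral tests, not just a set comparison, but the shape is the same: version, gate, deploy or roll back.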
3) Evidence (lineage, audit trails, tests) — the difference between “we think” and “we can prove”
Executives don’t want dashboards. Regulators don’t want dashboards. Incident response teams definitely don’t want dashboards.
They want evidence:
- Lineage: where did the data come from and how was it transformed?
- Audit trails: what did the agent see, decide, and do?
- Tests: what controls were in place to prevent bad outcomes?
This aligns cleanly with established risk and control frameworks:
- AI governance systems and lifecycle controls (ISO/IEC 42001) [R5]
- Risk functions and continuous monitoring (NIST AI RMF) [R3]
- Baseline security controls like least privilege and logging (NIST 800-53) [R4]
When something goes sideways (and it will), you don’t want a Slack archaeology project — you want a replayable record.
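A replayable record can start as structured, append-only log lines capturing exactly those three things: what the agent saw, decided, and did. The field names and the in-memory list below are illustrative stand-ins, not a standard schema or a real logging backend.

```python
import json
from datetime import datetime, timezone

audit_log = []  # stand-in for an append-only store (e.g., a write-once log stream)

def record(agent: str, saw: str, decided: str, did: str) -> dict:
    """Append one structured audit entry; history is never rewritten."""
    entry = {"ts": datetime.now(timezone.utc).isoformat(),
             "agent": agent, "saw": saw, "decided": decided, "did": did}
    audit_log.append(json.dumps(entry))
    return entry

record("billing-summarizer", saw="invoice INV-1", decided="summarize",
       did="posted summary to ticket")

# Replay: incident response reads the record back instead of doing Slack archaeology
replayed = [json.loads(line) for line in audit_log]
assert replayed[0]["decided"] == "summarize"
```

Even this much turns an incident review from “what do we think happened?” into “here is what happened, in order, with timestamps.”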
The uncomfortable truth: semantic clarity reduces risk — it doesn’t remove guardrails
Even in an AI-ready organization with clean definitions, agents still face well-known classes of risk:
- Prompt injection and tool abuse [R2]
- Confused-deputy behavior (the model doing what the attacker wants, using your privileges) [R2]
- Overreach (agents “helpfully” doing extra things you didn’t ask for)
That’s why “AI readiness” needs both:
- Architectural consistency (definitions/metrics-as-contracts), and
- Operational safety (IAM, change control, evidence).
The practical playbook (engineers + execs in the same room)
If you need one slide’s worth of alignment:
- Choose one workflow (not “AI strategy”) and define success + failure
- Lock definitions: what is a customer, revenue, usage, healthy? [R1]
- Ship with boundaries:
- constrained tools
- least privilege
- approval for irreversible actions [R4]
- Put it under change control:
- version prompts and tools
- test and rollback [R3]
- Log like you mean it:
- lineage + audit trails + evaluations [R3][R4]
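The “ship with boundaries” bullets can be sketched as one guard function: a constrained tool list, an approval requirement for irreversible actions, and a log entry for every call. Every name here (`ALLOWED_TOOLS`, the tool strings, the return values) is an assumption made up for the example.

```python
# Hypothetical guard enforcing the playbook's boundaries.
ALLOWED_TOOLS = {"crm:read", "email:send"}  # constrained tool list
IRREVERSIBLE = {"email:send"}               # actions that need human approval
log = []                                    # every call leaves a trace

def run_tool(tool: str, approved: bool = False) -> str:
    if tool not in ALLOWED_TOOLS:
        log.append((tool, "denied: not in tool list"))
        return "denied"
    if tool in IRREVERSIBLE and not approved:
        log.append((tool, "blocked: awaiting approval"))
        return "blocked"
    log.append((tool, "executed"))
    return "executed"

assert run_tool("crm:read") == "executed"
assert run_tool("email:send") == "blocked"              # irreversible without approval
assert run_tool("email:send", approved=True) == "executed"
assert run_tool("db:drop") == "denied"                  # outside the constrained set
```

The design choice worth copying is the default: anything not explicitly allowed is denied, and anything irreversible is blocked until approved, with both outcomes logged.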
Do that, and you’ll move fast without breaking trust — which is the real competitive advantage.
References
- [R1] (Board-friendly) Data Governance is Dead — And we will now call it AI readiness — Camden Willeford
- [R2] (Security deep-dive) OWASP Top 10 for Large Language Model Applications
- [R3] (Board-friendly) NIST AI Risk Management Framework (AI RMF 1.0) Overview
- [R4] (Security deep-dive) NIST SP 800-53 Rev. 5 — Security and Privacy Controls for Information Systems and Organizations
- [R5] (Board-friendly) ISO/IEC 42001:2023 — AI management systems