Toddlers can be messy
Agentic AI Is No Longer A Demo
It calls APIs. It writes to databases. It triggers workflows that affect customers, revenue, and operations.
That’s powerful. It’s also a fundamental shift in risk posture.
Because once an AI system moves from advisory to executive, it becomes part of your control plane — whether you intended that or not.
Let’s talk about the security and governance concerns you can’t hand-wave away, without turning this into either a panic session or a hype reel.
Security concerns you can’t hand-wave away
1. Agentic systems expand your attack surface overnight
The moment an AI system can execute actions, it becomes a privileged actor.
Unlike a service account, however, it:
- interprets untrusted input
- reasons probabilistically
- decides which tools to invoke
Prompt injection exploits this exact gap. What started as a chatbot curiosity now applies to systems that ingest documents, scrape web content, or process third-party data and then act on it12.
In practice, this means a single poisoned input — a document, ticket, or API response — can override safeguards and trigger unintended actions3.
The risk isn’t malice. The risk is misplaced confidence at machine speed.
2. Speed ≠ safety when systems are non-deterministic
Traditional software gives you repeatability. LLMs give you behavioral ranges.
Two identical prompts can yield different outputs. Add tool selection, retries, or chaining and reproducibility becomes conditional. This complicates testing, incident response, and auditability — especially when the system is executing real-world actions4.
For engineers, this means the same request may not always produce the same result. For executives, it breaks assumptions about predictability.
If AI is automating core processes, you need compensating controls:
- execution boundaries
- approval thresholds
- rollback and containment strategies
Logging alone is observability, not safety.
3. Data leakage remains the most common failure mode
AI doesn’t reduce data sensitivity — it increases exposure.
Without explicit data classification, redaction, and egress controls, sensitive information routinely flows into prompts, embeddings, logs, or third-party tools. In many cases, that data cannot be fully recalled or deleted56.
For engineers, this creates persistent risk artifacts. For executives, it creates regulatory and reputational exposure.
AI systems remember in ways contracts and policies can’t always unwind.
4. Identity and access models weren’t designed for agents
Most organizations are good at managing human identities and service accounts.
AI agents blur that boundary.
Who owns the agent’s credentials? How is access scoped and rotated? What happens when an agent chains actions across systems with inconsistent authorization models?
Without clear answers, organizations unintentionally create always-on, high-privilege actors operating outside traditional IAM assumptions4.
This is less an implementation bug and more a governance gap.
Governance gaps hiding in the “ship it this week” mindset
1. No ownership = no accountability
When an AI-driven workflow causes harm, “it was the model” is not an acceptable explanation.
Every production agent needs:
- a named owner
- a defined scope
- an explicit kill switch
Ownership isn’t bureaucracy — it’s how you move fast without guessing who’s responsible when things go wrong.
2. Policy debt grows faster than technical debt
AI deployed as infrastructure inherits existing obligations:
- auditability
- change management
- incident response
- compliance
Skipping governance early feels efficient. In practice, it leads to expensive retroactive reconstruction when auditors or regulators ask, “How does this system make decisions?”5
The cleanup phase is always slower than the design phase.
3. Drift turns pilots into production risks
That one “small” automation rarely stays small.
Prompts evolve. Tools are added. Data sources expand.
Without versioning, testing, and periodic risk review, safe pilots quietly become critical systems — without the controls that critical systems require7.
Drift is not failure. It’s what happens when systems succeed.
4. Regulators don’t care how fast you iterated
Regulatory obligations apply regardless of how novel or experimental a system felt at launch.
If AI systems influence customers, employees, or financial outcomes, expect requirements around explainability, oversight, and accountability — even if no new AI-specific law is invoked68.
Velocity is not a compliance strategy.
The real takeaway
AI should be treated like infrastructure.
But infrastructure is defined by controls, standards, and governance, not just throughput.
The teams that win won’t be the ones that automated first. They’ll be the ones that automated safely, with:
- constrained agents, not free-roaming ones
- least-privilege access, not convenience credentials
- human oversight where impact is irreversible
- clear ownership when things go wrong
Moving fast is table stakes. Moving fast without breaking trust is the actual competitive advantage.
And yes — you can do both.
Zero trust is the practical security baseline for agentic AI: treat every prompt, tool call, identity, and downstream system interaction as untrusted until verified in context. That means continuous authentication, least-privilege authorization, explicit policy checks per action, and full observability for every decision path9.
References
Photo by Zachary Kadolph on Unsplash
Footnotes
- Simon Willison, "Prompt Injection Explained". https://simonwillison.net/2023/May/11/prompt-injection/ ↩
- OWASP Foundation, "OWASP Top 10 for Large Language Model Applications". https://owasp.org/www-project-top-10-for-large-language-model-applications/ ↩
- UK National Cyber Security Centre (NCSC), "Prompt injection attacks may be impossible to fully mitigate". https://www.ncsc.gov.uk/news/prompt-injection-attacks ↩
- Chen et al., "On the Risks of Autonomous LLM Agents" (arXiv:2307.15043). https://arxiv.org/abs/2307.15043 ↩ ↩2
- U.S. Federal Trade Commission, "Keep Your AI Claims in Check". https://www.ftc.gov/business-guidance/blog/2023/04/keep-your-ai-claims-check ↩ ↩2
- UK Information Commissioner’s Office (ICO), "AI and Data Protection". https://ico.org.uk/for-organisations/ai/ ↩ ↩2
- The Data Consulting Company, "How Prompt Injection Attacks Actually Work". https://www.thedataconsultingcompany.com/blog/how-prompt-injection-attacks-actually-work ↩
- OECD, "AI Principles". https://www.oecd.org/ai/principles/ ↩
- NIST, "Zero Trust Architecture" (SP 800-207). https://csrc.nist.gov/pubs/sp/800/207/final ↩