The AI Agent Problem

According to the Arkose Labs 2026 Agentic AI Security Report, 97% of enterprise leaders expect a major AI agent security incident within the next 12 months.
That’s not a threat assessment. It’s a confession.
The same leaders pushing for accelerated adoption of AI agents across their organizations now acknowledge that governance has not kept pace with deployment.
The same report finds that 57% of organizations have no formal governance controls for AI agents. Only 26% are very confident they could definitively prove an AI agent was responsible for a security or fraud incident.
This is not just a security gap. It is a visibility and accountability gap.
Teams are deploying agents across workflows, tools, and data environments faster than security and IT functions can track, review, or govern them. Many of these agents operate with real permissions, real data access, and the ability to take real actions.
Yet in many organizations, they exist outside of formal controls.
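
What would bringing them inside formal controls even look like? At minimum, an explicit, time-boxed, human-owned grant per agent. The sketch below is illustrative Python (every field name, scope, and identifier in it is an assumption for the example, not any vendor's schema), but it shows how little it takes to make a grant reviewable:

    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class AgentGrant:
        # One explicit, time-boxed permission grant for one agent.
        # Illustrative only: field names and scopes are assumptions.
        agent_id: str            # stable identifier, not a model name
        owner: str               # the named human accountable for it
        scopes: list[str]        # least-privilege actions it may take
        data_access: list[str]   # systems or datasets it may read
        expires: date            # grants should lapse, not live forever
        reviewed_by: str         # who signed off, so audits have an answer

    # Example: a support triage agent with narrow, documented access
    grant = AgentGrant(
        agent_id="support-triage-01",
        owner="jane.doe@example.com",
        scopes=["tickets:read", "tickets:comment"],  # no "tickets:close"
        data_access=["helpdesk-prod"],
        expires=date(2026, 6, 30),
        reviewed_by="security-review-board",
    )

None of this is exotic. It is the same discipline we already apply to any other privileged service account.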
Would you deploy a third-party system into production without a security review? Would you grant privileged access without oversight, documentation, and controls?
Probably not.
Yet with AI agents, organizations are doing exactly that, while calling it innovation and transformation.
Here is what has changed this year.
This is no longer just a technical or cybersecurity risk. It is now a regulatory and reputational liability.
The EU AI Act has been in force since August 2024, and its major obligations for high-risk systems apply from August 2026. Organizations will be expected to demonstrate not just that policies exist, but that governance is implemented, documented, and enforceable.
That is where the main exposure lies.
Most executives say they are confident their policies protect against unauthorized AI actions. But only a minority have mature, AI-specific security and governance frameworks in place.
Confidence without controls is not governance. It’s hope.
The organizations that act now will be the ones with a defensible position when an incident occurs. Not because they achieved perfection, but because they established visibility, ownership, and board-level accountability for their AI systems.
The question is whether you are ready to explain what happened, why it happened, and what you had in place when it did.
What does your AI agent inventory look like today? Do you actually know how many are running and what they are authorized to do?
Most organizations cannot answer that question today.
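
Answering it does not require a new platform. Even a minimal, regularly audited register, sketched here in illustrative Python with assumed field names and made-up agent entries, would surface unowned and expired agents before a regulator or an incident does:

    from datetime import date

    # Illustrative inventory: in practice this would be pulled from a
    # registry or identity provider, not hard-coded. All fields are
    # assumptions for the example.
    agents = [
        {"id": "support-triage-01", "owner": "jane.doe@example.com",
         "expires": date(2026, 6, 30)},
        {"id": "report-writer-02", "owner": None,   # nobody accountable
         "expires": date(2025, 1, 1)},              # grant lapsed
    ]

    def audit(agents, today):
        # Flag the basics: agents with no accountable owner, and
        # agents still running on an expired grant.
        for a in agents:
            if a["owner"] is None:
                print(f'{a["id"]}: no accountable owner')
            if a["expires"] < today:
                print(f'{a["id"]}: grant expired {a["expires"]}, still running?')

    audit(agents, today=date(2026, 2, 1))

That is the floor, not the ceiling. But it turns "we think we know" into a list you can hand to an auditor.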