"Vibe Coding": The Real Risk Is Not the Code. It's the Lack of Control.

The UK's National Cyber Security Centre (NCSC) has issued one of the clearest signals yet to security leaders: AI-generated software, often called "vibe coding," is inevitable.

The real question is whether it becomes a force for secure software, or a mechanism to scale insecurity.

At the RSA Conference 2026, NCSC CEO Richard Horne made the position explicit:

  • AI-generated code could transform software security outcomes
  • But without safeguards, it could propagate vulnerabilities at unprecedented scale

This is not a warning against adoption, but rather a warning against uncontrolled adoption.

What "Vibe Coding" Actually Changes

Vibe coding is not just AI writing code.

It fundamentally changes the software production model: humans describe intent, AI generates the implementation, and iteration happens at machine speed.

The constraint is no longer development capacity. It is control capacity, which is exactly where most organizations are currently weakest.

Are We Scaling a Broken System?

One of the most important points in Horne's keynote was that modern software is already insecure. The NCSC explicitly points to a "fundamental issue with the quality of technology we use".

So the comparison is not AI-generated code versus perfect human code; it is AI-generated code versus today's already vulnerable software baseline.

This reframes the discussion. AI is not entering a stable system; it is entering a system with known structural weaknesses.

The Current Reality: "Intolerable Risk"

The NCSC is direct about where things stand today: AI-generated code currently presents "intolerable risks for many organisations".

Why?

Because four systemic gaps exist:

1. Unknown Provenance: Organizations do not know where code patterns come from, how models were trained, or what risks are embedded.

2. Vulnerability Replication at Scale: AI does not just introduce bugs; it amplifies patterns, including insecure ones.

3. Human Oversight Does Not Scale: AI accelerates output beyond what traditional review models can handle.

4. No Clear Trust Model: There is no widely adopted framework for answering the question: when is AI-generated code "safe enough"?

The Uncomfortable Truth

Most organizations are already vibe coding: developers are using AI tools informally, code is entering production without attribution, and security teams often lack visibility.

This is therefore not a future risk, but a current blind spot.

At RSA 2026, multiple sessions confirmed that "shadow AI" usage is widespread and often invisible to security teams. This makes it one of the primary AI governance challenges.

This leads to a simple conclusion: AI-generated code is already in your environment. You just don't know where.
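As a first step toward visibility, some teams scan their commit history for explicit AI attribution, such as `Co-authored-by` trailers left by some assistants. This is a minimal sketch, and an imperfect one: the trailer patterns below are illustrative assumptions, and many AI tools leave no trace at all, so this only surfaces explicitly labelled commits.

```python
import re

# Illustrative trailer patterns; real tools vary, and many AI-assisted
# commits carry no attribution at all.
AI_TRAILERS = [
    re.compile(r"Co-authored-by:.*copilot", re.IGNORECASE),
    re.compile(r"Generated-by:", re.IGNORECASE),
]

def flag_ai_commits(log_text: str) -> list[str]:
    """Return hashes of commits whose messages carry an AI trailer.

    Expects `git log --format='%H%x09%B%x1e'` style output: records
    separated by \\x1e, hash and message body separated by a tab.
    """
    flagged = []
    for record in log_text.split("\x1e"):
        record = record.strip()
        if not record:
            continue
        commit_hash, _, body = record.partition("\t")
        if any(p.search(body) for p in AI_TRAILERS):
            flagged.append(commit_hash)
    return flagged
```

Even a crude scan like this tends to confirm the point above: attributed AI-generated code is already present, and the unattributed portion is larger still.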

What the NCSC Is Actually Asking Leaders to Do

The NCSC is not asking organizations to slow down. It is asking them to get ahead of the control problem. Because once adoption scales, retrofitting security becomes significantly harder.

The Six Control Domains That Matter

Translating NCSC guidance into leadership action, six control domains emerge:

1. Secure-by-Default Models: AI tools must be trained to avoid generating insecure code from the outset. This shifts security into model assurance, not just code review.

2. Risk-Based Validation (Not Full Review): Manual review of all AI-generated code will not scale. Focus human attention on identity and access logic, data flows, and critical system components. This is a governance decision, not a developer preference.

3. Constrained Execution Environments: Assume generated code is untrusted by default. Apply sandboxing, least privilege and dependency controls. Security moves from code quality to runtime containment.

4. AI Securing AI: The NCSC explicitly points to AI as part of the solution: automated code review, test generation, and vulnerability discovery. You will not secure AI-generated systems without AI.

5. Platform-Level Controls: Security must extend beyond code to monitoring, isolation, and incident response. This becomes a platform governance problem.

6. Traceability and Accountability: Organizations must track which model generated what, under which conditions, and with what validation. AI-generated code requires auditability equivalent to regulated systems.
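The traceability requirement in the last domain could start as something as simple as a provenance record attached to every AI-generated change. The field names below are illustrative assumptions, not a standard; the point is that each record answers "which model, which input, which approval, when."

```python
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass(frozen=True)
class GenerationRecord:
    """Minimal provenance entry for one AI-generated change (illustrative)."""
    model: str            # model name/version that produced the code
    prompt_sha256: str    # hash of the prompt, not the prompt itself
    output_sha256: str    # hash of the generated code
    validated_by: str     # review gate that approved the change
    timestamp: str        # UTC ISO-8601

def make_record(model: str, prompt: str, output: str, validated_by: str) -> GenerationRecord:
    digest = lambda s: hashlib.sha256(s.encode()).hexdigest()
    return GenerationRecord(
        model=model,
        prompt_sha256=digest(prompt),
        output_sha256=digest(output),
        validated_by=validated_by,
        timestamp=datetime.now(timezone.utc).isoformat(),
    )

def to_audit_line(record: GenerationRecord) -> str:
    """Serialize a record for an append-only audit log."""
    return json.dumps(asdict(record), sort_keys=True)
```

Hashing the prompt and output rather than storing them keeps the log compact and avoids leaking sensitive context, while still allowing later verification that a given artifact matches its record.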

The Big Shift: Software Becomes a Supply Chain Problem

The part that most organizations are missing is that vibe coding turns software development into a high-speed supply chain: code is produced faster, dependencies multiply, and change velocity accelerates.

At the same time, market signals suggest that AI may reshape the build-vs-buy equation, with internal tools increasingly replacing SaaS in some cases.

This means competitive advantage shifts from building software to governing how software is created, validated, and controlled.

Final Insight: This Is Not an AppSec Problem

Most organizations are still asking: "How do we use AI to write code faster?"

The NCSC is asking something fundamentally different: how do we prevent AI from scaling insecurity faster than we can control it?

That is not a development question. That is not even a security question. It is a governance problem at scale.

Closing Thought

AI does not just accelerate software development. It compresses the time available to get governance right. In that window, security leaders have a choice: shape how this scales now, or inherit the consequences later.

Immediate Actions for Security Leaders

Based on NCSC guidance, prioritize these steps:

1. Assess current AI coding tool usage within your development teams

2. Establish governance frameworks for AI-generated code review

3. Pilot deterministic controls that constrain AI-generated code execution

4. Develop security review capacity that can match AI development velocity

5. Engage with vendors on their security-by-design practices

6. Monitor NCSC updates as safeguards and standards evolve

Key Takeaway

The NCSC frames vibe coding as a "narrow window of opportunity," meaning security professionals have limited time to establish safeguards before AI-generated code becomes pervasive. Acting now offers the chance to shape a future where AI doesn't just code faster, but codes more securely than the vulnerable software that preceded it.

As Horne concluded, security professionals have "both the opportunity and responsibility" to ensure that a vibe-coded future is "a net positive for security."