AI has clearly entered the boardroom.

Directors are asking better questions. Regulators are introducing new obligations. Audit and risk committees increasingly want to know where AI is used, who is accountable, and what could go wrong.

That is progress.

But in many organizations, the visible maturity is still misleading.

The board deck looks polished. The principles sound sensible. The policy exists. The steering committee has been announced.

And underneath that, the operating model is still missing.

That gap matters more than most companies think.

The New Pressure Is Real

Over the past two years, AI governance has moved from an optional innovation topic to a leadership issue with real oversight implications.

That shift is being driven by several forces at once:

  • boards are under pressure to demonstrate oversight of AI-related risk and opportunity
  • regulations and frameworks such as the EU AI Act, the NIST AI Risk Management Framework, and emerging sector guidance are making governance expectations more explicit
  • AI adoption is moving faster than most internal control models were designed to handle
  • executives are realizing that informal coordination is ineffective once AI use scales across functions

This is why the conversation has changed. A year ago, many leadership discussions still focused on whether AI governance was necessary. Now the real question is different: what does a workable AI governance operating model actually look like?

Governance Theater Is Still Common

A lot of AI governance activity still falls into what I would call governance theater. It produces documents that are easy to present upward, but hard to operationalize downward.

Typically, it looks like this:

  • a set of high-level principles
  • a policy that says teams must use AI responsibly
  • a cross-functional committee with unclear authority
  • scattered inventories or risk registers
  • training that raises awareness, but does not change decision rights or workflow
  • no standardized tools across the enterprise
  • no controls to mitigate risk or enforce compliance with policies

None of these things are useless. But on their own, none of them make up an effective operating model.

An AI governance operating model answers practical questions such as:

  • who is allowed to approve which use cases?
  • what standards must every business team meet before deployment?
  • when must legal, security, risk, and procurement be involved?
  • how are exceptions handled?
  • how do monitoring, incident escalation, and review actually work?
  • what does the board see, how often, and in what form?

If these questions can't be answered, the organization does not yet have operational AI governance. It has AI governance intent.

Boards Are Right to Ask Harder Questions

This is where I think many organizations are underestimating the board's role.

Good boards are not asking to manage AI day to day. They are asking whether management has implemented systems to manage AI adoption.

That is a different standard from approving a policy.

A board should reasonably want to know:

  • where is AI significantly used across the enterprise
  • which executives own the major risk and implementation decisions
  • whether management has defined escalation paths for higher-risk or more sensitive use cases
  • how AI literacy is being deployed beyond a small expert group
  • whether oversight is centralized, federated, or improvised by business units
  • what evidence shows the governance model is working in practice

These are not theoretical questions. They are operating model questions.

The Missing Piece Is Decision Rights

When AI governance stalls, one of the biggest missing pieces is decision rights. There is general consensus that AI should be governed. But few organizations are clear on who gets to decide what.

For example:

  • Can a business unit adopt a low-risk generative AI tool on its own?
  • Who decides whether a use case is high risk, sensitive, or customer-facing?
  • Who signs off on external models, vendors, or data-sharing arrangements?
  • Who can accept residual risk?
  • Who has authority to stop or suspend a system already in use?

Without explicit decision rights, organizations default to one of two bad models.

Either everything becomes centralized and slow, or everything becomes federated and inconsistent. Neither scales well.

The better answer for most enterprises is usually federated governance with clear minimum standards, defined escalation thresholds, and real accountability at both the center and the edge.

That means the center defines policy, guardrails, security, and oversight expectations. Business and functional teams operate within that structure, but they do not invent their own governance.
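
To make this concrete, here is a minimal sketch of what explicit decision rights can look like once they are written down as structure rather than slideware. Every tier name, role, and routing rule below is a hypothetical placeholder chosen for illustration, not a recommended design; the point is only that each question from the list above gets exactly one unambiguous answer.

  # Illustrative only: tiers, roles, and routing are placeholder assumptions.
  from dataclasses import dataclass
  from enum import Enum


  class RiskTier(Enum):
      LOW = "low"
      ELEVATED = "elevated"
      HIGH = "high"


  @dataclass(frozen=True)
  class DecisionRights:
      approver: str                         # role allowed to approve adoption
      mandatory_reviewers: tuple[str, ...]  # functions that must review pre-deployment
      accepts_residual_risk: str            # role allowed to accept remaining risk
      can_suspend: str                      # role with authority to stop a live system


  # The center defines these guardrails once; business units operate inside them.
  DECISION_MATRIX = {
      RiskTier.LOW: DecisionRights(
          approver="business_unit_lead",
          mandatory_reviewers=(),
          accepts_residual_risk="business_unit_lead",
          can_suspend="business_unit_lead",
      ),
      RiskTier.ELEVATED: DecisionRights(
          approver="function_executive",
          mandatory_reviewers=("security", "legal"),
          accepts_residual_risk="chief_risk_officer",
          can_suspend="chief_risk_officer",
      ),
      RiskTier.HIGH: DecisionRights(
          approver="ai_governance_committee",
          mandatory_reviewers=("security", "legal", "risk", "procurement"),
          accepts_residual_risk="ai_governance_committee",
          can_suspend="chief_risk_officer",
      ),
  }


  def required_signoffs(tier: RiskTier) -> list[str]:
      """Every role that must act before a use case at this tier goes live."""
      rights = DECISION_MATRIX[tier]
      return [rights.approver, *rights.mandatory_reviewers]


  print(required_signoffs(RiskTier.ELEVATED))
  # -> ['function_executive', 'security', 'legal']

If an organization cannot fill in a table like this, its decision rights are not yet explicit, whatever the policy says.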

AI Literacy Is Not a Side Program

Another weakness I see often is the treatment of AI literacy as an awareness campaign.

A few workshops happen. The board gets a briefing. Some executives attend a session on trends. Then everyone moves on. That is not enough.

AI literacy only becomes valuable when it is connected to role-specific decisions:

  • the board needs enough literacy to challenge management on oversight, accountability, and risk posture
  • executives need enough literacy to understand where strategic ambition exceeds operating discipline
  • control functions need enough literacy to review AI use without every issue causing frustrations and delays
  • product, data, engineering, HR, procurement, and business teams need enough literacy to recognize when an AI decision is no longer routine and requires escalation

In other words, literacy is part of the operating model. It is how governance becomes executable.

What An Effective AI Governance Operating Model Usually Includes

The exact design will vary by sector and risk profile, but most serious AI governance operating models include six elements.

  1. A clear governance structure, not just committees, but defined roles across the first line, second line, and leadership layer.

  2. Decision rights and approval thresholds, essentially a practical way to distinguish routine use from higher-risk use, and a clear path for review and escalation.

  3. A standardized intake and classification process, so the organization can evaluate AI use cases consistently instead of rediscovering the same questions every time (a brief sketch follows this list).

  4. Embedded controls in existing workflows: procurement, model development, the SDLC, deployment, vendor review, change management, and incident response all need AI-specific integration.

  5. Role-based literacy, with different levels of understanding for different roles, tied to the decisions staff actually make.

  6. Board-facing reporting that reflects reality, not just vanity metrics like training completion rates. The board needs visibility into material use cases, concentration of risk, policy exceptions, incidents, and unresolved management choices.
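
To give element 3 some texture, here is a deliberately oversimplified intake classifier in the same spirit. The intake questions and routing rules are my own illustrative assumptions, not a regulatory mapping; a real process would be calibrated against the EU AI Act risk categories or the organization's own taxonomy. The value is the consistency: every use case answers the same questions once, and the escalation path falls out of the answers.

  from dataclasses import dataclass


  @dataclass
  class UseCaseIntake:
      """Answers a submitting team provides once, on a standard form."""
      name: str
      customer_facing: bool
      uses_sensitive_data: bool
      makes_automated_decisions: bool
      external_model_or_vendor: bool


  def classify(uc: UseCaseIntake) -> str:
      """Map intake answers to a risk tier with simple, auditable rules."""
      if uc.makes_automated_decisions and (uc.customer_facing or uc.uses_sensitive_data):
          return "high"      # routes to the governance committee
      if uc.customer_facing or uc.uses_sensitive_data or uc.external_model_or_vendor:
          return "elevated"  # routes to mandatory legal and security review
      return "low"           # proceeds under standing guardrails


  # Example: an internal drafting assistant built on an external vendor's model.
  assistant = UseCaseIntake(
      name="internal drafting assistant",
      customer_facing=False,
      uses_sensitive_data=False,
      makes_automated_decisions=False,
      external_model_or_vendor=True,
  )
  print(classify(assistant))  # -> elevated

Paired with the decision matrix sketched earlier, this is the skeleton of elements 2 and 3: classification produces a tier, and the tier determines who must act before anything ships.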

Slightly Contrarian View: The Biggest Problem Is Not Usually Policy

Many companies still assume their AI governance problem is a policy gap.

In my view, it is more often an operating model gap.

Most organizations are not struggling because they lack a principle that says AI should be fair, secure, transparent, or accountable. They are struggling because the organization has not translated those principles into repeatable management mechanisms. That is a harder problem, but also a more useful one to address.

Because once AI becomes embedded in products, services, internal workflows, and third-party dependencies, governance cannot remain a slide deck supported by good intentions. It has to become management capability.

What Boards and Executives Should Do Next

If I were advising leadership teams right now, I would focus less on producing another top-level statement and more on testing whether the operating model actually exists.

Start with five questions:

  1. Do we know where material AI use cases actually are?
  2. Are decision rights explicit, or are we relying on informal coordination?
  3. What minimum control standard applies across the enterprise?
  4. Where is federated ownership working, and where is it creating inconsistency?
  5. What evidence could management show the board that governance is functioning in practice?

Those questions tend to expose the real maturity level very quickly.

Final Thought

Board oversight of AI is getting real, but the organizations that will benefit from that pressure are not the ones with the most polished principles. They are the ones that build an effective operating model underneath them.

In AI governance, the distance between performative maturity and actual control is still surprisingly wide. In many companies, that distance is where a meaningful share of enterprise AI risk is still hiding.