In 2023, Samsung Electronics made headlines when employees inadvertently uploaded sensitive information to ChatGPT, including proprietary source code and internal meeting notes. Samsung's response was immediate: it banned generative AI tools on company devices while building controlled internal alternatives. The incident became the most widely cited example of what we now call shadow AI.

Shadow AI is artificial intelligence used within an organization without the knowledge, approval, or oversight of IT, legal, compliance, or governance functions. It is the AI equivalent of shadow IT—but with risks that shadow IT never carried.

The Scale of Shadow AI

A 2024 survey by Salesforce found that 28% of employees were using AI tools at work that had not been approved by their employer. Among knowledge workers, that figure was significantly higher. The tools range from well-known consumer platforms to dozens of specialist AI productivity tools covering writing, research, legal review, image generation, and code assistance.

Categories of Risk

Shadow AI presents several distinct categories of risk:

Data Exposure: When employees use consumer-grade AI tools, the data they input—customer names, contract terms, financial projections, personal information—may be used to train the model or stored on third-party servers.

Output Reliance: Employees act on AI-generated outputs that have not been validated, particularly in areas like legal research or financial modeling, where an authoritative-sounding hallucination can cause serious harm.

Accountability Opacity: When something goes wrong with a shadow AI output, there is no audit trail, no approved use case documentation, and no clear owner.

The EU AI Act Dimension

The European Union's AI Act, which entered into force in August 2024, creates legal obligations that make shadow AI a compliance issue. Organizations deploying high-risk AI systems without proper governance face fines of up to €15 million or 3% of global annual turnover, whichever is higher.

Practical Response: Discovery and Channeling

Blanket bans, as Samsung's experience suggests, are temporary measures at best. The more durable solution is a Shadow AI Discovery and Response Programme:

1. Technology Scanning: Identify unauthorized AI tool usage on corporate networks (a minimal sketch follows this list)
2. Policy Clarity: Create an AI Acceptable Use Policy that is visible, understandable, and fair
3. Sanctioned Pathway: Enable employees to request evaluation and approval of tools they want to use
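
To make the first step concrete, below is a minimal sketch of technology scanning, assuming your web proxy or DNS resolver can export a CSV log with timestamp, user, and domain columns. The file name proxy_log.csv, the column names, and the domain watchlist are illustrative assumptions rather than a definitive implementation.

    import csv
    from collections import Counter

    # Illustrative watchlist of consumer AI domains; in practice security
    # teams would maintain and extend this list as new tools appear.
    AI_DOMAINS = {
        "chat.openai.com",
        "chatgpt.com",
        "claude.ai",
        "gemini.google.com",
        "perplexity.ai",
    }

    def scan_proxy_log(path):
        """Count requests to known AI domains per user from a CSV proxy log.

        Assumes columns named timestamp, user, domain -- adjust to match
        your proxy's actual export format.
        """
        hits = Counter()
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                domain = row["domain"].lower()
                # Match the watchlist entry itself or any subdomain of it.
                if any(domain == d or domain.endswith("." + d) for d in AI_DOMAINS):
                    hits[row["user"]] += 1
        return hits

    if __name__ == "__main__":
        for user, count in scan_proxy_log("proxy_log.csv").most_common():
            print(f"{user}: {count} requests to known AI tools")

The output of a scan like this should feed an evaluation queue, not a disciplinary process; discovery is only useful if employees trust what happens next.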

The tone matters enormously. Employees who fear punishment for honest answers will not give them.

Assessment: Where Is Your Organization?

The first step is understanding your current state. Ask these questions:

  • Can you name the AI systems currently in use across your organization?
  • Do you have visibility into generative AI usage by employees?
  • Is there an AI Acceptable Use Policy communicated to staff?
  • Have you assessed your AI systems against the EU AI Act risk tiers?

Organizations that cannot answer these questions confidently have a shadow AI governance gap that requires immediate attention.
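
One way to start closing the gap is a lightweight register of AI systems that records each system's owner, approval status, and EU AI Act risk tier. The Python sketch below is a starting point only: the example systems and their tier assignments are illustrative assumptions, and tier classification is ultimately a legal judgement for counsel and compliance.

    from dataclasses import dataclass

    # The EU AI Act's four risk tiers. Assigning a system to a tier is a
    # legal assessment; the entries below are illustrative assumptions.
    TIERS = ("unacceptable", "high", "limited", "minimal")

    @dataclass
    class AISystem:
        name: str        # e.g. "CV screening assistant"
        owner: str       # accountable business owner
        approved: bool   # passed the sanctioned evaluation pathway?
        risk_tier: str   # one of TIERS

        def __post_init__(self):
            if self.risk_tier not in TIERS:
                raise ValueError(f"unknown risk tier: {self.risk_tier}")

    # A toy register; real entries would come from discovery scans and
    # employee self-reporting rather than being hard-coded.
    register = [
        AISystem("CV screening assistant", "HR", approved=True, risk_tier="high"),
        AISystem("Marketing copy generator", "Marketing", approved=False, risk_tier="limited"),
    ]

    # Surface the governance gap: anything unapproved or in a high tier.
    for s in register:
        if not s.approved or s.risk_tier in ("unacceptable", "high"):
            print(f"ATTENTION: {s.name} (owner: {s.owner}, "
                  f"tier: {s.risk_tier}, approved: {s.approved})")

Even a register this simple answers the first two questions above, and it gives the EU AI Act assessment something concrete to work against.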

Published on LinkedIn, March 25, 2026