Most organizations have deployed AI. Most of their employees don’t understand it well enough to be accountable for it. That gap has been treated as a training problem — something for L&D to address when bandwidth allows. It isn’t. It’s a regulatory exposure, and it’s already enforceable.
Article 4 of the EU AI Act entered into application on 2 February 2025, and with it the obligation for providers and deployers to take measures to ensure the AI literacy of their staff (European Commission). A recent industry survey puts the size of the problem into sharp relief: 78% of organizations lack structured AI training programs. That means most organizations deploying AI systems, including yours more likely than not, are in breach of a live legal obligation right now.
The enforcement clock hasn’t fully started, but the liability exposure already has.
The Obligation Is Already in Force — Most Organizations Don’t Know It
Article 4 of the EU AI Act requires providers and deployers of AI systems to take measures to ensure, to their best extent, a sufficient level of AI literacy of their staff and other persons dealing with the operation and use of AI systems on their behalf, taking into account their technical knowledge, experience, education and training and the context the AI systems are to be used in.
This is not a high-risk-only obligation. It is not limited to AI developers. It applies to any organization that deploys AI systems — which, in 2025, covers virtually every enterprise using AI-assisted hiring tools, customer service automation, fraud detection, or document processing.
National market surveillance authorities will start supervising and enforcing the rules as of 2 August 2026, and EU Member States were due to adopt the relevant national penalty laws by 2 August 2025 (European Commission). That gives organizations a window. But the obligation itself isn’t waiting for enforcement to become real. From 2 August 2025, providers and deployers of AI systems may face civil liability, for instance if the use of AI systems by staff who have not been adequately trained causes harm to consumers, business partners, or other third parties (Latham & Watkins).
The enforcement-first instinct — waiting until regulators fine someone before acting — is the wrong posture here. The civil liability exposure arrived before the regulator’s enforcement tools did.
What “Sufficient AI Literacy” Actually Means
The EU AI Act’s definition of AI literacy is deliberately functional, not technical. AI literacy is defined as the skills, knowledge and understanding required to facilitate the informed deployment of AI systems and to gain awareness about the opportunities and risks of AI and the possible harm it can cause (Mayer Brown).
Article 4 not only requires employees to be trained and qualified, but also states that the specific context of the AI systems used by companies, and the target groups for whom these systems are used, have to be considered (Noerr). A compliance officer reviewing AI-generated credit decisions needs different literacy than a developer building the model. The obligation is role-calibrated, not one-size-fits-all.
That flexibility is also an opportunity. Organizations aren’t required to build AI engineering curricula: companies have significant flexibility in devising the content and format of AI training for their staff, and base-level training is certainly better than doing nothing (Latham & Watkins). But the floor is genuine understanding: staff need to know what systems they’re working with, what those systems are designed to do, where they can fail, and when human judgment must override the output.
That last point connects directly to how compliance teams have been [framing autonomous agent risk on this publication](https://aicomplianceinsider.com/compliance-ai-autonomous-agents/): human-in-the-loop is only a meaningful control if the human in the loop is actually equipped to exercise judgment. Untrained reviewers don’t provide oversight. They provide the appearance of it.
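To make role calibration concrete, here is a minimal sketch of how a compliance team might write down Article 4’s dimensions as a per-role requirements matrix. The roles, system names, and fields are illustrative assumptions on our part, not language from the Act:

```python
# Illustrative sketch only: a per-role AI literacy matrix reflecting
# Article 4's dimensions (systems in scope, purpose and limits,
# when human judgment must override). All names are hypothetical.
from dataclasses import dataclass

@dataclass
class LiteracyRequirement:
    systems_in_scope: list[str]   # AI systems this role operates or reviews
    must_understand: list[str]    # what the system does and where it fails
    override_criteria: list[str]  # when the human must overrule the output

ROLE_MATRIX: dict[str, LiteracyRequirement] = {
    "credit_review_officer": LiteracyRequirement(
        systems_in_scope=["credit-scoring-model"],
        must_understand=["input features", "known bias modes", "confidence limits"],
        override_criteria=["thin-file applicants", "out-of-distribution inputs"],
    ),
    "hr_recruiter": LiteracyRequirement(
        systems_in_scope=["cv-screening-tool"],
        must_understand=["ranking criteria", "protected-attribute risk"],
        override_criteria=["non-standard career paths", "accessibility cases"],
    ),
}
```

The data structure itself is beside the point. What matters is that “sufficient literacy” only becomes checkable once it is written down per role.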
The FCA’s Accountability Angle Compounds the Exposure
For financial services firms, Article 4’s obligation lands alongside a parallel accountability framework that has been tightening independently of the EU AI Act.
A central FCA message is accountability: the AI Update underscores that senior managers and boards remain responsible under the SM&CR, and that responsibility cannot be outsourced to a model or a vendor (A-Team).
The FCA confirmed that its outcomes-focused approach to regulation and supervision applies equally to AI. The FCA is relying on existing regulatory and legislative frameworks — specifically Consumer Duty and the accountability and governance requirements under the Senior Managers and Certification Regime — to mitigate the risks associated with the use of AI (Passle).
The practical implication: a senior manager whose team is using AI systems they don’t adequately understand is personally on the hook for the outcomes those systems produce. The SM&CR doesn’t make exceptions for algorithmic decisions. If a firm can’t demonstrate that the people responsible for an AI-driven process understood that process — and documented that understanding — regulators have everything they need.
This is why the 78% training gap is a financial services problem with an especially sharp edge. It’s not just EU AI Act exposure. It’s SM&CR exposure, too. For firms that also handle EU customer data or operate in EU markets, both regulatory regimes apply simultaneously.
The Enforcement Timeline Is Closer Than It Looks
Compliance teams accustomed to long regulatory runways should look carefully at where the AI Act deadlines actually sit. Prohibited AI practices and AI literacy obligations entered into application on 2 February 2025. The governance rules and the obligations for general-purpose AI (GPAI) models became applicable on 2 August 2025 (European Commission). High-risk system obligations follow in 2026 and 2027.
But the civil liability window for AI literacy violations opened in August 2025, not 2026. In legal actions concerning liability, the failure to take appropriate measures can be considered a breach of a duty of care. Especially in cases involving malfunctions or damage caused by AI systems, courts could examine whether the company has implemented appropriate training and qualification measures (Noerr).
Organizations that have [already worked through their EU AI Act compliance posture](https://aicomplianceinsider.com/compliance-ai-regulator-ready-eu-ai-act/) know that the high-risk provisions command most of the internal attention. That’s rational. But Article 4 deserves parallel treatment — not because the fines are largest there, but because it’s the obligation that applies most broadly, came into force first, and is easiest for a plaintiff’s counsel to demonstrate was ignored.
Training records are either there or they’re not. This is not a gray-zone compliance question.
Three Actions for Compliance Teams Right Now
The 78% training gap isn’t a surprise. It reflects where most organizations are — deploying AI faster than their governance programs have caught up. The question is whether compliance and legal teams can close the gap before it becomes a formal liability.
First, audit current AI training against Article 4’s literacy standard by role. The question isn’t whether your organization has done any training. It’s whether the people actually operating AI systems — including the HR teams using [AI hiring tools](https://aicomplianceinsider.com/ai-hiring-fcra-eightfold-lawsuit/), the reviewers working with automated decision systems, and the senior managers accountable for AI-driven outcomes — can demonstrate contextual understanding of what those systems do and where they fail.
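As a sketch of what that role-by-role audit could look like in practice (the record fields and the role-to-system mapping are our assumptions, not Article 4 text):

```python
# Minimal audit sketch: flag staff whose documented training does not cover
# the AI systems their role actually operates. Field names are hypothetical.
from dataclasses import dataclass
from datetime import date

@dataclass
class TrainingRecord:
    person: str
    role: str
    systems_covered: set[str]   # AI systems the training addressed
    completed_on: date

def audit_gaps(records: list[TrainingRecord],
               role_systems: dict[str, set[str]]) -> list[str]:
    """Return findings for people whose training misses in-scope systems."""
    findings = []
    for rec in records:
        required = role_systems.get(rec.role, set())
        missing = required - rec.systems_covered
        if missing:
            findings.append(f"{rec.person}: no documented training for {sorted(missing)}")
    return findings

# Example: a recruiter trained only on the chatbot, not the CV screener.
records = [TrainingRecord("A. Jones", "hr_recruiter", {"support-chatbot"}, date(2025, 3, 1))]
print(audit_gaps(records, {"hr_recruiter": {"cv-screening-tool", "support-chatbot"}}))
```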
Second, treat training completion as a governance record, not just an HR record. Article 4 compliance is demonstrable only if documentation exists. The same goes for SM&CR accountability under the FCA framework. When enforcement arrives — or when a civil claim does — the audit trail will matter.
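One way, among many, to make completion records demonstrable rather than merely stored is to keep them append-only and tamper-evident. A minimal sketch using a simple hash chain follows; nothing here is a mandated format, and the fields are illustrative:

```python
# Sketch: tamper-evident training-completion log. Each entry hashes the
# previous entry, so an auditor can verify records weren't edited after
# the fact. Format and fields are illustrative, not a regulatory standard.
import hashlib
import json
from datetime import datetime, timezone

def append_entry(log: list[dict], person: str, course: str, role: str) -> None:
    prev_hash = log[-1]["hash"] if log else "0" * 64
    entry = {
        "person": person,
        "course": course,
        "role": role,
        "completed_at": datetime.now(timezone.utc).isoformat(),
        "prev_hash": prev_hash,
    }
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    log.append(entry)

def verify(log: list[dict]) -> bool:
    """Recompute the chain; any edited entry breaks every later hash."""
    prev = "0" * 64
    for e in log:
        body = {k: v for k, v in e.items() if k != "hash"}
        if e["prev_hash"] != prev:
            return False
        if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != e["hash"]:
            return False
        prev = e["hash"]
    return True
```

Any record-keeping system works; what matters is that entries carry timestamps, are attributable to a role and a system, and can be shown not to have been backfilled after an incident.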
Third, pressure-test your human oversight controls. As we’ve covered in our reporting on [AI data handling and privilege risk](https://aicomplianceinsider.com/ai-data-handling-compliance-privilege-risk/), the legal and compliance exposure created by AI systems is often compounded by human reviewers who don’t understand what they’re reviewing. Literacy is what makes oversight real. Without it, the control exists on paper only.
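Here is a sketch of what literacy-gated oversight might look like in routing logic. The certification model and the twelve-month validity window are assumptions for illustration, not requirements from any regulator:

```python
# Sketch: human-in-the-loop gate that only counts review as oversight when
# the reviewer holds a current literacy certification for that system.
# Certification model and 12-month validity are illustrative assumptions.
from datetime import date, timedelta

CERT_VALIDITY = timedelta(days=365)

# reviewer -> {system: certification date}; illustrative data shape
certifications = {
    "m.patel": {"credit-scoring-model": date(2025, 1, 15)},
}

def can_provide_oversight(reviewer: str, system: str, today: date) -> bool:
    certified_on = certifications.get(reviewer, {}).get(system)
    return certified_on is not None and today - certified_on <= CERT_VALIDITY

def route_for_review(reviewer: str, system: str, today: date) -> str:
    if can_provide_oversight(reviewer, system, today):
        return f"route to {reviewer}"
    # Untrained review is the appearance of oversight, not oversight.
    return "escalate: no certified reviewer available"
```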
The regulatory gap here is unusual. The obligation came first. The enforcement came second. Most organizations are sitting between them, unaware of which side they’re on.
AI Compliance Insider covers the regulations, tools, and incidents shaping enterprise AI governance. Subscribe for weekly updates.