The EU AI Act Enforcement Begins: What Companies Need to Do Now
The EU AI Act enforcement phase kicked off in March 2026. After two years of preparation, the regulation now carries real penalties: up to 7% of global annual revenue (or €35 million, whichever is higher) for companies that deploy banned AI practices, and up to 3% (or €15 million) for non-compliance with transparency and risk management requirements. This is not a future concern. If your company sells AI products or services to European customers, you are subject to these rules today.
The challenge for most companies is not whether they need to comply. It is figuring out exactly what “compliance” means for their specific products. The AI Act uses a risk-based classification system that assigns different obligations to different types of AI, and the practical details matter more than the headlines.
What You Need to Know Right Now
- Banned practices are already enforceable. Social scoring, real-time remote biometric identification in publicly accessible spaces (with narrow law-enforcement exceptions), and manipulative AI techniques are prohibited as of March 2026.
- High-risk AI systems must register and comply by August 2026. This includes AI used in hiring, credit scoring, law enforcement, healthcare diagnostics, and critical infrastructure management.
- General-purpose AI models (including LLMs) face transparency requirements. Providers must publish technical documentation, comply with EU copyright law, and disclose training data summaries.
- Fines are substantial. The maximum penalty of 7% of global revenue can reach hundreds of millions of euros for large tech companies.
The Risk Classification System Explained
The AI Act sorts AI systems into four tiers, and your compliance obligations depend on which tier your product falls into. A first-pass classification sketch follows the tier descriptions below.
Unacceptable risk (banned): AI systems that manipulate behavior to cause harm, exploit vulnerabilities of specific groups, or perform social scoring (the ban covers private actors as well as governments). These are prohibited outright.
High risk: AI used in areas like employment screening, educational assessment, critical infrastructure, law enforcement, and border control. These systems must meet strict requirements for data quality, documentation, human oversight, accuracy, and cybersecurity.
Limited risk: AI systems like chatbots and content generators that interact with people. These need transparency measures. Users must know they are interacting with AI, and AI-generated content must be labeled.
Minimal risk: Most AI applications, including spam filters, recommendation engines, and inventory management systems. No additional obligations beyond existing law.
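To make the triage concrete, here is a minimal first-pass classification sketch in Python. The tier names, domain labels, and practice lists are illustrative assumptions of mine, not the Act’s legal categories; a real audit still needs legal review against the Act’s annexes.

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"   # banned outright
    HIGH = "high"                   # strict requirements, registration
    LIMITED = "limited"             # transparency obligations
    MINIMAL = "minimal"             # no additional obligations

# Illustrative labels only; the Act's Annex III categories are more
# detailed, and classification ultimately needs legal review.
BANNED_PRACTICES = {"social_scoring", "behavioral_manipulation",
                    "realtime_public_biometric_id"}
HIGH_RISK_DOMAINS = {"employment", "education", "credit_scoring",
                     "law_enforcement", "border_control",
                     "critical_infrastructure", "healthcare_diagnostics"}

@dataclass
class AIFeature:
    name: str
    domain: str                 # e.g. "employment"
    interacts_with_users: bool  # chatbots, content generators, etc.

def classify(feature: AIFeature) -> RiskTier:
    """First-pass triage of one AI feature against the four tiers."""
    if feature.domain in BANNED_PRACTICES:
        return RiskTier.UNACCEPTABLE
    if feature.domain in HIGH_RISK_DOMAINS:
        return RiskTier.HIGH
    if feature.interacts_with_users:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL
```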
“The hardest part of AI Act compliance is not the technical requirements. It is classifying your systems correctly. Many companies discover they have high-risk AI systems they did not realize fell under the regulation.” — EU compliance attorney at a major tech law firm.
Practical Compliance Checklist for US Companies
If you are a US-based company that serves European customers, here is what to do now:
Immediate Actions (March-April 2026)
- Audit your AI systems. List every AI-powered feature in your products and classify each one against the EU’s risk tiers; the sketch after this list shows how an inventory pass can start.
- Check for banned practices. Verify that no feature uses manipulative techniques, discriminatory biometrics, or social scoring.
- Appoint an EU representative. Providers established outside the EU that place high-risk AI systems on the EU market must designate an authorized representative within the EU.
- Review your training data practices. If you provide general-purpose AI models, prepare a summary of training data sources and your approach to copyright compliance.
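Building on the classify sketch above, the audit itself can start as a simple inventory pass: enumerate features, classify each, and group the results for review. The feature names below are hypothetical placeholders.

```python
# Hypothetical inventory; in practice this would be pulled from product
# and vendor documentation, including embedded third-party AI.
inventory = [
    AIFeature("resume_ranker", "employment", interacts_with_users=False),
    AIFeature("support_chatbot", "customer_support", interacts_with_users=True),
    AIFeature("spam_filter", "email_filtering", interacts_with_users=False),
]

report: dict[RiskTier, list[str]] = {tier: [] for tier in RiskTier}
for feature in inventory:
    report[classify(feature)].append(feature.name)

for tier, names in report.items():
    print(f"{tier.value}: {', '.join(names) or 'none'}")
# Anything in the "unacceptable" bucket must be removed before shipping
# to EU users; "high" entries go on the August registration track.
```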
Before August 2026
- Implement risk management systems for high-risk AI. This includes documented testing procedures, bias monitoring, and incident reporting protocols.
- Set up human oversight mechanisms. High-risk systems must have clear paths for human intervention and override; see the sketch after this list for one way to make that explicit.
- Create transparency documentation. Users of high-risk AI must receive clear information about what the system does, its limitations, and who is responsible for it.
- Register in the EU AI database. High-risk AI systems must be registered before they can be deployed in the EU market.
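One common design for the oversight requirement is to make the override path explicit in the decision record itself, so the model’s output is never silently final. A minimal sketch of that pattern, with field names of my own choosing (the Act mandates oversight, not this schema):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ReviewedDecision:
    """A high-risk decision that always carries a human-override path."""
    subject_id: str
    model_output: str                    # e.g. "reject" from a screening model
    human_override: Optional[str] = None
    reviewer: Optional[str] = None

    @property
    def final(self) -> str:
        # A recorded human decision always takes precedence.
        return self.human_override if self.human_override is not None else self.model_output

# Usage: a reviewer overrides the model, and the override wins.
decision = ReviewedDecision(subject_id="cand-001", model_output="reject")
decision.human_override, decision.reviewer = "advance", "j.doe"
assert decision.final == "advance"
```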
Where Companies Are Getting Tripped Up
Three compliance gaps keep appearing in early enforcement consultations.
AI in hiring tools. Many companies use AI-powered resume screening or interview analysis without realizing these qualify as high-risk under the AI Act. If your applicant tracking system (ATS) vendor uses AI to rank candidates, you are responsible for ensuring that system meets EU requirements.
General-purpose AI model providers. Companies that fine-tune or deploy foundation models (not just the original developers) may have compliance obligations. If you host a fine-tuned version of an open-source LLM that European users access, the regulation may apply to you.
AI-generated content labeling. Deepfakes and AI-generated images, audio, or video must be labeled as artificial. This affects marketing teams, content agencies, and media companies that use AI for content creation.
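As one example of machine-readable labeling, here is a sketch that embeds a disclosure in a PNG’s text metadata using Pillow. The key names are assumptions of mine; provenance standards such as C2PA define richer, signed metadata, and deepfakes may also need visible disclosure, so treat this as one layer, not a complete solution.

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def label_ai_image(src_path: str, dst_path: str, generator: str) -> None:
    """Embed an AI-generation disclosure in PNG text chunks."""
    img = Image.open(src_path)
    meta = PngInfo()
    meta.add_text("ai_generated", "true")   # illustrative key name
    meta.add_text("generator", generator)
    img.save(dst_path, pnginfo=meta)

# Hypothetical usage with placeholder file and model names:
label_ai_image("banner.png", "banner_labeled.png", generator="example-model-v1")
```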
How This Affects the Global AI Market
The EU AI Act is already influencing AI development outside Europe. Companies like Microsoft, Google, and OpenAI are building compliance features into their global products rather than maintaining separate EU versions. This means EU standards are becoming de facto global standards for AI transparency and risk management.
For smaller companies, the compliance burden is real but manageable. The regulation is designed to be proportional: a startup with a single AI feature faces lighter requirements than a company deploying dozens of high-risk systems. But the clock is ticking, and waiting until August to start is a losing strategy.
Start the audit now. Classify your systems. Fix the obvious gaps. The cost of compliance is a fraction of the cost of enforcement.