AI Regulation Tracker: New Laws Passed in Q1 2026
The first quarter of 2026 produced more AI-specific legislation than all of 2024 combined. From the start of EU AI Act enforcement in March to new state-level bills in the US and comprehensive frameworks in Asia, Q1 2026 marks the shift from policy discussion to enforceable law. This tracker summarizes every significant AI regulation passed or enacted in the first three months of 2026.
We cover the EU, United States (federal and state), United Kingdom, China, Japan, South Korea, and Canada. Each entry includes the regulation name, effective date, what it requires, and who it affects.
European Union
EU AI Act — Enforcement Phase (March 2026)
What it does: Bans unacceptable AI practices (social scoring, manipulative AI, real-time remote biometric identification in public spaces, with narrow exceptions). Requires providers of high-risk AI systems to register them and comply with transparency, data quality, and human oversight requirements by August 2026. General-purpose AI model providers must disclose training data summaries and comply with copyright rules.
Who it affects: Any company deploying AI in the EU or serving EU consumers, regardless of where the company is headquartered.
Penalties: Up to €35 million or 7% of global annual turnover, whichever is higher, for deploying banned practices. Up to €15 million or 3% for non-compliance with most other provisions.
United States — Federal
AI Transparency Act (Signed February 2026)
What it does: Requires federal agencies to disclose when AI systems are used in decisions affecting individuals (benefits eligibility, law enforcement, immigration). Agencies must publish annual AI impact assessments and maintain human appeal pathways for AI-influenced decisions.
Who it affects: Federal government agencies and their contractors. Does not directly regulate private sector AI use.
United States — State Level
- California AI Accountability Act (SB-1047 successor, effective January 2026): Requires companies developing frontier AI models to implement safety testing protocols and report dangerous capabilities. Applies to models trained with computing power above a defined threshold.
- Colorado AI Consumer Protection Act (effective March 2026): Requires businesses to disclose AI use in consumer-facing decisions including hiring, insurance, and lending. Consumers can request human review of AI decisions.
- Illinois AI Video Interview Act Extension (effective February 2026): Expands existing rules to cover AI analysis of text-based interviews and written assessments, not just video. Employers must obtain consent and provide alternative evaluation options.
- New York City Local Law 144 Update (effective January 2026): Strengthens the existing automated employment decision tool law with more detailed bias audit requirements and quarterly public reporting.
“The US is regulating AI through a patchwork of state laws and sector-specific federal rules. Companies operating nationally face a complex compliance landscape that resembles privacy regulation before GDPR.” — AI policy researcher.
United Kingdom
AI Safety and Innovation Framework (Published January 2026)
What it does: Establishes a principles-based framework rather than prescriptive rules. Existing regulators (FCA, Ofcom, ICO, CMA) apply AI principles within their sectors. The AI Safety Institute conducts pre-deployment testing of frontier models.
Who it affects: UK-based AI developers and deployers. Sector regulators have independent enforcement authority.
Key difference from EU: The UK approach is lighter-touch and sector-specific rather than horizontal. No single AI act covers all sectors.
China
Comprehensive AI Management Regulations (Effective March 2026)
What it does: Consolidates previous rules on generative AI, deepfakes, and recommendation algorithms into a unified framework. Requires all AI services operating in China to register with the Cyberspace Administration. Mandates content labeling for AI-generated material. Requires domestic storage of training data for services operating in China.
Who it affects: All companies offering AI services to Chinese users, including foreign providers.
Japan
AI Governance Guidelines (Updated February 2026)
What it does: Non-binding guidelines that encourage responsible AI development but do not impose legal requirements. Focuses on voluntary risk assessment, transparency, and privacy protection.
Who it affects: Japanese companies and foreign companies operating in Japan. Compliance is voluntary but expected to become mandatory by 2027.
South Korea
AI Industry Development and Trust Act (Effective January 2026)
What it does: Creates a classification system similar to the EU AI Act (high-risk, limited risk, minimal risk). Requires impact assessments for high-risk AI. Establishes an AI Ethics Committee with advisory authority. Includes significant government investment in AI research alongside regulatory requirements.
Who it affects: AI developers and deployers operating in South Korea.
Canada
Artificial Intelligence and Data Act (AIDA) — Implementation Phase (February 2026)
What it does: Requires “high-impact” AI systems to implement risk mitigation measures, provide explanations for AI decisions, and maintain transparency about how systems work. Establishes the AI and Data Commissioner with enforcement authority.
Who it affects: Companies developing or deploying high-impact AI in Canada.
What This Means for Global Companies
Companies operating internationally now face AI regulations in almost every major market. The practical approach for 2026:
- Map your AI systems to each jurisdiction’s risk classification. A system that is “minimal risk” in the EU may be “high impact” in Canada.
- Default to the strictest standard. Building to EU AI Act requirements usually satisfies other jurisdictions as well. Maintaining separate compliance programs per country is more expensive than building to the highest common standard.
- Budget for compliance. Legal, technical, and documentation costs for AI compliance are real. Estimate 10-15% of your AI development budget for compliance activities in 2026.
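The mapping exercise in the first bullet can be made concrete in code. The sketch below is purely illustrative: the tier names, severity ordering, and per-jurisdiction classifications are assumptions for the example, not legal categories lifted from any single statute. The point is the design choice from the second bullet: record how each system is classified in every jurisdiction where it operates, then build to the strictest applicable tier.

```python
# Hypothetical sketch: track one AI system's risk classification across
# jurisdictions and default to the strictest applicable tier.
# Tier names and ordering are illustrative assumptions, not statutory terms.

TIER_SEVERITY = {
    "minimal": 0,
    "limited": 1,
    "high": 2,
    "prohibited": 3,
}

def strictest_tier(classifications: dict) -> str:
    """Return the most severe tier among per-jurisdiction classifications."""
    return max(classifications.values(), key=lambda tier: TIER_SEVERITY[tier])

# Example: the same resume-screening system classified per jurisdiction.
# The classifications below are assumed for illustration.
resume_screener = {
    "EU": "high",       # employment use cases are high-risk under the AI Act
    "Canada": "high",   # plausibly "high-impact" under AIDA
    "UK": "limited",    # principles-based, left to sector regulators
}

print(strictest_tier(resume_screener))  # -> high
```

A system rated "minimal" everywhere except one market still gets built to that one market's stricter tier, which is exactly why maintaining per-country compliance programs costs more than building once to the highest common standard.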
The patchwork era of AI regulation has arrived. Companies that build compliance into their development process now will hold a significant advantage over those that treat regulation as an afterthought.