The White House released a National AI Policy Framework in March 2026, providing Congress with a blueprint for federal AI governance. The framework proposes a risk-based approach that distinguishes between low-risk AI applications, which would face minimal oversight, and high-risk deployments in sectors such as healthcare, financial services, and criminal justice, which would be subject to mandatory transparency and testing requirements.
Key Elements of the Framework
- Risk-based classification system for AI applications with three tiers: low, medium, and high risk
- Mandatory algorithmic impact assessments for high-risk AI deployed in healthcare, finance, and law enforcement
- Voluntary guidelines for general-purpose AI models including transparency about training data
- Federal preemption provisions to prevent a patchwork of state-level AI laws
- AI Safety Institute empowered to conduct pre-deployment testing of frontier models
Why Federal AI Policy Matters Now
Without federal legislation, AI governance in the United States has defaulted to a state-by-state approach. Colorado’s AI Act takes effect in June 2026. California, New York, and Texas have their own AI bills in various stages. This patchwork creates compliance complexity for companies that operate nationally and risks inconsistent consumer protections depending on where someone lives.
The framework seeks to replace this growing patchwork with a unified federal standard, balancing innovation incentives against mandatory safeguards for high-risk applications, while allowing states to continue enforcing existing consumer protection laws. The preemption provisions have already drawn criticism from state attorneys general, who argue that federal standards should set a floor, not a ceiling, for AI consumer protection.
The AI Safety Institute’s Expanded Role
The framework proposes expanding the AI Safety Institute’s authority to include pre-deployment testing of foundation models above a compute threshold. Companies developing models that exceed this threshold would be required to submit models for safety evaluation before public release. The AI Safety Institute would assess risks including bias, misuse potential, and capability overhangs.
Industry groups have expressed mixed reactions. Large AI companies generally support the framework’s innovation-friendly tone. Smaller companies and civil society organizations argue that the voluntary approach for most AI applications leaves too many potential harms unaddressed.
What Comes Next for Federal AI Legislation
The framework is a policy proposal, not legislation. Congress must draft, debate, and pass bills based on the framework’s recommendations. Given the current political environment, comprehensive AI legislation is unlikely before late 2026 at the earliest. In the meantime, the framework provides guidance that federal agencies can use to shape enforcement priorities under existing regulatory authority.