Trump’s AI Framework: Federal Rules, Less State Power

Trump’s AI Framework Wants One Federal Rulebook for Artificial Intelligence

On March 20, 2026, the Trump administration released a legislative AI framework designed to centralize AI regulation at the federal level. The framework would preempt state AI laws, overriding the recent wave of state efforts to regulate AI development and use. It outlines seven objectives that prioritize innovation and scaling AI, proposes nonbinding expectations for platform accountability, and shifts child safety responsibility toward parents rather than tech companies. The message is clear: Washington wants one rulebook, and it wants that rulebook to be light.

Key Points in the Federal AI Framework

  • Proposes federal preemption of state AI laws to create a uniform national standard
  • Calls for “minimally burdensome” regulation that removes barriers to innovation
  • Places child safety responsibility primarily on parents, not platforms
  • Addresses copyright through “fair use” language that favors AI companies
  • Focuses free speech provisions on preventing government-driven censorship of AI platforms

What Preemption Means for State AI Regulation

Over the past two years, states have moved aggressively on AI regulation. California became the first state to regulate AI companion chatbots. Others passed laws aimed at protecting minors and placing accountability on tech companies. This framework would override those efforts.

The White House statement reads: “This framework can only succeed if it is applied uniformly across the United States. A patchwork of conflicting state laws would undermine American innovation and our ability to lead in the global AI race.”

The framework follows Trump’s December 2025 executive order directing federal agencies to challenge state AI laws. The Commerce Department was given 90 days to compile a list of “onerous” state regulations, though that list has not yet been published.

Child Safety: Parents Over Platforms

The framework states that “parents are best equipped to manage their children’s digital environment.” It calls on Congress to give parents tools such as account controls to protect their children’s privacy and manage their device use. It says AI platforms should implement features to reduce risks of sexual exploitation and self-harm, but it uses qualifiers like “commercially reasonable” and stops short of clear, enforceable requirements.

This approach contrasts with state legislation that placed direct responsibility on platforms. With social media companies currently facing lawsuits over harm to minors, the framework’s parent-first stance is likely to face criticism from child safety advocates.

Copyright and Free Speech Provisions

On copyright, the framework tries to balance protections for creators with AI companies’ need for training data. Its “fair use” language mirrors arguments AI companies have made in court as they face a growing number of copyright lawsuits over training data.

The free speech section focuses on preventing government agencies from “coercing technology providers to ban, compel, or alter content based on partisan or ideological agendas.” Critics point out that this language is vague enough to make it difficult for regulators to coordinate with platforms on misinformation or election interference.

Samir Jain of the Center for Democracy and Technology noted the contradiction: “The framework rightly says the government should not coerce AI companies to ban or alter content based on partisan agendas, yet the administration’s ‘woke AI’ executive order this summer does exactly that.”

What Comes Next

The framework is a legislative proposal, not a law. Congress will need to draft and pass actual legislation based on these guidelines. Given the political divisions around AI regulation, the path from framework to law is uncertain. But the direction is set: less regulation, federal control, and a pro-growth stance that favors the companies building AI over the states trying to regulate it.