State Leaders Oppose Preemption of AI Regulation
A bipartisan coalition of more than 40 state attorneys general has formally called on Congress to reject provisions that would prevent states from enacting their own artificial intelligence (AI) regulations. The effort, backed by attorneys general from both Democratic- and Republican-led states, comes in response to a proposed 10-year federal ban on state and local laws governing AI.
The Stakes: Local Protections vs. Federal Preemption
According to the letter delivered to congressional leaders, the attorneys general argue that states have historically acted as the “laboratories of democracy,” innovating with policies that address emerging harms and gaps in federal oversight. In recent years, states have implemented or proposed a range of laws to:
- Protect against AI-generated explicit material
- Prohibit deceptive use of AI in political ads
- Ensure algorithmic transparency and fairness in areas like hiring, housing, and healthcare
- Address concerns over privacy and consumer safety
The AGs warn that a sweeping federal law preempting state authority would roll back these protections, leave consumers vulnerable, and prevent states from responding nimbly to new risks posed by rapidly developing technologies such as ChatGPT and other advanced AI tools[2].
Bipartisan Pushback and Broader Concerns
Resistance to preemption is not limited to state officials; it has also surfaced in Congress itself. In a recent Senate vote, lawmakers overwhelmingly agreed to strip a proposed 10-year ban on state AI rules from a larger legislative package, with the 99–1 tally underscoring broad, bipartisan opposition to federal overreach on this issue[1].
Senator Amy Klobuchar (D-MN) characterized the preemption draft as "unlawful" and warned that it would "attack states for enacting AI guardrails"[1]. Legislative critics argue that blocking state innovation would tie the hands of governors and lawmakers working to protect their constituents from the unchecked harms of unregulated AI technologies[3].
State AGs: Ready to Collaborate on Responsible AI Policy
The attorneys general acknowledge the importance of balancing innovation and protection, inviting Congress to collaborate on a responsible framework. They argue that any federal approach to AI regulation should:
- Focus on high-risk AI systems
- Require robust transparency, testing, and assessment standards
- Empower state attorneys general to enforce AI regulations alongside federal authorities
They emphasize that, in the absence of adequate federal action, state governments must retain the power to address consumer complaints and evolving risks.
Rapidly Evolving Technology Requires Local Flexibility
With AI technologies such as ChatGPT and other advanced systems seeing widespread adoption, the attorneys general stress that state-level innovation is more critical than ever. They caution that barring state regulation would "directly harm consumers, deprive them of rights currently held in many states, and prevent State AGs from fulfilling their mandate to protect consumers"[2].
As the debate continues, state leaders urge Congress to engage in a transparent, bipartisan process for crafting national AI policy—one that preserves state flexibility to safeguard their residents in the face of both present and unforeseen technological challenges.