
The race to control AI has sparked a federal vs. state showdown

For the first time, Washington is getting close to deciding how to regulate artificial intelligence. And the fight that’s brewing isn’t about the technology; it’s about who gets to do the regulating.

In the absence of a meaningful federal AI standard focused on consumer safety, states have introduced dozens of bills to protect residents against AI-related harms, including California’s AI safety bill SB 53 and Texas’s Responsible AI Governance Act, which prohibits intentional misuse of AI systems.

The tech giants and buzzy startups born out of Silicon Valley argue such laws create an unworkable patchwork that threatens innovation. 

“It’s going to slow us in the race against China,” Josh Vlasto, co-founder of pro-AI PAC Leading the Future, told TechCrunch. 

The industry, along with several of its transplants in the White House, is pushing for a single national standard or no regulation at all. In the trenches of that all-or-nothing battle, new efforts have emerged to prohibit states from enacting their own AI legislation.

House lawmakers are reportedly trying to use the National Defense Authorization Act (NDAA) to block state AI laws. At the same time, a leaked draft of a White House executive order also demonstrates strong support for preempting state efforts to regulate AI. 

A sweeping preemption that would strip states of the right to regulate AI is unpopular in Congress: the Senate voted overwhelmingly earlier this year to remove a similar moratorium from a budget bill. Lawmakers have argued that without a federal standard in place, blocking states would leave consumers exposed to harm and tech companies free to operate without oversight.


To create that national standard, Rep. Ted Lieu (D-CA) and the bipartisan House AI Task Force are preparing a package of federal AI bills that cover a range of consumer protections, including fraud, healthcare, transparency, child safety, and catastrophic risk. A megabill such as this will likely take months, if not years, to become law, underscoring why the current rush to limit state authority has become one of the most contentious fights in AI policy.

The battle lines: NDAA and the EO

Trump displays an executive order on artificial intelligence he signed at the “Winning the AI Race” summit on July 23, 2025. Image Credits: ANDREW CABALLERO-REYNOLDS/AFP/Getty Images

Efforts to block states from regulating AI have ramped up in recent weeks. 

The House has considered tucking language into the NDAA that would prevent states from regulating AI, Majority Leader Steve Scalise (R-LA) told Punchbowl News. Congress was working to finalize a deal on the defense bill before Thanksgiving, Politico reported. A source familiar with the matter told TechCrunch that negotiations have focused on narrowing the scope, potentially preserving state authority over areas like kids’ safety and transparency.

Meanwhile, a leaked White House EO draft reveals the administration’s own potential preemption strategy. The EO, which has reportedly been put on hold, would create an “AI Litigation Task Force” to challenge state AI laws in court, direct agencies to evaluate state laws deemed “onerous,” and push the Federal Communications Commission and Federal Trade Commission toward national standards that override state rules.

Notably, the EO would give David Sacks, Trump’s AI and crypto czar and a co-founder of VC firm Craft Ventures, co-lead authority over creating a uniform legal framework. That would give Sacks direct influence over AI policy, eclipsing the typical role of the White House Office of Science and Technology Policy and its director, Michael Kratsios.

Sacks has publicly advocated for blocking state regulation and keeping federal oversight minimal, favoring industry self-regulation to “maximize growth.”

The patchwork argument

Sacks’s position mirrors the viewpoint of much of the AI industry. Several pro-AI super PACs have emerged in recent months, throwing hundreds of millions of dollars into local and state elections to oppose candidates who support AI regulation.

Leading the Future – backed by Andreessen Horowitz, OpenAI president Greg Brockman, Perplexity, and Palantir co-founder Joe Lonsdale – has raised more than $100 million. This week, Leading the Future launched a $10 million campaign pushing Congress to craft a national AI policy that overrides state laws.

“When you’re trying to drive innovation in the tech sector, you can’t have a situation where all these laws keep popping up from people who don’t necessarily have the technical expertise,” Vlasto told TechCrunch.

He argued that a patchwork of state regulations will “slow us in the race against China.” 

Nathan Leamer, executive director of Build American AI, the PAC’s advocacy arm, confirmed the group supports preemption even without AI-specific federal consumer protections in place. Leamer argued that existing laws, like those addressing fraud or product liability, are sufficient to handle AI harms. Where state laws often seek to prevent problems before they arise, Leamer favors a more reactive approach: let companies move fast and address problems in court later.

No preemption without representation

Alex Bores speaking at an event in Washington, D.C., on November 17, 2025. Image Credits: TechCrunch

Alex Bores, a New York Assembly member running for Congress, is one of Leading the Future’s first targets. He sponsored the RAISE Act, which requires large AI labs to have safety plans to prevent critical harms.

“I believe in the power of AI, and that is why it is so important to have reasonable regulations,” Bores told TechCrunch. “Ultimately, the AI that’s going to win in the marketplace is going to be trustworthy AI, and often the marketplace undervalues or puts poor short-term incentives on investing in safety.”

Bores supports a national AI policy, but argues states can move faster to address emerging risks. 

And it’s true that states move faster.

As of November 2025, 38 states had adopted more than 100 AI-related laws this year, mainly targeting deepfakes, transparency and disclosure, and government use of AI. (A recent study found that 69% of those laws impose no requirements on AI developers at all.)

Activity in Congress underscores the point. Hundreds of AI bills have been introduced, but few have passed. Since 2015, Lieu has introduced 67 bills that were referred to the House Science Committee; only one became law.

More than 200 lawmakers signed an open letter opposing preemption in the NDAA, arguing that “states serve as laboratories of democracy” and must “retain the flexibility to confront new digital challenges as they arise.” Nearly 40 state attorneys general sent their own open letter opposing a ban on state AI regulation.

Cybersecurity expert Bruce Schneier and data scientist Nathan E. Sanders – authors of Rewiring Democracy: How AI Will Transform Our Politics, Government, and Citizenship – argue the patchwork complaint is overblown. 

AI companies already comply with tougher EU regulations, they note, and most industries find a way to operate under varying state laws. The real motive, they say, is avoiding accountability.   

What could a federal standard look like?

Lieu is drafting a megabill of more than 200 pages that he hopes to introduce in December. It covers a range of issues, including fraud penalties, deepfake protections, whistleblower protections, compute resources for academia, and mandatory testing and disclosure for large language model companies.

That last provision would require AI labs to test their models and publish the results, something most do voluntarily now. Lieu hasn’t yet introduced the bill, but he said it doesn’t direct any federal agency to review AI models directly. That differs from a similar bill introduced by Sens. Josh Hawley (R-MO) and Richard Blumenthal (D-CT), which would require a government-run evaluation program for advanced AI systems before they are deployed.

Lieu acknowledged his bill wouldn’t be as strict, but he said it had a better chance of making it into law.

“My goal is to get something into law this term,” Lieu said, noting that House Majority Leader Scalise is openly hostile to AI regulation. “I’m not writing a bill that I’d have if I were king. I’m trying to write a bill that could pass a Republican-controlled House, a Republican-controlled Senate, and a Republican-controlled White House.”
