GOP lawmakers advance House proposal to preempt state AI laws until 2035.
House Republicans have embedded a controversial provision in their marquee tax legislation that would prevent states and local governments from regulating artificial intelligence for the next 10 years. The move has triggered immediate resistance from state officials and raised questions about the measure’s viability in the Senate.
The AI provision appears in President Donald Trump’s comprehensive economic package, dubbed the “big, beautiful” bill. The legislation aims to extend his 2017 tax reductions while boosting military and border security funding. These increases would be offset by cuts to Medicaid, food assistance programs, and clean energy incentives. Republican leadership intends to use budget reconciliation procedures to advance the bill through the Senate with a simple majority vote.
The AI moratorium explained
Hidden within extensive amendments from the House Energy and Commerce Committee, the AI clause states: “No state or political subdivision may enforce any law or regulation regulating artificial intelligence models, artificial intelligence systems, or automated decision systems.”
The proposal faces substantial opposition from state legal authorities. A coalition of 40 bipartisan state attorneys general has urged Congress to abandon the moratorium, arguing it would deprive states of essential tools to shield consumers from potential AI risks and algorithmic harms.
States fight back: Current protections at risk
“I firmly reject any attempt to prevent states from creating and implementing sensible regulation,” California Attorney General Rob Bonta stated. “States must retain the authority to protect residents by responding to rapidly evolving AI technology and machine learning advancements.”
California has already enacted several digital protection measures. These include prohibitions on AI-generated deepfakes in political advertising, bans on non-consensual explicit imagery, and requirements for healthcare providers to inform patients when artificial intelligence, rather than human judgment, influences medical decisions.
Debate over federal vs. state governance
Defending the regulatory pause, Rep. Jay Obernolte, R-Calif., contended that diverse state regulations would create impossible compliance challenges for companies operating nationwide.
“Any organization functioning across all states would find it practically impossible to comply with such a patchwork of technology governance,” he explained.
The proposal has received mixed reactions in the Senate. Members from both parties acknowledge the need for federal leadership on AI policy but question this specific approach. “I’m uncertain whether it satisfies the Byrd Rule requirements,” noted Sen. John Cornyn, R-Texas, referencing the reconciliation procedure requirement that provisions primarily address budgetary matters.
Sen. Bernie Moreno, R-Ohio, expressed support for a national regulatory framework while doubting the moratorium’s procedural viability under reconciliation. “AI technology doesn’t recognize state boundaries, making federal oversight of interstate commerce essential—it’s in our Constitution,” he said.
Tech industry pushes for unified, light-touch approach
Technology corporations have actively lobbied for minimal federal oversight. During a Senate panel appearance, OpenAI CEO Sam Altman warned that inconsistent state regulations “would create significant operational burdens and substantially impair our development capabilities.”
“A single federal framework with light-touch oversight that allows technological innovation to proceed at the pace this moment demands seems appropriate,” Altman added.
At that same hearing, Sen. Ted Cruz, R-Texas, proposed a decade-long “learning period” for state AI regulation. When asked about supporting such a pause or federal preemption to create consistent conditions for AI developers, Altman responded cautiously: “While I’m not entirely clear what a 10-year learning period entails, establishing one federal approach focused on minimal intervention and a level competitive environment sounds beneficial.”
Microsoft President Brad Smith reinforced support for federal leadership in digital governance, drawing parallels to early internet development. “While many details require refinement, empowering federal authorities to provide direction would foster industry growth and technological advancement,” he stated.
Critics call the moratorium “federal overreach”
California state Sen. Scott Wiener sharply criticized the proposal as “truly appalling,” lamenting that Congress appears “unable to implement meaningful AI safeguards to protect the public while simultaneously blocking states from taking protective action.” His comments highlight growing concerns about algorithmic accountability and data protection.
South Carolina Attorney General Alan Wilson, a Republican, characterized the measure as federal overreach into state sovereignty. “Congress seeks to restrict our authority while imposing a one-size-fits-all mandate from Washington without clear direction. This represents governmental overreach, not leadership,” he declared.
Current landscape and legislative outlook
Currently, half of all U.S. states have implemented laws addressing AI deepfakes in political contexts, according to tracking by Public Citizen, reflecting widespread concerns about technological interference in electoral processes and digital rights.
One exception within the reconciliation package is a bipartisan measure expected to become law next week. This legislation establishes stricter penalties for non-consensual “revenge porn,” regardless of whether the content is authentic or AI-generated, demonstrating some areas of cross-party consensus on tech regulation.
With Democrats unanimously opposed and Republican unity uncertain, proponents face significant challenges in the closely divided Senate. The Byrd Rule and other procedural obstacles could ultimately derail the AI regulatory moratorium before it reaches a final vote.