By a Senior Technology & Policy Correspondent
A New Chapter in America’s AI Power Struggle
On a winter afternoon inside the Oval Office, President Donald Trump took a decisive step that could reshape the future of artificial intelligence regulation in the United States. Surrounded by senior advisers, the president signed an executive order designed to prevent individual US states from enforcing their own artificial intelligence laws.
The message from the White House was clear: when it comes to AI, Washington wants to be the final authority.
“We want one central source of approval,” President Trump told reporters after signing the order, signaling a shift toward federal control over a technology that is evolving faster than lawmakers can keep up.
The decision instantly turned a long-running debate into a national showdown—one that pits innovation and global competition against consumer protection and states’ rights.
Why the White House Is Cracking Down on State AI Rules
According to David Sacks, the White House’s chief adviser on artificial intelligence, the administration believes state-level AI laws have become too restrictive and fragmented.
He described some state regulations as “onerous,” arguing they could stifle innovation and place unnecessary burdens on companies racing to develop next-generation AI tools. Sacks emphasized that the federal government would still support rules focused on children’s safety and consumer protection, but only within a unified national framework.
Behind the scenes, the move reflects mounting pressure from major technology firms that have warned against a regulatory “patchwork” across the country.
Big Tech’s Long-Standing Complaint: Too Many Rules, Too Many Risks
The United States currently has no comprehensive federal AI law, but the states have been busy filling the gap.
According to the White House, more than 1,000 AI-related bills have been introduced across the country. In 2025 alone, 38 states passed nearly 100 new AI regulations, covering everything from chatbots to copyright and robotics.
For companies investing billions into AI development, complying with dozens of different legal standards is costly and risky.
Executives argue that inconsistent rules could slow progress and weaken America’s position in its technological rivalry with China, where AI development is heavily state-driven and centrally regulated.
What States Have Been Regulating So Far
The laws now facing federal pushback vary widely in scope and intent:
- California requires platforms to clearly inform users when they are interacting with AI chatbots, a measure aimed at protecting children and teenagers. The state has also mandated that large AI developers outline how they plan to reduce catastrophic risks from powerful models.
- North Dakota has outlawed the use of AI-powered robots for stalking or harassment.
- Arkansas has introduced protections to prevent AI-generated content from violating intellectual property and copyright laws.
- Oregon has barred AI systems from using licensed medical titles, such as “registered nurse,” to prevent public confusion.
Supporters of these laws argue that they exist because Congress has failed to act.
Critics Warn of a Regulatory Vacuum
Opposition to the executive order was swift and vocal.
Advocacy groups warn that blocking state action without replacing it with strong federal safeguards could leave consumers exposed.
“Stripping states of their ability to enact AI protections undermines their responsibility to safeguard residents,” said Julie Scelfo of Mothers Against Media Addiction.
California Governor Gavin Newsom went further, accusing the president of prioritizing tech industry interests over public safety.
In a sharply worded statement, Newsom said the executive order was designed to benefit powerful allies while weakening protections against “unregulated AI technology.”
Industry Reaction: Relief, With Conditions
Major AI companies, including OpenAI, Google, Meta, and Anthropic, did not immediately comment. Tech lobbying groups, however, welcomed the move.
NetChoice, which represents major digital platforms, praised the administration’s push toward national standards.
“A clear federal rulebook is essential for innovation,” said Patrick Hedger, the group’s director of policy.
Legal experts largely agree that a single national framework would be preferable, but only if it is well designed.
Michael Goodyear, an associate professor at New York Law School, noted that while companies are justified in fearing conflicting state laws, the success of this approach depends on what comes next.
“One federal law is better than dozens of state laws,” he said. “But that assumes the federal law is actually strong, clear, and enforceable.”
What Comes Next for AI Regulation in the US?
President Trump’s executive order does not immediately erase state AI laws, but it gives federal agencies new authority to challenge and block their enforcement.
The larger question remains unanswered: Will Congress step in with comprehensive AI legislation, or will this order simply pause progress?
For now, the United States stands at a crossroads, balancing innovation, global competition, and public safety at a time when artificial intelligence is no longer experimental but deeply embedded in daily life.
Final Thought
This executive order may streamline AI regulation, but it also raises fundamental questions about accountability, transparency, and who gets to decide how powerful technologies shape society.
As AI continues to accelerate, the absence of a strong federal framework could prove just as risky as the patchwork the White House is trying to dismantle.
