Trump Pushes for Unified AI Regulations Amid State Initiatives

Mar 29, 2026, 2:19 AM

In recent months, President Trump has expressed a desire for Congress to establish a unified framework for regulating artificial intelligence (AI), particularly as states have begun enacting their own laws in the absence of federal action. This push comes amid concerns that varying state regulations could hinder innovation and create confusion for tech companies operating across state lines.
State lawmakers, including some from Trump's own party, are increasingly frustrated by the lack of federal direction. Many states have taken it upon themselves to create regulations aimed at ensuring child safety, increasing transparency in technology, and providing whistleblower protections. For instance, the SAFECHAT Act in Pennsylvania requires AI companies to implement safeguards to prevent harmful content from their chatbots.
Despite these efforts, the Trump administration has actively pushed back against state legislation. Michael Kratsios, the head of the White House Office of Science and Technology Policy, emphasized that a cohesive national framework is essential for fostering an environment conducive to innovation. He stated, “We want to create an environment where innovators have certainty about the way that they can develop their products, and it's something only Congress can provide.”
The administration's recent regulatory framework outlines principles for AI governance, focusing on protecting children and consumers while addressing the rising costs associated with data centers. However, state lawmakers have voiced concerns about the lack of detail in these proposals. Riki Parikh, policy director at the Alliance for Secure AI, remarked that while a federal standard is preferable to a fragmented state approach, the administration's current framework falls short in holding tech companies accountable and addressing job displacement issues.
Utah State Representative Doug Fiefia, who attempted to introduce a bill for greater transparency in AI practices, described the resistance he faced from the Trump administration. He noted that a memo from the White House indicated opposition to his proposed legislation, which ultimately did not proceed to a vote. Fiefia highlighted the challenges posed by congressional gridlock, asserting that states must step in to protect their citizens, especially regarding child safety.
Other Republican lawmakers have echoed this sentiment, arguing that state governments can react more swiftly to emerging issues than the federal government. Pennsylvania State Senator Tracy Pennycuick acknowledged, “I think states are the first ones to see when there's a problem and they have the ability to pivot and act quickly.” Texas State Senator Angela Paxton agreed, saying state laws are needed while comprehensive federal legislation remains pending.
The mixed reactions to the White House's framework extend to the public as well: a significant number of Americans believe the Trump administration is too closely aligned with Big Tech. Polling indicates that even among Republicans, there is strong support for regulating AI technologies.
On Capitol Hill, while some Republican senators have shown support for Trump's framework, concrete legislative movement remains elusive. Senator Marsha Blackburn has stated her intention to expand upon the administration's proposals with her own TRUMP AMERICA AI Act, aiming to create laws that protect Americans while fostering AI innovation. The White House maintains that productive discussions with Congress are ongoing, but the path forward remains uncertain as states continue to take the initiative in regulating AI technologies.
In conclusion, as the debate over AI regulation unfolds, the tensions between federal aspirations and state actions highlight the complexities of governance in an era of rapid technological advancement. The outcome of this legislative struggle could significantly shape the future landscape of AI in the United States.

Related articles

Pussy Riot Protests Ubiquiti Over Alleged War Crimes Support

Pussy Riot staged a protest at Ubiquiti's Manhattan offices, accusing the tech company of facilitating Russian war crimes in Ukraine. The group's demands include compliance with US sanctions and acknowledgment of Ubiquiti's role in the conflict.

Jury Verdicts Against Meta and Google Spark Legal Battle Over Tech Liability

A recent jury in Los Angeles found Meta and Google liable for the mental health issues of a young woman due to their social media platforms' addictive designs. This landmark ruling has significant implications for the tech industry, potentially reshaping legal accountability and prompting a wave of similar lawsuits.

Trump Appoints David Sacks as Co-Chair of Tech Advisory Council

David Sacks has been appointed co-chair of the President's Council of Advisors on Science and Technology (PCAST), expanding his role in shaping US technology policy. The council aims to enhance American leadership in science and technology, focusing on artificial intelligence and cryptocurrency.

Custody Battle Highlights Dangers of AI in Legal Practice

A custody dispute over a dog named Kyra has raised concerns about the reliability of AI in legal contexts. The case exemplifies how lawyers can inadvertently rely on fabricated citations generated by AI, resulting in significant professional repercussions and eroding trust in the judicial system.

Landmark Verdicts Against Meta and Google Challenge Tech Liability Shield

Recent jury verdicts in California and New Mexico have found Meta and Google liable for harm caused to young users through their social media platforms. These rulings could reshape legal protections for tech companies, particularly regarding their responsibility for user addiction and mental health impacts.