This Week in Tech: AI Moratoriums and Support for Small Innovators

Mar 21, 2026, 2:40 AM
In the realm of technology legislation, several significant bills were introduced this week, primarily focused on the evolving landscape of artificial intelligence (AI) regulation. Lawmakers are responding both to a proposed federal moratorium on state AI laws and to the need to support small AI businesses.
On one front, a coalition of House Democrats has proposed legislation to counteract an AI moratorium backed by Republican lawmakers. This initiative follows the recent release of President Donald Trump's National AI Policy Framework, which calls for federal preemption of state-level AI laws. The proposed bill, known as the Guaranteeing and Upholding Americans' Right to Decide Responsible AI Laws and Standards Act, would prohibit federal interference in state-level AI regulation, preserving states' autonomy to enact laws that protect their residents from potential AI harms.
Senator Brian Schatz, who filed a companion bill in the Senate, emphasized the importance of local governance, stating, "Preventing states from enacting common-sense regulation that protects people from the very real harms of AI is dangerous." He argues that Congress must ensure proper regulations are in place while allowing states to act in the public interest during this transitional phase of AI technology.
In a related legislative effort, Representatives Suhas Subramanyam and Jay Obernolte introduced the Small AI Innovators Empowerment Act. This bipartisan bill aims to support smaller AI businesses by directing the Department of Commerce to study the challenges these companies face, including access to funding and regulatory hurdles. Obernolte remarked, "America's leadership in artificial intelligence will not only depend on large technology companies, but also on the next generation of innovators." The Act is part of a larger trend to democratize the AI industry and provide smaller entities with the resources they need to thrive in a competitive marketplace.
In addition to these measures, there are ongoing discussions regarding the need for clear guidelines on how AI can be utilized within federal operations, particularly in the military. Senator Elissa Slotkin has introduced the AI Guardrails Act to establish clear boundaries for AI applications in Department of Defense operations, explicitly prohibiting its use in lethal autonomous weapons and domestic surveillance.
Another notable development is the Artificial Intelligence Ready Data Act, introduced by Senators Ted Budd and Andy Kim, which seeks to democratize access to government datasets. This legislation aims to enhance the training of US AI models, ensuring that researchers and developers can utilize federal data safely and efficiently.
As the legislative landscape shifts, there is also a growing focus on online privacy and the ethical implications of AI. Representative Zoe Lofgren has reintroduced the Online Privacy Act, which aims to establish a federal framework for the retention and use of personal data. This proposal seeks to empower individuals by allowing them to access and delete their data while ensuring companies comply with privacy regulations.
While the momentum for these AI-related bills continues to build, there remains a significant tension between state and federal authorities regarding the regulation of AI. The Senate recently voted to remove a proposed moratorium on state enforcement of AI laws, marking a victory for state autonomy. Advocacy groups have hailed this decision, emphasizing that states must maintain the ability to protect their citizens from potential AI-related harms.
Despite the push for a national framework, concerns persist about the implications of a fragmented regulatory landscape. Some experts argue that a state-by-state patchwork of AI rules could create inconsistencies that hinder innovation and market growth.
As legislators navigate the complexities of AI regulation, the balance between fostering innovation and ensuring public safety remains a critical focus. The ongoing discussions and legislative efforts underscore the urgency of developing a cohesive framework that addresses the challenges and opportunities presented by artificial intelligence in modern society.
In summary, this week's legislative developments reflect a concerted effort by lawmakers to grapple with the implications of AI technology. As both state and federal policymakers work to establish guidelines, the future of AI regulation will undoubtedly shape the landscape of innovation and public safety in the years to come.
