AI Attack Ads Transform Massachusetts Political Landscape

Mar 24, 2026, 2:33 AM


Artificial intelligence is rapidly becoming a pivotal tool in political campaigns across Massachusetts, with recent developments raising alarms about the potential for voter deception and the integrity of the electoral process.
One notable instance occurred on March 11, when Republican state representative Marc Lombardo shared a striking AI-generated attack ad targeting his opponent, Daniel Darris-O'Connor. Styled as a vintage newspaper article, the ad depicted Darris-O'Connor alongside New York City Mayor Zohran Mamdani, suggesting a link to Mamdani's democratic socialist policies.
Similarly, in January, gubernatorial candidate Brian Shortsleeve posted a fake radio ad featuring an AI-generated imitation of Governor Maura Healey's voice. In the ad, the synthetic Healey voice claims to be proud of her record while making exaggerated and misleading statements about employment and economic conditions in Massachusetts. Shortsleeve's campaign confirmed it had used AI to create the ad, characterizing it as a parody intended to highlight what the campaign described as Healey's failures.
AI in political advertising is more than a passing trend and could become a staple of campaign strategy, a prospect that worries some lawmakers. Senator Barry Finegold, co-chair of the Legislature's emerging technologies committee, expressed his apprehension, stating, "If we don't stop it, I think this is going to be a part of campaigns and something, I believe, is out of bounds." Massachusetts law currently prohibits computer-generated images only when they are used for harassment; no comprehensive regulations cover AI in political advertising.
In response to the growing prevalence of AI-generated content, the Massachusetts State House is weighing new legislation to regulate its use. In February 2026, the House passed two bills: H.5093, which would prohibit the distribution of materially deceptive AI-generated media within 90 days of an election, and H.5094, which would require clear disclosures when AI is used in political ads. Both bills are now under review by the Senate's Committee on Ways and Means.
The Federal Election Commission (FEC) is also addressing the issue at a national level, having published a notice seeking public comment on regulating AI-generated advertisements. The FEC's inquiry follows a petition that calls for clarifying that deceptive AI-generated campaign ads violate existing election laws. However, there is considerable debate regarding the Commission's authority to enforce such regulations.
The emergence of AI in political advertising is part of a broader trend, with 26 states already implementing laws to govern the use of deepfakes in campaigns. These regulations vary, with some states opting for outright prohibitions on the publication of deepfakes close to election dates, while others require disclosures to inform voters about the nature of the content they are consuming.
Massachusetts lawmakers must now decide how to navigate this complex landscape. As candidates increasingly turn to AI for campaign messaging, the stakes for voter trust, election integrity, and the democratic process continue to rise.
As the situation evolves, public opinion will likely play a crucial role in shaping the future of AI in political campaigns. Voters may soon face the challenge of distinguishing genuine content from AI-generated misinformation, a task that could redefine political advertising in Massachusetts and beyond.
In conclusion, the rise of AI-generated attack ads in Massachusetts political campaigns highlights a pressing need for regulatory frameworks to protect voters and ensure fair electoral practices. The ongoing debates at both state and federal levels will determine how effectively these emerging challenges are addressed in the coming years.
