Maine Moves to Regulate AI-Generated Political Advertisements

Mar 12, 2026, 2:35 AM


Maine lawmakers have taken a significant step toward regulating the use of artificial intelligence (AI) in political advertising by advancing a proposal that would mandate disclosure for any content significantly altered by AI. The bill, known as LD 517, passed the Maine House of Representatives by a vote of 73-65, largely along party lines with Democratic support.
Supporters of the bill argue that the requirement for transparency is crucial as AI-generated "deepfakes" become increasingly sophisticated and accessible. These high-quality manipulations can mislead voters, potentially affecting electoral outcomes. Bill sponsor Rep. Amy Kuhn (D-Falmouth) emphasized that the measure is designed to promote informed decision-making among voters. "This bill is very narrow," she stated, acknowledging that while it won't cover all misleading content, it aims to protect voters from deception.
Under the proposed regulations, any political campaign or political action committee that uses altered media would have to include a disclosure label. Violators would face investigations by the Commission on Governmental Ethics and Election Practices, which could impose civil penalties amounting to 500% of the media's cost. The bill defines "synthetic media" as recordings that depict candidates doing or saying things they did not actually do or say, in a way that could mislead a reasonable person.
Critics of the bill, including Rep. Jennifer Poirier (R-Skowhegan), question the necessity of government regulation in this area, arguing that it could infringe on free speech. Poirier expressed concern about the practical implications of the law, noting that distinguishing manipulated media from legitimate political messaging can be challenging due to the nature of political advertising. "Our responsibility is to protect the marketplace of ideas, not to police it," she remarked.
The Maine proposal aligns with similar regulations elsewhere; at least 26 states have enacted laws addressing the use of political deepfakes. Most of these laws require disclosures similar to those in Maine's LD 517, while some states, such as Minnesota and Texas, have gone further and banned the publication of deepfakes close to election dates.
In addition to LD 517, Maine lawmakers are considering other AI-related measures, including bills aimed at protecting children from potential harms associated with AI interactions. Gov. Janet Mills has acknowledged the promise of AI technology but emphasized the need for responsible and ethical use. She has proposed a $6.7 million budget to implement recommendations from a state task force on AI, including a statewide AI literacy campaign and job training programs to keep Maine's workforce competitive in an AI-driven economy.
The Federal Communications Commission (FCC) has also proposed regulations requiring political advertisers to disclose the use of AI-generated content in broadcast media. This initiative reflects a broader movement among lawmakers and experts advocating for transparency in political communications as generative AI technology evolves rapidly. FCC Chair Jessica Rosenworcel has highlighted the importance of informing consumers about the technology used in political advertisements, further supporting the need for legislative measures at both state and federal levels.
The ongoing discussions in Maine underscore the complexities of regulating AI in political contexts. As AI technology continues to develop, lawmakers must navigate the balance between ensuring transparency and protecting free speech. The outcome of LD 517 and similar legislative efforts may set a precedent for how other states approach the regulation of AI in political advertising in the future.
With the bill now moving to the upper chamber, Maine's approach could significantly impact the landscape of political advertising, potentially influencing how voters interact with AI-generated content in the lead-up to elections.
