California passes laws to combat deepfakes in political advertising

California Governor Gavin Newsom this week signed three laws limiting the role of artificial intelligence, especially deepfake audio and video, in election campaigns.

One of the laws, which took effect immediately, prohibits the distribution of “substantially deceptive audio and video about a candidate” from 120 days before an election until 60 days after it.

Another law requires election-related ads that use AI-generated content to contain information alerting viewers or listeners to that fact.

A third law requires major online platforms to take steps to block “substantially deceptive content related to California elections” and remove such material within 72 hours of being notified of its presence.

“Ensuring election integrity is essential to democracy, and it is critical that we do everything we can to ensure that artificial intelligence is not used to undermine public trust through misinformation, especially in today’s tense political climate,” Newsom said in a statement.

“These measures will help combat the malicious use of deepfakes in political ads and other content, which is one of several areas where the state is taking the initiative to support transparent and trustworthy AI,” he added.

California is not the only state to have passed laws regulating the use of deepfake content in political ads. However, extending the ban to 60 days after an election is unique and could be adopted by other states.

Criticism from tech titans

Social media and free speech advocates are expected to challenge the new laws, insisting that they violate the freedom of speech guaranteed by the First Amendment to the U.S. Constitution.

One of the main opponents is billionaire Elon Musk, owner of social media platform X, who has been actively using X to express support for Republican presidential candidate Donald Trump.

In July, Musk published a video that used deepfake technology to parody the voice of Vice President Kamala Harris. In the video, the cloned voice describes Harris as a “puppet of the deep state” and “an exceptional example of political correctness hiring.”

After Newsom signed the new laws on Tuesday, Musk posted the video again, writing, “The Governor of California just made this parody video illegal in violation of the U.S. Constitution. It would be a shame if it goes viral.”

Regulation at the federal level

Most legislative efforts to regulate AI in politics have so far been at the state level. This week, however, a bipartisan group of lawmakers in Congress proposed giving the Federal Election Commission the authority to oversee the use of AI in political campaigns.

Specifically, it would allow the agency to ban the use of deepfakes that give the impression that a rival candidate said or did something he or she did not actually say or do.

Speaking at an event organized by Politico this week, U.S. Deputy Attorney General Lisa Monaco said there is a clear need for rules to emerge governing the use of AI in political campaigns and expressed confidence that Congress will act.

She said that while AI promises many benefits, it also “lowers the barrier to entry for all sorts of malicious actors.”

“I’m confident that the legislation will be amended over time,” she added.

Deepfakes’ minimal role in election campaigns

Before the 2024 election campaign, there were concerns that the uncontrolled use of deepfake technology would produce a flood of misleading content.

According to Katie Sanders, editor-in-chief of the website PolitiFact, that didn’t happen. “It didn’t turn out the way many people feared,” she told Voice of America.

“I’m not entirely sure it’s good news, because there’s still a lot of misinformation in political advertising. It’s just not generated by artificial intelligence. It relies on the same old techniques – exaggerating the opponent’s position or taking fragments out of context,” she added.
