US legislators are calling on social media platforms X, Facebook and Instagram to explain why they are not imposing labels on deceptive AI-generated political advertisements that could fake a candidate’s voice or actions.
Two Democratic members of Congress have sent a letter to Meta chief executive Mark Zuckerberg and X boss Linda Yaccarino expressing “serious concerns” about the emergence of AI-generated political ads on their platforms and asking them to explain any rules they are crafting to curb the harms to free and fair elections.
“They are two of the largest platforms and voters deserve to know what guardrails are being put in place,” said senator Amy Klobuchar in an interview with The Associated Press.
“We are simply asking them, ‘Can’t you do this? Why aren’t you doing this?’ It’s clearly technologically possible.”
The letter to the executives from Ms Klobuchar and representative Yvette Clarke warns: “With the 2024 elections quickly approaching, a lack of transparency about this type of content in political ads could lead to a dangerous deluge of election-related misinformation and disinformation across your platforms – where voters often turn to learn about candidates and issues.”
X, formerly known as Twitter, and Meta, the parent company of Facebook and Instagram, did not immediately respond to requests for comment. Ms Clarke and Ms Klobuchar asked the executives to respond to their questions by October 27.
The pressure on the social media companies comes as both legislators help to lead a charge to regulate AI-generated political ads. A House of Representatives bill introduced by Ms Clarke earlier this year would amend a federal election law to require labels when election advertisements contain AI-generated images or video.
“I think that folks have a First Amendment right to put whatever content on social media platforms that they’re moved to place there,” Ms Clarke said in an interview. “All I’m saying is that you have to make sure that you put a disclaimer and make sure that the American people are aware that it’s fabricated.”
For Ms Klobuchar, who is sponsoring companion legislation in the Senate that she aims to get passed before the end of the year, “that’s like the bare minimum” of what is needed. In the meantime, both legislators hope that the major platforms will take the lead on their own, especially given the disarray that has left the House without an elected speaker.
Google has already said that starting in mid-November it will require a clear disclaimer on any AI-generated election ads that alter people or events on YouTube and other Google products.
The policy applies in the US and in other countries where the company verifies election ads. Facebook and Instagram parent Meta does not have a rule specific to AI-generated political ads but has a policy restricting “faked, manipulated or transformed” audio and imagery used for misinformation.
A more recent bipartisan Senate bill, co-sponsored by Ms Klobuchar, Republican senator Josh Hawley and others, would go further, banning “materially deceptive” deepfakes relating to federal candidates, with exceptions for parody and satire.
AI-generated ads are already part of the 2024 election, including one aired by the Republican National Committee in April, meant to show the future of the US if President Joe Biden is re-elected.
It employed fake but realistic photos showing boarded-up storefronts, armoured military patrols in the streets, and waves of immigrants creating panic.
Ms Klobuchar said such an ad would be likely to be banned under the rules proposed in the Senate bill. So would a fake image of Donald Trump hugging infectious disease expert Dr Anthony Fauci that was shown in an attack ad from Trump’s Republican primary opponent and Florida governor Ron DeSantis.