Political ads on Google will soon need to clearly disclose if they contain AI-generated content, such as deepfakes of a presidential candidate uttering something they never said in real life.
The company plans on instituting the new requirements in mid-November, according to a Google support document published on Wednesday.
It’ll apply to image-, video-, and audio-based ads. If they contain “synthetic content that inauthentically depicts real or realistic-looking people or events,” then Google says the ad will need to prominently carry a disclosure mentioning the AI-generated elements.
“Given the growing prevalence of tools that produce synthetic content, we’re expanding our policies a step further to require advertisers to disclose when their election ads include material that’s been digitally altered or generated,” the company told PCMag.
The only exception is for political ads featuring minor alterations from AI-editing tools, such as resizing images, brightening a scene, or making background edits. Ads limited to such tweaks won’t need to carry the disclosure.
The requirement comes as some election groups have already been running AI-generated political ads. In June, a political action committee backing Republican candidate Ron DeSantis ran an attack ad featuring an AI-generated voice mimicking Donald Trump; the synthetic voice read aloud a post Trump actually wrote on Truth Social. Yet the ad carried no disclosure about the AI nature of the voice.
In April, the Republican National Committee also ran a political ad attacking President Biden that featured numerous AI-generated images depicting the alleged crises that would follow if he were reelected to a second term.
The images all look fairly realistic, but the clip only lists a small disclosure at the end indicating to viewers that the ad was built “entirely with AI imagery.” In contrast, Google’s upcoming requirement notes: “This disclosure must be clear and conspicuous, and must be placed in a location where it is likely to be noticed by users.”
Google told PCMag it already had a policy that effectively banned advertisers from using deepfakes to deceive or mislead users about political or social issues. How this will square with the new AI-generated ad disclosure rule isn’t clear. But it looks like the new disclosure rule was designed to address gray areas in AI-generated political advertising, like the pro-DeSantis ad that featured an AI-generated Trump voice reading a social media post that he wrote in real life.
For now, Google would only say: “We will continue to enforce all of our policies, including our manipulated media and election misinformation policies, wherever we find violations.”
Bloomberg also notes the upcoming ad policy from Google won't apply to YouTube, which already hosts numerous videos featuring AI-generated content. In June, YouTube also said it would allow videos that claim the 2020 presidential election suffered from “widespread fraud, errors, or glitches,” even though the service itself knows such allegations are false.
In the meantime, the Federal Election Commission has launched a rulemaking process that could lead to new restrictions on AI-generated political ads.