Estimates from AdImpact show that the two major-party presidential candidates, along with various political action committees (PACs), are expected to invest about $381 million in radio advertising this election cycle. But some radio broadcasters say they fear the FCC’s plan for political AI disclosure would upend the political ad-buying process.
A group of radio broadcasters, in joint comments filed with the commission, says a mandate to disclose when AI is used in creating political ads, along with other required steps, would not be effective in fostering an informed electorate. The FCC proposes that broadcasters air an announcement for political ads using AI stating: “The following message contains information generated in whole or in part by artificial intelligence.”
The broadcasters in the group, including Alpha Media, Cumulus Media and Cox Media Group, say the FCC’s proposal would provide a “purely cosmetic solution at a considerable cost to the public and the parties tasked with implementing the proposed rules.”
The AI disclosure proposal also would require licensees to include a notice in their online political files when political ads include AI-generated content.
“The proposed requirement for broadcasters to be responsible for vetting whether political ads contain AI-generated content would upend the political ad-buying and approval process, which will force broadcasters to incur costs for additional staff and training,” according to the joint filing by broadcasters.
[Related: “NAB Tells FCC Its Plan for Political AI Disclosure Can’t Stand”]
Broadcasters would need to have staff available to implement disclosures and field concerns about conflicting reports of AI-generated material, according to the filing. For instance, third-party claims that advertisements include AI-generated content “would needlessly place broadcasters in the middle of disputes over content they did not create and whose authenticity they have no way to confirm,” the broadcasters commented.
In addition, broadcasters have “hard evidence of the costs associated with these requirements as certain states have adopted laws requiring AI disclaimers.”
The joint filing by broadcasters also argues that because disclosures would not be required for ads run on popular platforms, like streaming services and social media, the rule would simply confuse listeners and viewers rather than inform them. The FCC has said its proposal on AI does not apply to online and social media ads, since they are outside the commission’s authority.
The broadcasters continue: “Further, the proposed AI-generated content disclosure and transparency requirements — if adopted — would exceed the commission’s authority under the Communications Act and would violate the First Amendment.”
iHeart, in its own comments to the FCC, says the limited examples of AI use in political advertisements identified in the record, virtually all of which appeared only on digital platforms, do not justify a rush to implement rules that would have only limited applicability.
“[The proposed rules] will not increase transparency and will only result in more confusion about the content of political ads,” the broadcaster said.
iHeart, which owns 860 radio stations in 160 markets, says only Congress has the authority to properly address the use of AI in political advertising and the disclosure thereof.
“While iHeart recognizes the potential benefits of uniform federal rules governing the use of AI in political advertising, they must come in the form of comprehensive legislation from Congress and not through a piecemeal, agency-driven approach,” iHeart told the FCC.
iHeart says the record establishes at least three critical flaws in the commission’s proposal:
“First, it is built around a definition of AI-generated content that is so broad that it could capture most ads on radio today, resulting in excessive use of disclosures and audience apathy. Second, it would rely upon standard language for the disclosure that provides no information to the audience about what portion of the ad constitutes AI-generated content. And third, it only applies to a subset of the political advertising ecosystem (and, as explained above, the ads least likely to include a deceptive use of AI).”
The FCC, which began looking at new transparency standards in May 2024, has said its proposal would not prohibit the use of AI in political ads but would only require disclosure of its use.
Comments on the FCC’s NPRM can be viewed via the FCC’s online filing system under proceeding 24-211.