Broadcasters Push Back Against AI Disclosure Proposal

The FCC received more than 2,000 comments on the topic of AI disclosure in political ads

The FCC received more than 2,000 comments about its proposed disclosure rules for the use of artificial intelligence in broadcast political ads, a proposal that has become highly politicized.

The notice of proposed rulemaking, if adopted, would require broadcasters to identify political ads that include AI-based content. The commission also proposed requiring licensees to include a notice in their online political files for political ads that include AI-generated content.

Summarizing the position of the National Association of Broadcasters, Rick Kaplan, chief legal officer and executive vice president, Legal and Regulatory Affairs, wrote in a blog entry that the FCC has limited regulatory authority on this issue and that the proposal risks doing more harm than good.

“While the intent of the rule is to improve transparency, it instead risks confusing audiences while driving political ads away from trusted local stations and onto social media and other digital platforms, where misinformation runs rampant,” Kaplan wrote.

NAB believes Congress is the body that should create rules to hold those who create and share misleading content accountable, on both digital and broadcast platforms.

“Instead of the FCC attempting to shoehorn new rules that burden only broadcasters into a legal framework that doesn’t support the effort,” Kaplan wrote, “Congress can develop fair and effective standards that apply to everyone and benefit the American public.”

In its official comments, the NAB wrote that Congress has not granted the FCC authority over political advertisers and ad creators in this area. “A disclosure regime cannot be successful if the information that triggers the disclosure is not accurate or even available, but in this instance that information is controlled by the advertisers.”

Further, NAB said, the disclaimer proposed by the FCC is generic and doesn’t provide meaningful insight for audiences. (For radio, the FCC proposes that broadcasters provide an on-air announcement stating: “The following message contains information generated in whole or in part by artificial intelligence.”)

NAB said, “AI is often used for routine tasks like improving sound or video quality, which has nothing to do with deception. By requiring this blanket disclaimer for all uses of AI, the public would likely be misled into thinking every ad is suspicious, making it harder to identify genuinely misleading content.”

The NAB and the Motion Picture Association also have said the proposal “raises significant, novel factual and legal issues that will entail extensive fact-finding and research.”

The FCC has emphasized that it is not proposing to ban or otherwise restrict the use of AI-generated content in political ads. [See an FCC fact sheet on the issue.] It said it is particularly concerned about the use of AI-generated “deepfakes.”

The proposal is being pushed forward by Chairwoman Jessica Rosenworcel; with the votes of her two Democratic colleagues, it would pass. In May the senior Republican on the commission, Brendan Carr, said “the FCC’s attempt to fundamentally alter the rules of the road for political speech just a short time before a national election is as misguided as it is unlawful.”

Many of the filed comments came from people who identified themselves as private citizens. Many used similar wording: “I support the FCC’s proposal to regulate deepfakes and AI, to create more clarity and understanding for listeners and viewers of content, especially when it comes to our elections.”

But Cox Media Group was among those that wrote to oppose the change. It said there are legitimate questions about the proposal’s legality, questions that would produce uncertainty about the rules for years and muddle any messaging about the use of AI in political advertisements.

CMG, which owns 50 radio stations, also said the change “would confuse, not inform audiences” and possibly drive advertisers away from using broadcast altogether. “To be clear, (the proposed rules) will not eliminate the use of AI-generated content in political ads: It will merely drive those ads to unregulated platforms.”

Political ad dollars add up quickly for broadcasters. Estimates from AdImpact show that the two major-party presidential candidates and the various political action committees are likely to invest more than half a billion dollars in radio and TV advertising over the final seven weeks of this campaign cycle.

The Federal Election Commission separately is looking at AI in federal campaign advertisements. This week the FEC said it believes fraudulent misrepresentation using artificial intelligence in federal campaign advertisements is already covered by existing campaign finance law.

The Federal Election Campaign Act, according to the FEC, prohibits any person from falsely representing that they are speaking, writing or acting on behalf of a federal candidate or a political party for the purpose of soliciting contributions.

“The law also prohibits a candidate, his or her employee or agent, or an organization under the candidate’s control, from purporting to speak, write or act for another candidate or political party on a matter that is damaging to the other candidate or party,” according to the FEC.

The FEC said Thursday it has decided not to initiate a rulemaking. It’s not clear how that decision might affect the FCC’s proposal. The two federal agencies have been jockeying for position on regulating the use of AI in political advertising.

Reply comments on the FCC’s NPRM are due Oct. 11. File comments via the FCC’s Electronic Comment Filing System, referring to proceeding 24-211.
