
Rosenworcel Circulates Proposal to Regulate AI Use in Political Ads

She hopes the agency will consider a disclosure requirement for broadcasters and candidates

The use of artificial intelligence has been top-of-mind for many broadcast professionals. Now the chairwoman of the Federal Communications Commission is looking to regulate the use of AI in political advertisements.

On Thursday, Jessica Rosenworcel announced that she is circulating a proposal that, if adopted, would ask whether the agency should require an on-air and written disclosure when AI-generated content appears in political ads on radio and TV. The proposal would also request comment on a “specific definition” of what constitutes AI-generated content.

According to an FCC press release, the Notice of Proposed Rulemaking that Rosenworcel hopes to open would initiate a proceeding that “recognizes consumers’ right to know when AI tools are being used in the political ads they view.”

The proposal would also consider applying disclosure rules to both candidate and issue advertisements to increase transparency. According to the draft NPRM, the FCC would apply the disclosure requirements to broadcasters and entities that engage in origination programming, including cable operators, satellite TV and radio providers and section 325(c) permittees.

In the press release, Rosenworcel said the proposed proceeding would not prohibit the use of such content, only require the disclosure of any AI use within political ads.

“As artificial intelligence tools become more accessible, the commission wants to make sure consumers are fully informed when the technology is used,” said Rosenworcel.  “Today, I’ve shared with my colleagues a proposal that makes clear consumers have a right to know when AI tools are being used in the political ads they see, and I hope they swiftly act on this issue.”

Per the FCC, the use of AI is expected to play a significant role in the creation of political ads in 2024 and beyond, “but the use of AI-generated content in political ads also creates a potential for providing deceptive information to voters, in particular, the potential use of ‘deep fakes.’”

These deep fakes could include altered images, videos or audio recordings that depict people doing or saying things they did not actually do or say, or events that did not actually occur.

In the wake of Rosenworcel’s announcement, Ishan Mehta, program director for the Media and Democracy Program at Common Cause, released a statement:

“Americans expect and deserve to know whether the content they see on our public airwaves is real or AI-generated content – especially as the technology is increasingly being used to mislead voters. This rulemaking is welcome news as the use of deceptive AI and deepfakes threaten our democracy and is already being used to erode trust in our institutions and our elections.

Mehta continued: “It is imperative that regulations around political advertising keep pace with the onward march of new and evolving technologies.”

The FCC notes that the Bipartisan Campaign Reform Act gives the agency authority when it comes to regulating political advertising.

An NPRM like this one opens only if a majority of commissioners vote in its favor. With a Democratic majority at the commission, Rosenworcel is likely to get at least that far. If adopted, her proposal would launch a proceeding during which the commission would take public comment on the actual rules.

[Read More Radio World Stories About Artificial Intelligence]
