What Are the Legal Considerations of AI in Radio?

Laws and best practices are only beginning to evolve

Does generative AI open new legal issues for radio broadcasters? We asked three respected broadcast attorneys.

“Right now, I have not been getting many AI questions from broadcasters themselves,” said David Oxenford, partner at Wilkinson Barker Knauer.

“But I have been working with a number of broadcast and other media groups in reviewing state laws limiting AI use.

“Many bills deal with AI in political advertising, but there are also broader bills that restrict AI use in any medium if it depicts a person who has not consented to the use of their voice or likeness.”

Oxenford said there are many aspects to these bills, from making sure parody and satire are protected (“you don’t want a late-night TV show to get sued for a skit featuring an AI-generated Donald Trump cartoon for using his likeness without permission”) to the fact that, under the Communications Act and FCC rules, broadcasters cannot censor the content of candidate ads, even if those ads use AI.

“Proposed legislation on AI is often worded very broadly and needs to be reviewed to ensure that it does not have unintended consequences,” Oxenford said.

Nor has Scott Flick, a partner at Pillsbury Winthrop Shaw Pittman, received many AI questions from stations, presumably because radio’s use of generative AI is still sparse, or because stations simply haven’t focused on the legal risks yet.

“That will change when we have a major incident involving a broadcaster’s use of AI, and everyone is suddenly hyper-focused on ensuring the same thing doesn’t happen to their own station,” Flick said.

Political spots

What responsibility does a radio broadcaster have regarding the possible use of AI in political spots?

Gregg Skall, a member at Telecommunications Law Professionals, said broadcasters are charged with a public interest responsibility, so even before considering the political rules, they have a duty to be honest with their audiences and to not misrepresent who is speaking to them.

“So to the extent possible, they should make efforts to assure that material presented to them as the voice of a particular speaker is, in fact, that person speaking,” Skall said.

“In the case of political spots, the Communications Act requires that when an advertisement contains the recognizable voice of a legally qualified candidate for elective office, equal opportunities must be made available to their legally qualified opposing candidates in the same election; those candidates must be offered comparable or lowest unit rates; and federal candidates have a right of access to the station.

“However, the rule does not say that these attributes apply to a voice that ‘sounds like’ the candidate. It is supposed to be the actual candidate’s voice. So, the challenge for the broadcaster is to properly and legally treat AI-created voices differently,” he continued.

“Perhaps the best way to deal with AI in this regard is to require, as part of any such order, a declaration under oath from the agency or other spot buyer that the voice is that of the candidate. That may create a dilemma for the agency, since it may not have that information. So there is a new lesson for political agencies and other spot buyers as well.”

Scott Flick emphasized that when it comes to candidate spots, the “No Censorship Rule” prohibits stations from modifying even a deceptive candidate ad, so trying to analyze a candidate spot for the use of deceptive AI wouldn’t have much purpose.

“Third-party issue ads, however, always create the risk of the station being sued for defamation, so stations have the same obligation to do their diligence on the truthfulness of such spots as they have always had,” he said.

“The use of AI in the creation of such spots just makes it that much more difficult for a station to spot deceptive material, which is why the state broadcasters associations have been focused on ensuring state legislation on AI makes clear that it is the advertiser, and not the station airing the ad, who should be at risk for the content of that ad.”

What happens if a station airs a spot by a candidate with audio of their opponent, and the opponent complains that the content was faked?

 If the spot is considered a candidate “use” under the law, Flick said, the station can’t demand that it be modified or modify the spot itself. “The opponent asserting that the spot has deceptive AI-created content needs to complain to the advertiser and examine the options available under a growing number of state laws on AI content.”

Gregg Skall recommends that the station request from the candidate or their authorized committee a statement that it was not the candidate’s voice. “On that basis, it should challenge any equal time requests,” he said. “I would also ask the candidate to make a recorded statement attesting to the fact that the spot was fake and not his/her voice.”

And David Oxenford said that, as with any ad, if a broadcaster is put on notice that a non-candidate ad contains potentially defamatory material, they need to review that ad and decide if it is in fact defamatory. 

“If they continue to run the ad after having received notice that the ad was false, they could have liability. AI simply makes it easier for political advertisers to generate content that could be found to be defamatory if it portrays a candidate falsely.”

He added: “This is a time to have an attorney on speed dial to help assess the risks — and to do it quickly.”

Synthetic voices

We asked these experts whether best practices have emerged for a broadcaster who wants to create content using a cloned voice of their talent or a third party.

“The first question is whether it is authorized by the talent or celebrity,” said Skall. “This should not be done without permission, which should be documented. In any case, however, a disclaimer should be aired stating that the voice that was heard was created artificially with appropriate permission.”

While AI applications in radio are obviously still developing, Flick said it’s clear that profiting from a particular person’s identifiable voice is going to violate “right of publicity” and similar laws in many states unless the station has secured the necessary rights. 

“Life will get trickier when the response to these laws is voices that sound a lot like a particular person, but not quite identical,” he said.

“That’s where courts will get involved, making findings as to when the creator of an artificial voice is profiting off of someone else’s fame, versus the artificial voice just having a few characteristics in common with a famous voice. That will be expensive litigation, regardless of who wins.”

Oxenford too noted the variety of state laws covering “rights of publicity.” He said using anyone’s voice without permission is a risk. 

“This is particularly true for celebrities. You need to be aware of those laws and be very careful in appropriating anyone’s voice or likeness. The risk varies depending on the content — use in ads is likely going to raise issues, using the voice in a parody or satire may be defensible if it is clear from the context that it is not the real celebrity that is on the air. But even with parody and satire, there are not clear safety rules — it all depends on the context and the whims of the court in which you get sued.”

Evolving rules

Meanwhile, AI’s regulatory environment is changing quickly. 

“As counsel to the National Alliance of State Broadcasters Associations, I’m seeing state bills pop up almost daily, as legislators seek to ensure that the law stays ahead of AI developments, if that’s possible, rather than forever playing catch up,” Flick said.

“However, the most notable aspect of these legislative efforts is the struggle to define what uses of AI are ‘bad’ and then what to do about them. Over time, we’ll find out which approaches are most effective, and the laws will then start to converge on those successful approaches.” In that regard, he said, states will serve as legislative laboratories, likely influencing eventual federal legislation regarding AI. 

“In the meantime, the challenge for stations will be staying on top of these legislative efforts, both to prevent sloppily worded bills from becoming law and harming innocent broadcasters airing third party content, and to ensure stations stay on the right side of these new laws.”

Might the Federal Communications Commission get involved? The attorneys were doubtful.

“The FCC traditionally stays away from reviewing content of broadcasts — leaving that to the courts and other agencies,” Oxenford said. “Watch for federal legislation to see if the FCC ends up with any more power in this area.”

Flick commented, “While the FCC has made clear that it will be diving in with both feet on AI in other contexts, there really isn’t a reason to do that in radio. Though the use of AI that goes awry could result in violations of existing rules — e.g., indecency or the rule against broadcast hoaxes — that just emphasizes that existing rules already address the FCC concern, and AI is just another mechanism that might cause a violation of those existing rules.”

Gregg Skall noted that the Federal Election Commission has responded to a petition for rulemaking that seeks a rule to prohibit a candidate or their agent from fraudulently misrepresenting other candidates or political parties and to make it clear that the related statutory prohibition applies to deliberately deceptive AI campaign ads. 

“Were this rule to be enacted, it is likely that it would provide additional guidance or even a template for the FCC in some parallel rulemaking effort.”

Skall said that according to the Council of State Governments, since 2019, 17 states have enacted 29 bills focused on regulating the design, development and use of artificial intelligence. 

“These bills primarily address two regulatory concerns: data privacy and accountability. Legislatures in California, Colorado and Virginia have led the way in establishing regulatory and compliance frameworks for AI systems,” he said.

“There are a number of policies the states are focusing on, but of major significance for broadcasters are those meant to protect individuals from the impacts of unsafe systems and to protect their privacy. Any use of AI that makes its way into the broadcast studio should be examined with these goals in mind. The council website provides a list of state AI efforts.”

Similarly, he noted, the National Conference of State Legislatures reports that in the 2023 legislative session, at least 25 states, Puerto Rico and the District of Columbia introduced AI bills, and 18 states and Puerto Rico adopted resolutions or enacted legislation. 

Flick reiterated that broadcasters must pay attention as such bills come along seeking to regulate AI, particularly in advertising.

“Broadcasters need to make sure that these bills are clear that if a deceptive ad is aired, it should be the advertiser and not the distributor of that ad — in this case, the broadcaster — who should be liable for any AI-induced deception that results,” he said.

“While some AI-generated video or audio may be relatively innocent — for example, making a picture look like it was taken in better lighting — where deceptive AI-generated content is placed in an ad, the purpose is to fool the viewer, and that includes the station airing the ad. It is simply not practical for a radio station to do a deep dive into every ad to figure out what is true and what just seems true based on the content of the ad.”

Other concerns

Are there other areas where AI raises legal concerns?

“Except for the increased emphasis on broadcast content, radio stations are in most ways like other businesses in terms of the benefits and risks of AI,” Scott Flick said.

“It will make some operations more efficient while in others it will generate a poorer quality result, but one that may be deemed good enough given the cost or time savings involved. From an FCC perspective, the FCC has generally rejected efforts by radio stations to defend themselves against an asserted rule violation by claiming the violation was caused by an inattentive employee. Blaming a violation on AI will likely fare no better.”

David Oxenford worries about some programs that are used to generate local news or provide local hosts for programming. 

“Especially in smaller markets, where there are few local news sources, AI could copy information from a local source, and if the broadcaster uses it, there could be copyright liability,” he said.

“From my own experiments using AI, it can also have ‘hallucinations’ and report on things that never happened. There is already at least one court case against an AI company whose system generated a story about the criminal conduct of an individual, conduct that never happened. This kind of story, if broadcast, could lead to defamation claims. So carefully review any content that AI generates before it goes on the air.”

And while AI has proven useful for quick answers to sometimes complicated questions and research projects, Gregg Skall agreed that it can be unreliable. 

“To the extent that broadcasters begin to rely on it, accuracy and honesty with their audience must be foremost in their value system and processes. So fact-check the produced material; AI can provide leads, but not final answers. Be honest with your audience and let them know if the host, DJ or other speakers are real or AI-generated.”

This article is from the free ebook “Artificial Intelligence in Radio.” 
