
Rosenworcel Calls for AI Reform at Berkeley Conference

"The bottom line is that the public deserves to know if the voices and images in political commercials are authentic"

On Friday, Sept. 27, FCC Chairwoman Jessica Rosenworcel spoke at the 7th annual Berkeley Law AI Institute, where she shared her thoughts on the use of artificial intelligence in the broadcast and media world.

Held in Berkeley, Calif., the conference dove into the latest developments in AI technology, governance, legal practice innovations, risk mitigation and regulatory frameworks, among other topics. Rosenworcel was called upon to speak to the Federal Communications Commission’s role in AI regulation.

In the following text, the chairwoman comments on several regulatory proposals that have become hot-button issues in recent weeks — such as the FCC’s proposed disclosure rules for the use of artificial intelligence in broadcast political ads, a highly polarizing measure as the November elections loom closer, and its proposal to regulate the use of AI in robocalls and robotexts.

Rosenworcel’s full comments can be found below.


Good morning!  It is great to be at Cal and join you at the Berkeley Law AI Institute.  This is the seventh year you have held this gathering, meaning you can lay claim to being at the forefront of the current generation of thinking about Artificial Intelligence. After all, it was only two years ago that ChatGPT woke up Washington and sparked a global discussion about the future of machine learning.

As the Chairwoman of the Federal Communications Commission, I can speak with some authority on this because during the last two years I have been involved in more conversations about AI technology than I can count. 

But it is not just policymakers who have thoughts about the future of AI. Taylor Swift has thoughts. Scarlett Johansson has a lawsuit. So does The New York Times.  We have Fake Drake soundalike tracks.  Unions have gone on strike to address the impact of this technology on their members’ livelihoods.  The top non-fiction book in this country is Nexus by Yuval Noah Harari.  It is about AI.  Last year’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence clocked in at 63 single-spaced pages.  Right now, there are more than 120 AI bills pending in Congress. 

By any measure, it’s a lot. Because at the same time we are asking what we can do to support this technology and what we can do to manage its risks. We want to know what it means for the future of work.  We are concerned about models that inherit the prejudices of the systems they are trained on and determine who gets a loan and who gets a job. We want to know what it means for competition.  We want to talk about energy consumption. We want to understand what it means for the future of humanity.

Like I said, it’s a lot. So back to Taylor Swift. Because she not only has eras, she has concerns about how her image has been used without her permission in ways that “have conjured up fears around AI and the dangers of spreading misinformation.” That fear is especially acute when it comes to elections. When we read something that seems hard to believe, there is an instinct to check the source.  We have not built up those same defenses when it comes to video or audio. If we see or hear something that looks realistic, we have a lifetime of experience that has conditioned us to believe our own eyes and ears. Think about it.  We say “I see you” and “I hear you” casually and we do it all the time.    

But gone are the days when the biggest risk of a misleading visual was a photoshopped still image that took time and real editing skill to craft. AI has become powerful enough to mimic human voices and create lifelike images and video cheaply, easily, and at massive scale.

Take this newfound ability, mix it with the conversations about AI that are happening in Washington and around the country, and it is no surprise that in a new Pew survey released last week, the public is worried. The American people, by a margin of 82 to 7, are concerned that AI-generated content will spread misinformation this campaign season, and that concern cuts equally across political lines. The same survey found the public is eight times more likely to think AI is being used for “bad” in a campaign as opposed to “good.”

The FCC is responsible for communications technology — broadcasting, broadband, wired and wireless services, as well as the satellites in our skies. That means our work includes oversight of television, radio, and phone networks, which right now are pushing out billions of dollars in campaign messaging in an effort to reach voters across the country. 

This is what led the Berkeley Law AI Institute to invite me to join you today and talk about what the FCC is doing on these matters and AI. So let’s get to it. 

Before going any further, I want to put my cards on the table and confess that I am much more hopeful about AI than pessimistic. A big reason why is that I am an optimist by nature. But that is not the only reason.  Because everywhere I look in communications technology, I can see how AI has the power to advance our grandest ambitions. 

Take the future of communications. It involves exponentially increasing the connections around us — between people, between people and things, and among things themselves. The data those connections produce can provide us with the ability to make more intelligent use of scarce resources, including spectrum itself, which is the invisible infrastructure in our skies. In fact, at the FCC we have worked with the National Science Foundation to help demonstrate the potential AI has to increase the efficiency and effectiveness of our networks.  Consider that a large wireless provider’s network generates several million performance measurements every minute. With this data we can provide more dynamic access to communications, using AI to self-configure, self-optimize, and self-heal facilities.  It provides a level of insight and precision that can increase network trust and help turn communications scarcity into abundance. 
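To make that self-healing idea concrete, here is a minimal sketch, assuming a toy setup rather than any carrier’s or the Commission’s actual system: it watches per-cell performance measurements, flags statistical outliers, and triggers a stand-in remediation step. Every name and threshold below is hypothetical.

```python
# A minimal, purely illustrative sketch of a "self-healing" loop over
# network performance data. All names and thresholds are hypothetical.
import random
from collections import deque
from statistics import mean, stdev

WINDOW = 60  # minutes of history kept per cell site

def is_anomalous(history: deque, latest: float) -> bool:
    """Flag a sample more than three standard deviations from the recent mean."""
    if len(history) < WINDOW:
        return False
    mu, sigma = mean(history), stdev(history)
    return sigma > 0 and abs(latest - mu) > 3 * sigma

def self_heal(cell_id: str) -> None:
    """Stand-in for a real remediation action, such as reconfiguring a cell."""
    print(f"reconfiguring {cell_id}")

histories: dict[str, deque] = {}

def ingest(cell_id: str, throughput_mbps: float) -> None:
    """Handle one of the millions of per-minute measurements a network produces."""
    history = histories.setdefault(cell_id, deque(maxlen=WINDOW))
    if is_anomalous(history, throughput_mbps):
        self_heal(cell_id)
    history.append(throughput_mbps)

# Simulate a stream with a sudden throughput collapse at minute 100.
for minute in range(120):
    sample = 95.0 + random.gauss(0, 1.0)
    if minute == 100:
        sample = 2.0
    ingest("cell-001", sample)
```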

This is what is coming. There is reason to be excited. Like I said, I am an optimist by nature. But I also have a law degree.  I am schooled in thinking about how things can go wrong. That means I understand why in the Pew survey there is so much concern about AI and elections.  I also know from experience.

In late January this year, I came into my office. Traffic, coffee, booting up the computer; it was like any other day. But the news rolled in that morning that thousands of voters in New Hampshire had just received a call from what sounded like President Biden, days before the primary election in the state.  “What a bunch of malarkey,” he said before telling those on the other end of the line there was no need to vote. 

In the months before that morning, I had sat through presentations on voice cloning technology. They were eerie and unnerving.  They were all about what could go wrong.  Still, there was not much to see in our robocall complaint database. But, on that morning, here it was.  During the election.  With the voice of the highest office in the land.

The FCC kicked into high gear. We acted fast. We unanimously adopted a ruling that made clear that “artificial or prerecorded” robocalls using AI voice cloning technology violate the Telephone Consumer Protection Act.  That’s a law from 1991.  It limits telemarketing and the use of automatic dialing equipment.  In 2021, the Supreme Court narrowed the scope of its protections against robocalls — I know it’s crazy — by limiting the definition of this equipment in Facebook v. Duguid. But we reached the conclusion that the law covers artificial voice cloning with the help of a group of State Attorneys General, including the New Hampshire Attorney General.  They have been our partners-in-arms in the fight against robocalls.

In fact, we have built a bipartisan army with 49 State Attorneys General who have signed on to a Memorandum of Understanding to work with the FCC on junk robocalls.  The ruling we made bringing AI voice cloning technology under the Telephone Consumer Protection Act is important because it gives our state colleagues the right to go after the bad actors behind these calls and seek damages under the law.

Next, we worked with carriers to trace those responsible for this New Hampshire calling campaign.  When we found the carrier putting this junk on the line, we sent a cease-and-desist letter and notified all other carriers that they could stop carrying its traffic.  Then we proposed a fine, and the carrier ultimately paid $1,000,000 and put in place policies to stop these calls going forward.

Our traceback efforts also led to the individual behind the call itself — Steve Kramer. We proposed a $6,000,000 fine. He has not responded. So yesterday at the FCC we adopted a Forfeiture Order to enforce it in court. 

With these actions, we made clear that if you flood our phones with this junk, we will find you and you will pay. And it is not just the FCC you need to worry about.  Because remember our friends in the states are working with us.  Right now the New Hampshire Attorney General is prosecuting Steve Kramer for voter suppression and impersonation of a candidate.

So now let’s summarize. We moved quickly, we worked with our state colleagues, and we enforced the law. We sent a message. But this New Hampshire episode will not be the last time we see this technology used without permission to confuse and misinform. 

We have to plan for this future now. That means we need to do it with the tools we have. It is not easy. But with all hard problems you have to start somewhere.

So we are starting with transparency. Last month, the FCC started a rulemaking to address the use of AI in robocalls and robotexts. We are taking public comment on it now. 

We proposed an initial definition of an AI generated call as “a call that uses any technology or tool to generate an artificial or prerecorded voice or a text using computational technology or other machine learning, including predictive algorithms, and large language models, to process natural language and produce voice or text content to communicate with a called party over an outbound telephone call.” 

Then we proposed requiring callers and texters to make clear when they are using AI-generated technology. That means before any one of us gives consent to receive calls from companies and campaigns, they need to tell us if they are using this technology.  It also means that callers using AI-generated voices need to disclose that at the start of a call.
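Read as a rule, the proposal turns on a few factual questions about each call. Purely as a reading aid, and not the Commission’s own test, one could sketch those elements as a simple check; every field name here is invented:

```python
# A reading aid only, not the FCC's test: hypothetical fields mirroring the
# elements of the proposed definition and the proposed disclosure duties.
from dataclasses import dataclass

@dataclass
class OutboundCall:
    voice_or_text_is_machine_generated: bool  # artificial/prerecorded voice or generated text
    uses_computational_technology: bool       # e.g., predictive algorithms, large language models
    disclosed_at_consent: bool                # AI use stated when consent was obtained
    disclosed_at_call_start: bool             # AI-generated voice announced at the top of the call

def is_ai_generated_call(call: OutboundCall) -> bool:
    """Mirrors the elements of the proposed definition."""
    return call.voice_or_text_is_machine_generated and call.uses_computational_technology

def meets_proposed_disclosure_rules(call: OutboundCall) -> bool:
    """Non-AI calls are unaffected; AI calls need disclosure at consent and at call start."""
    if not is_ai_generated_call(call):
        return True
    return call.disclosed_at_consent and call.disclosed_at_call_start
```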

This kind of transparency is important. And again, you have to start somewhere. By starting with disclosure we do not restrict speech, we do not restrict technology, we are instead seeking to create a norm—legally and socially—that when AI is being used you deserve to know. 

This same spirit of transparency drove the FCC to consider other changes to its policies. Earlier this summer, we started a rulemaking to take a look at our longstanding practices regarding campaign advertisements. 

Since the 1930s, the FCC has required broadcasters — television and radio stations — to keep a publicly available inspection file. Today it has information about who bought a campaign advertisement, how much they paid for it, and when it ran.  This disclosure ensures that those who use the public airwaves for local, state, federal, and issue campaigns publicly disclose facts that matter in democracy.  When it comes to these advertisements, they have a duty to tell us who is responsible and who should be held accountable.

Over the years, this practice has been updated. Cable and satellite were brought into the fold. We now have the standard on-air disclosure, too, with candidates announcing who they are and making clear that they approve the advertisement.  Then, over a decade ago, the FCC took steps to make sure these public inspection files were not just kept in dusty cabinets but were available online.  Our policies change as technology changes.

Fast forward to July. The FCC proposed that all parties that already have to file information about their television and radio campaign advertisements should also indicate if AI is being used.  In addition, we are looking at requiring on-air disclosure of AI use in these advertisements.  In short, we proposed establishing a simple standard based on disclosure.  If a candidate or issue campaign used AI to create an advertisement, they should share that. 

Here, we proposed an initial definition of AI-generated content as “an image, audio, or video that has been generated using computational technology or other machine-based system that depicts an individual’s appearance, speech, or conduct, or an event, circumstance, or situation, including, in particular, AI-generated voices that sound like human voices, and AI-generated actors that appear to be human actors.” We are taking public comment on this now.
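Taken together, the public file’s existing fields and the proposed AI flag amount to a simple record. The sketch below is purely illustrative; the field names and values are invented, not the FCC’s filing format:

```python
# Illustrative only: a hypothetical record combining the public inspection
# file's existing fields with the proposed AI-use disclosure.
from dataclasses import dataclass
from datetime import date

@dataclass
class PoliticalAdDisclosure:
    sponsor: str                         # who bought the advertisement
    price_usd: float                     # how much they paid for it
    air_date: date                       # when it ran
    contains_ai_generated_content: bool  # the proposed new field

entry = PoliticalAdDisclosure(
    sponsor="Example for Senate Committee",  # hypothetical
    price_usd=12_500.00,
    air_date=date(2024, 10, 1),
    contains_ai_generated_content=True,      # would also trigger on-air disclosure
)
```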

The bottom line is that the public deserves to know if the voices and images in political commercials are authentic or if they have been manipulated. But to be clear, we would make no judgment on the content being shared.  This is not about telling anyone what is true and what is false.  It is about empowering voters, viewers, and listeners to make their own choices.  

The work we have done on robocalls and the proposals we have made on campaign advertisements are grounded in the same idea.  They are built on the notion that with AI the place to start is transparency.  Across government, I believe this is the place to begin.  We recognize that the FCC is just one part of a broader effort that involves other federal officials, state and local governments, and private actors.  No one of us can accomplish this alone.  But we all can work together to support disclosure.  We can all work together to support transparency.

Back to Taylor Swift.  She recently said: “The simplest way to combat misinformation is with the truth.”  Amen. But the truth does not have a chance if we are not open and honest about when we use AI for voice cloning and image and video manipulation.  What I have described here is not the end of the effort.  It is simply the right place to start.   

I am not the only one who feels this way.  As I mentioned at the start of my remarks, the most popular non-fiction book in this country right now is an examination of the historic implications of AI.  It is written by Yuval Noah Harari, who you may know as the author of Sapiens or as a recent guest on Kevin Roose’s Hard Fork podcast.  I know Kevin is closing out this conference, so I hope I am not stepping on his material. 

The book’s main observation is that, if you look at the course of human history, the essential ingredient for large-scale democracy is information technology.  Technologies like the printing press, telegraph, and radio are what enabled democratic conversations at a scale beyond small city-states.  If modern democracies are built on information technology, that means major shifts in that underlying technology will change how we operate as a society.

The internet and social media have already given us the most sophisticated information technology in history.  But, in the book, Harari says we are losing the ability to talk with each other and points to signs of democracy fraying at home and across the world.  Enter AI.  It is, he suggests, going to be the most dramatic change in technology we have ever seen.

In other words, the stakes of this debate about AI and information flows could not be higher. So I wanted to end by highlighting one of his prescriptions. Harari contends that if and when AI-generated voices are taking part in our conversations and engaging with us, we should know.  In other words, we need transparency.  Without it we will not know that a voice is synthetic, and that damages public trust and democracy itself.

So what you have is a popular historian reaching the conclusion that with AI, transparency matters. You have Taylor Swift concluding that with AI, transparency matters.  And you have the FCC working with the laws we have on the books to reach the same conclusion.

The challenges AI presents are real, but so are the opportunities.  So let’s start with disclosure; let’s start with transparency.  Let’s give every consumer, citizen, viewer, and listener the facts they need to know to make their own decisions, and let’s support the transparency that is essential for democracy. 

