
It’s no secret: Artificial intelligence has captured America’s attention, and many industries are exploring how to leverage AI to cut costs and increase efficiencies.
Broadcast media are no exception. Recent industry analysis suggests that the radio industry has been an early adopter of AI and automation, and the industry's use of AI will likely continue to grow in the years ahead.
In a recent Radio World Newsmakers interview, WETA Chief Engineer William Harrison shared his belief that 2026 is truly going to be the year of AI in broadcasting.
He noted that broadcasters have already been experimenting with AI for programming choices and even for AI DJs, and he envisions that there could be more uses of AI for both stations and their talent.
Similarly, my former partner John Garziglia recently wrote for Radio World that AI was a consistent theme at the Consumer Electronics Show 2026, observing that AI will have a major impact on the industry in just the next few years.
When implemented effectively, AI can be a useful tool for radio broadcasters to streamline their operations, reduce expenses and stay competitive against streaming services, podcasts and digital platforms. Indeed, as Harrison noted, many stations have already begun adopting AI-powered tools for tasks like voice tracking, playlist curation and production editing — areas that traditionally required significant human time and expertise.
As these tools continue to evolve, the future of radio will “involve fewer human voices and more algorithms,” as one author put it a while back on medium.com.
As AI continues to become more prevalent, broadcasters will need to balance their use of AI against traditional broadcasting skills and ensure that they continue to invest in human talent for those roles where creativity, community engagement and judgment remain essential.
Industry concern
While AI can be a useful tool, broadcasters must be aware of the dangers AI presents for the industry.
Given that we are in an important mid-term election year, the issues are all the more significant. During the last presidential election cycle, we saw first-hand just how easily AI can spread misinformation.
We read about the reported 2024 robocall impersonation of President Joe Biden that told New Hampshire voters not to vote in the state's presidential primary. Reports are already out showing that Russia and China are using new AI tools to sow division in the U.S. and to undermine America's image.
Broadcasters must be prepared to address AI disinformation and avoid eroding their credibility, reliability and the public trust in their stations.
In addition, when utilizing AI on the content side, broadcasters should ensure they are complying with industry standards and regulations.
For example, as a result of concerns over AI impersonation, SAG-AFTRA, the union representing broadcast journalists, news editors, program hosts and other media professionals, has stated that every person has an inalienable right to their name, voice and likeness. Any use of an individual's name, voice or likeness must be made with the individual's consent and accompanied by just compensation.
It has also adopted the position that any recreated or synthetic performance must be paid on scale with an in-person performance.
As AI tools continue to develop, broadcasters must ensure they are aware of and comply with industry standards and requirements.
The law steps in
AI is an increasing area of focus for both Republican and Democratic lawmakers, and certain states have started to tackle AI issues through the enactment of new state laws.
The New York Times reported that in 2025, all 50 states and the territories introduced AI legislation, and 38 states adopted about 100 laws. While not all AI laws will affect broadcasters, many will, and broadcasters must stay alert for new state requirements.
For example, New York passed legislation that requires any advertisement produced with AI to disclose whether the ad includes AI-generated performers, and also “requires consent from heirs or executors if a person wishes to use the name, image, or likeness of an individual for commercial purposes after their death.”
Similarly, California has adopted AI laws that prohibit “knowingly distributing an advertisement or other election material containing deceptive AI-generated or manipulated content” and require that “electoral advertisements using AI-generated or substantially altered content feature a disclosure that the material has been altered.”
That legislation highlighted serious issues for broadcasters. When it was explained that broadcasters often cannot know for certain how some programs and political ads are produced, and that they face the no-censorship provisions of §315 and the mandatory access rights of §312(a)(7) of the Communications Act for federal candidates, the law was changed to hold broadcasters responsible only when they have “actual knowledge.” Instead, broadcasters are required to implement a policy regarding the use and disclosure of political AI and to communicate that policy to any entity purchasing a political ad.
AI has been a topic not only at the state level but also at the federal level.
A number of AI bills have been introduced in the U.S. Congress as well. For example, last April, Sen. Chris Coons (D-Del.) introduced the No Fakes Act of 2025, S.1367, which would protect individuals against unauthorized use of any “computer-generated, highly realistic electronic representation that is readily identifiable as the voice or visual likeness of an individual.”
Thus far, no federal AI laws have been enacted, but this is a hot topic for many legislators, and it would not be surprising if federal laws governing AI were passed in the coming years.
State regulation and the Trump executive order
Concerned that state-by-state legislation could impede AI development and harm the United States' national and economic security, President Trump issued Executive Order 14365 on Dec. 11.
The order's stated purpose is to promote United States leadership in AI, which, it asserts, will enhance national and economic security.
To accomplish this, it establishes an AI Litigation Task Force that is responsible for evaluating and challenging state AI laws that are inconsistent with a “minimally burdensome national framework for AI.”
Within 90 days, the secretary of commerce, the White House special adviser for AI and crypto and senior White House policy officials must publish an evaluation identifying state AI laws deemed onerous and appropriate for referral to the AI Litigation Task Force.
In addition, the executive order tasks various federal officials with:
- Issuing a policy that makes states with onerous AI laws ineligible for Broadband Equity, Access and Deployment (BEAD) non-deployment funds.
- Creating a uniform federal policy framework for AI that preempts state AI laws that conflict with the policy adopted.
- Initiating a proceeding at the FCC to determine whether to adopt a federal reporting and disclosure standard for AI models that preempts conflicting state laws.
- Issuing a policy statement on the application of the FTC’s prohibition on unfair and deceptive acts or practices to AI models.
The exact impact of this executive order on state AI laws will be important to watch in the coming months.
An AI guide for now
Every station considering the use of AI in its broadcasts should adopt a policy that conforms to the SAG-AFTRA principles of full disclosure and equal pay and complies with state disclosure requirements.
Where a broadcaster is presented with prerecorded programming or advertising, it could consider requiring the programmer to certify whether AI technology was used in preparing and creating the material and, if so, that a proper and full disclosure accompanies it. This is particularly important for political programming and advertising, where the licensee is restricted from editing content and may be required by law and regulation to allow access to its airwaves.