With the 2026 NAB Show approaching, we’re presenting a series of previews in which we ask exhibitors about their plans and expectations.
Jessica Powell is founder and CEO of AudioShake.

Radio World: Your company won an NAB PILOT Innovation Award two years ago at the convention. What products or themes will you highlight this year?
Jessica Powell: AudioShake makes sound usable and controllable by automatically separating audio into its component parts. That allows broadcasters and media companies to isolate dialogue, music and effects so they can edit, distribute and monetize content more easily.
We started in post-production workflows, but today we also support a range of real-time, at-scale applications. These include live audio cleanup for broadcast teams, improving transcription and translation accuracy, and removing licensed music from archive or sports content before publication.
At NAB this year we’ll be debuting our fastest and highest-performing models yet. They’re designed for low-latency environments like live broadcast, live events and voice AI systems that rely on clean audio inputs.
RW: AI technology has swept through every industry. What have been the most important developments in how AI is deployed in your market segment?
Powell: Audio has been one of the hardest formats for AI to handle. Sound is messy, context-dependent and unforgiving when models aren’t good enough. As a result, audio has often lagged behind formats like text and video, which have become searchable, editable and highly structured.
That’s starting to change.
Audio separation models have reached a level of quality where they can be trusted in real production environments. As a result, audio AI is now being embedded directly into the infrastructure that media and broadcast companies already operate: media asset management systems, broadcast chains, archive workflows and production pipelines.
Companies like ESPN, AI-Media and Ortana have integrated AudioShake directly into their workflows and platforms. This allows tasks such as removing copyrighted music before publishing, preparing dialogue for captioning systems or making large archives searchable and reusable to happen automatically inside existing workflows.
For broadcasters, this means audio is finally becoming as searchable, sortable and actionable as the rest of their content.
RW: What other business trends will you be watching for at the convention?
Powell: A few things are on my radar heading into this year’s show.
The first is catalog monetization. Broadcasters are sitting on enormous archives — decades of interviews, live performances and field recordings — that are either inaccessible or too expensive to process at scale. The economics of unlocking that content are shifting, and I expect we’ll see more broadcasters asking not just “Do we own this?” but “How do we actually make money from it?”
I’ll also be paying attention to how content owners approach copyright compliance before publication rather than after.
Today, the standard workflow is reactive. A sports team might publish a clip and only discover there’s a problem once a copyright flag appears. Meanwhile, content owners with large archives often leave valuable footage unused because the music licenses attached to it have expired. Pre-publication cleanup — removing licensed music before content is published — is a far more efficient workflow, and awareness around that shift is growing. NAB is often where those operational changes move from interesting ideas to real industry adoption.
RW: What else should we know?
Powell: The same models powering enterprise workflows for major studios and broadcasters are now available to teams of any size through AudioShake Indie. That includes our dialogue, music and effects separation models, used across the film and TV industry. This is the same technology used to isolate Maria Callas’ voice from 1970s archival recordings for the Oscar-nominated film “Maria” and to prepare dialogue tracks for international dubbing across television catalogs.
NAB Show booth: W3217