
Suess on the Myriad Uses of AI in Media

He discusses strategic approaches to artificial intelligence

Kyle Suess

Kyle Suess is co-founder of Amira Labs. During the NAB Show he will give a talk as part of the SBE Ennes Workshop on April 21 about “Myriad Uses of AI in Media,” including for radio.

Radio World: What does your company do?

Kyle Suess: Amira Labs builds AI software for broadcast and media teams to detect, diagnose and help resolve content issues in real time before viewers notice. We automate audio/video QC, compliance and language/caption checks across live and VOD workflows. Our solutions are deployed on-prem or in the cloud, including fully air-gapped installations where models run locally with no third-party APIs required.

RW: What’s your background?

Suess: My background is in building software products. I was drawn to the blend of tech and media starting in college in 2013, while working at a startup that was commercializing natural language processing research for multi-language translation and metadata tagging of videos from YouTube, news publishers and other online platforms.

That was the spark that led me to working at another startup, Grafiti, where my Amira Labs co-founder Stefan and I leveraged machine learning to catalog thousands of graphics and charts to make it easy for journalists and news media to weave them into stories.

These experiences motivated me to get more involved in SMPTE, to learn from those who know more than I do, and to start building useful tools for broadcasters. Our first Amira Labs product, designed for scalable, low-latency captioning, translation and language identification, won NAB’s PILOT Innovation Challenge award.

RW: Broadly speaking, what are examples of how AI is being used in media now?

Suess: Captioning is the big one that many people have seen by now. There are a lot of captioning choices on the market, though be mindful of aspects like language support, latency and usage costs when captioning many feeds for long stretches of time.

Clipping highlights, content tagging and dubbing/AI voiceovers are other top examples. These applications of AI help with quickly generating highlights to post across social media, analyzing saved files to generate metadata for easier searching in media asset management (MAM) systems, and generating synthetic voices to narrate a script or speak in another language.

From a pragmatic standpoint, AI is widely used in media as a service delivered through one of the “Big 3” providers, Google (Gemini), OpenAI (ChatGPT) and Anthropic (Claude), for typical everyday tasks like debugging networking issues, generating show rundowns, analyzing advertising data, etc.

This works at an individual level, but can be very expensive and limiting at scale, especially when actually involving content — audio streams, media streams, codecs, containers. For a lot of media companies, the last few years have involved “R&D science projects” relating to incorporating AI.

An infographic headlined "Building Connected Agent Ecosystems"
AI protocols that Suess will discuss for media use cases.

What I will highlight is bringing an engineer’s mindset to strategically approaching AI and navigating how to build with it, beyond R&D. It’s important to be cognizant of the bigger picture and be calculated with assessing options when making AI decisions. There’s so much innovation happening nearly every week in the open-source world. I’ll highlight some of the most impactful and useful projects for media organizations.

RW: Specific to Radio World readers, what instances can you describe?

Suess: Translation of radio programs from English into other languages, done locally by uploading a script. The motivation for this use case comes from working with a radio station in Kansas that wanted to reach more Spanish speakers and automatically translate its English programs, while still making them sound natural rather than robotic. This can go beyond Spanish to other languages catering to the community demographics of different radio markets, like Chinese in the Bay Area, Vietnamese in Orange County, Arabic in Detroit, etc.
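As a rough illustration of the kind of locally run workflow described here (this is a minimal sketch, not Amira Labs code; `translate_line` is a hypothetical placeholder for an on-prem translation model), a script-translation pass might preserve speaker cues and translate only the spoken portions:

```python
# Sketch: batch-translate an uploaded English program script locally.
# translate_line() is a stand-in for a locally hosted translation model
# (no third-party API); here it only tags the text so the surrounding
# pipeline can be demonstrated end to end.

def translate_line(text: str, target_lang: str) -> str:
    """Placeholder for a local model call, e.g. an on-prem NMT engine."""
    return f"[{target_lang}] {text}"

def translate_script(lines, target_lang="es"):
    # Keep cue markers ("HOST:", timestamps) intact and translate
    # only the speech after the first colon, so the translated
    # script lines up with the original rundown.
    out = []
    for line in lines:
        if ":" in line:
            cue, speech = line.split(":", 1)
            out.append(f"{cue}: {translate_line(speech.strip(), target_lang)}")
        else:
            out.append(translate_line(line, target_lang))
    return out

script = ["HOST: Good morning, Kansas.", "GUEST: Thanks for having me."]
print(translate_script(script))
```

Swapping the placeholder for a real locally hosted model keeps the whole pass air-gapped, matching the on-prem deployments described above.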

Another use case is real-time content classification and segmentation of radio broadcasts.

Consider that a major U.S. radio broadcaster has multiple programs running simultaneously and they want to listen and classify different segments of the programs, or conversations if it’s like a podcast. This is where AI can be useful to easily save snippets of content that could be repurposed for a multitude of uses, without having to put in hours and hours of manual effort.
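One way to picture the segmentation step is as a classifier running over timestamped transcript chunks. The sketch below uses invented keyword lists as a stand-in for a trained model, purely to show the shape of the workflow (the category labels are illustrative, not from the interview):

```python
# Sketch: classify timestamped transcript segments so reusable
# snippets can be pulled out without hours of manual review.
# The keyword sets are illustrative stand-ins for a real classifier.

CATEGORIES = {
    "sports": {"game", "score", "playoffs"},
    "weather": {"forecast", "storm", "temperature"},
    "ads": {"sponsor", "discount", "offer"},
}

def classify_segment(text: str) -> str:
    words = set(text.lower().split())
    best, hits = "talk", 0  # default bucket for general conversation
    for label, keywords in CATEGORIES.items():
        n = len(words & keywords)
        if n > hits:
            best, hits = label, n
    return best

def tag_segments(segments):
    # segments: list of (start_seconds, end_seconds, transcript_text)
    return [(start, end, classify_segment(text)) for start, end, text in segments]

demo = [
    (0, 30, "The playoffs game ended with a record score"),
    (30, 45, "This hour is brought to you by our sponsor with a special offer"),
]
print(tag_segments(demo))  # → [(0, 30, 'sports'), (30, 45, 'ads')]
```

The tagged (start, end, label) tuples are what would let a MAM system save and retrieve snippets for repurposing across many simultaneous programs.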

RW: Much of the attention around uses of AI focuses on negative impacts on human-based workflows. What’s your view?

Suess: First, I think it’s a valid concern and I wouldn’t dismiss it. We hear the hype in the news around the gold-rush and efficiency-multiplier aspects of AI. Sometimes it sounds like CEOs are reimagining a 1960s “Twilight Zone” episode where an enterprise turns itself into a workforce of machines overnight.

I think AI is a great enabler and augmenter that saves time on the work we struggle with and don’t look forward to doing. However, I’m not sold that positioning AI as a full replacement for humans is the next best move.

Screenshot of a multiviewer mosaic with an AI agent interface to obtain real-time understanding for user-requested live video feeds.

There’s a divergence between business aspirations and reality, and the reality of the situation is there’s still so much nuance, tribal knowledge and (let’s face it) chaos involved in making media happen that it seems short-sighted to sacrifice those built up in-house advantages.

I think the biggest gains will come from equipping employees with AI tools that bring people and technology together to achieve the more productive outcomes everyone wants.

In thinking about the impact of AI on jobs, there’s a great video from 1979 on YouTube of an interview about the impact of computers. If you replace “computer” with “AI,” the same thoughts we are grappling with today seem not so different than those of 47 years ago.

Perhaps looking back at the past will help offer informative moments of clarity for what we should really be doing with AI when going forward.

Amira Labs will be in booth W2217, sharing space with Open Broadcast Systems in W2219.
