
When It Comes to AI, Trust Is So Important

CGI’s Thielen says tools should be used with your station’s reputation in mind

This is one in a series about the applications of AI in radio broadcasting.

Michael Thielen is vice president, consulting services at CGI. He is responsible for the dira radio broadcasting playout and production solution, and is also involved in the company’s AI initiatives and its non-linear planning and production workflows.

Radio World: How are artificial intelligence technologies being deployed for radio at CGI?


Thielen: We offer several relevant products. The first is dira, where AI is used in the management of audio and other files. It improves the work of journalists in creating audio material that plays on the air. Another is OpenMedia, a newsroom system that journalists use to gather information. AI is used for researching material, looking at the wires, scrolling through the internet for material and then writing text. 

RW: Radio awoke to the potential of AI when ChatGPT hit the marketplace. How would you assess the impact of AI in radio so far?

Thielen: ChatGPT of course was the first big breakthrough in generative AI, used for generating content and helping solve problems in the production process. 

We were using AI prior to that for purposes such as speech-to-text transcription. That was a small revolution in itself, making the content of archives more accessible. In the past, users had to transcribe material manually; they can now search for material much more easily.

But generative AI truly is a revolution in the market, opening so many possibilities for journalists and creative people. It can help me generate content in various languages, it can help me generate an article with a different kind of style or tone — it brings so many opportunities.

RW: Can you expand on the uses with dira?

Thielen: AI is used for example in automatic audio enhancement, so journalists can take problematic audio material and make it sound much better through an automated process. 

We also can offer artificial voices, and the user can generate a fully artificial presenter, or clone voices from presenters. This can be helpful, say, if there’s an important traffic problem, a public health announcement or news about a fire. You might not have a presenter in the studio, but with the AI tool you can bring these stories to the listener with synthesized voices.

We’re also testing the ability to generate different types of content based on the format and tone of a given station. Is the presenter a serious person, or should they have a cheerful tone as with a magazine show? Commercial stations in particular may be interested in these kinds of options.

We also partner with a company that does live transcriptions. Deaf listeners may not be able to hear you, but they’re still interested in your news and content. Now you can offer live subtitling for your radio program, which is great for accessibility. In fact you can offer the transcription in multiple languages.

Also, these transcription services generate tons and tons of text, which we store in our dira system. This creates a time-stamped record of what happened on the radio. 

AI then can be used to generate analytics about what has been played. You might use this for music reporting or for analyzing your advertising content or the news stories that have aired. And you can analyze your competitors’ programs as well as your own, without paying a marketing agency to have students sit down and transcribe programs.

It’s fascinating what the technologies can do. At the same time, personally I sometimes want to step back and ask about unintended consequences of AI. For instance, when you look at web pages, at TikTok and other social media, you see a lot of AI-generated content, and much of it is not labeled as such. What’s real and what’s not? People tend to believe what they see, but with current models of AI, it’s very hard to know whether a video or photo is real.

I mention this because trust is so important to us in the broadcast industry. These are questions broadcasters must weigh too.

RW: Are there obstacles or blockers that broadcasters should keep in mind?

Thielen: Users certainly should be aware of privacy compliance. This is not unique to AI, but it involves questions for instance of where content is stored in the cloud. Are data protection rights being adhered to? If I’m using not my own language model but something provided by big tech companies, what really happens with the data I use? What kind of training might it be used for? For broadcasters, how do you ensure that the language models are not trained into becoming biased or unethical?

RW: Other thoughts?

Thielen: Our aim in implementing AI in our radio production suite is to help build workflows that ease the work of the journalist or content creator so they can be more effective and work better and more quickly. We want to help get rid of boring — call them stupid — tasks that can be automated or done easily by AI. We’re not a company that’s just looking to bring the cheapest solution to the market.

Tools like ChatGPT are great for building text and suggesting content. But in our way of thinking, everything that’s done with AI should go through the eyes of a real person who will read and approve it. We think it’s important that broadcasters continue to be seen as trusted advisors, as trustworthy people. For many of our customers, their brands are among their most important business assets. They need to protect that reputation, and they can only do it if there is a human in the loop.

Read more about applications of AI in radio in a free Radio World ebook.

