Stan Walbert is CEO and marketing director of MultiCAM Systems. The company uses an AI algorithm to choose the best video camera presets based on who is speaking, then emulates how a human operator would switch. A longer version of this appeared in the Radio World ebook “AI Comes to Radio.”
Radio World: What does the term “artificial intelligence” mean for your company and its products for the radio market?
Stan Walbert: Radio stations now consider themselves "content creators," and they need to deliver content in the most engaging form for their audience. Increasingly, that means video first.
Since stations don't have the resources to do everything themselves, they need to rely on AI to help create natural-looking video that engages the audience. The AI must act as a human would to keep the content interesting. The shots must look natural. What stations really need to avoid is something boring with very few shot angles, or something where the camera movements are jerky.
There is a big difference between simple rule-based algorithms, macros and scripts on the one hand, and AI on the other. Only AI can produce video that makes the show look natural. When you watch stations that use MultiCAM to create their visual experience, you end up focusing on the video content rather than the fact that it is "video for radio." That is because the AI helps the station create something that would normally require an entire camera crew and a director.
Our stations are content creators, no matter what format they deliver. This technology gives radio stations a major "assist" toward extremely well-produced video content.
RW: How is this different from other products or technologies on the market?
Walbert: MultiCAM is the only company that uses AI for visual radio. Our AI reproduces what directors do when they produce live video. It is based on our more than 10 years of experience in broadcast production; that is how we developed the AI.
RW: Give an example of how the use of this AI changes the workflow for a typical user of your products.
Walbert: With MultiCAM radio, you can create entire programs without additional staff being involved in the day-to-day workflow. This is groundbreaking because it allows radio stations to compete in content creation in both video and audio. In the past, without our technology, a station might have had a static camera shot or a few camera movements. The novelty of that wears off quickly.
In my opinion, what we produce with automation almost works better than having someone there. The reason is that AI allows the cameras to respond immediately; frankly, no human could keep up with that. AI also lets the station avoid what we call "aquarium visual radio," where everything is one static shot.
RW: Describe the development process.
Walbert: We spent a lot of time thinking about how we ourselves did this in our production work. For example, as humans we would never follow one shot with another from the same angle. Because we come from a broadcast production background, we studied extensively how these shots are made, emulated the rhythm of how a director acts, and implemented that, combining it with our knowledge of robotics and automation.
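MultiCAM has not published its algorithm, but the two rules Walbert describes, cutting to the active speaker and never repeating the previous shot's angle, could be sketched as a simple heuristic. Everything below (the preset names, data structures, and function) is a hypothetical illustration, not MultiCAM's actual implementation:

```python
import random

# Hypothetical camera presets: for each speaker, a list of candidate
# shots as (camera, angle) pairs. Purely illustrative data.
PRESETS = {
    "host":  [("cam1", "close-up"), ("cam2", "wide"), ("cam3", "side")],
    "guest": [("cam2", "wide"), ("cam4", "close-up"), ("cam3", "side")],
}

def pick_shot(active_speaker, last_angle):
    """Choose a shot for the active speaker, excluding the angle used
    in the previous shot so consecutive cuts don't look repetitive."""
    candidates = [s for s in PRESETS[active_speaker] if s[1] != last_angle]
    # Fall back to any preset if filtering left nothing.
    return random.choice(candidates or PRESETS[active_speaker])

# Simulate a short segment: cut each time the speaker changes.
last_angle = None
for speaker in ["host", "guest", "host", "host", "guest"]:
    cam, angle = pick_shot(speaker, last_angle)
    print(f"{speaker}: cut to {cam} ({angle})")
    last_angle = angle
```

A real system would drive this from speech detection (knowing who is on-mic) and add the pacing and movement rules Walbert mentions, but even this toy version shows why a scripted macro that always uses the same shot order looks mechanical by comparison.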
We are at the very beginning of where this technology can take this industry.