
AudioShake Unveils AI Voice Separation Tool

Multi-Speaker model isolates individual voices in complex audio

AudioShake has announced a new AI model that it says can separate overlapping voices in a recording.

The music technology company stated that Multi-Speaker is designed to split overlapping speakers into individual audio tracks. In a press release, it described the model as the first of its kind to deliver multi-speaker separation in high-resolution audio.

[Video: A demonstration of AudioShake’s stem instrument separation tool.]

AudioShake explained that the central challenge is overlapping speech. Its Multi-Speaker model uses AI to handle environments such as crowded dialogue scenes, panel discussions and fast-paced interviews, separating each voice into its own stream.

The company highlighted that users can isolate individual speakers to improve transcription and caption accuracy, as well as clean up overlapping dialogue for dubbing and localization.

The model can be applied in broadcasting, film and transcription.

AudioShake also said that Multi-Speaker is being used to isolate the voices of ALS patients and feed them into voice cloning models developed by ElevenLabs. This allows patients to “speak” in their own voices, even after they lose the ability to speak on their own.

[Check Out More Products at Radio World’s Products Section]
