Captioned radio is still cooking and is about to take another step forward.
NPR Labs conducted the first live captioned radio broadcast on election night 2008, when NPR’s election coverage was simulcast in captioned-radio format. The lab worked with the International Center for Accessible Radio Technology at Maryland’s Towson University, Boston’s WGBH Media Access Group and Harris Broadcast to provide captioning coverage for five local Public Radio Satellite System stations.
Since then, Towson and NPR Labs have created “Towson University Captioning Solutions.” In 2013, Columbia, Md.-based BTS Software Solutions joined “TUCS” to refine the live captioning and transcription software and build what it says is an expandable architecture.
The original concept was, and continues to be, to help deaf and hard-of-hearing listeners understand what’s on the radio. Dr. Ellyn Sheffield, cognitive scientist and Towson University research professor, developed new processes for the project. She tells me some 23 million Americans have difficulty accessing audio programming because of hearing loss, and that number is projected to grow as the number of senior citizens triples by 2050.
BTS-S2 plans to expand TUCS’ ability to bring what the company says are accurate and affordable captioning and transcription services to market. What used to be called captioned radio is now thought of as audio-to-text conversion.
Sheffield retains her teaching role at Towson and is now also a vice president at BTS-S2. She tells me the university has transferred the TUCS technology to BTS-S2 so the company can develop it into a commercial operation. Now renamed Verb8tm, the service can provide captions for live radio as well as real-time events for academia, corporations and media outlets.
Though NPR Labs is no longer directly involved with the project, BTS-S2 continues to have a close relationship with the broadcaster.
Using Verb8tm, BTS-S2 is providing NPR with transcripts for daily shows including “Morning Edition,” “All Things Considered” and “Fresh Air.” The company is also captioning “Latino USA,” a Futuro Media public radio program.
“Now we’re in a position to start radio captions in earnest for the radio industry,” Sheffield tells me. She notes that in addition to producing a “verbatim” transcript, the company can also provide a summary of the content if a broadcaster wants to transmit information over RDS. “When you have a person listening to a story and summarizing it for RDS,” the process is nearly instantaneous and less labor-intensive than, say, waiting for a full transcript of a congressional hearing.
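One practical constraint on those RDS summaries: an RDS RadioText message carries at most 64 characters, so a longer summary has to be cut into frames. The sketch below, a hypothetical illustration not drawn from the Verb8tm system, shows one simple way to break summary text into RadioText-sized chunks on word boundaries (the function name and sample text are my own).

```python
def to_radiotext_frames(summary: str, frame_len: int = 64) -> list[str]:
    """Split a caption summary into RDS RadioText-sized frames.

    Breaks on word boundaries so no word is cut mid-frame; each frame
    stays within the 64-character RadioText limit (assuming no single
    word exceeds frame_len).
    """
    frames, current = [], ""
    for word in summary.split():
        candidate = f"{current} {word}".strip()
        if len(candidate) <= frame_len:
            current = candidate
        else:
            if current:
                frames.append(current)
            current = word
    if current:
        frames.append(current)
    return frames


if __name__ == "__main__":
    # Hypothetical summary text, for illustration only.
    summary = ("Senate panel debates broadband funding; vote expected "
               "later today after testimony from agency officials.")
    for frame in to_radiotext_frames(summary):
        print(frame)
```

A real encoder would also pad frames and toggle the RadioText A/B flag so receivers know a new message has started, but the word-wrapping step is the part a captioning workflow would supply.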
Using the Verb8tm system, a trained staff member re-voices every word of the program, which is a “significant value-add to simply using a straight voice-to-text algorithm,” according to Sheffield. For example, voice writers choose whom to listen to when more than one person is speaking at once, accurately identify sounds and pick speech out of background noise, she tells me. Staffers also add punctuation so the text is readable, and they paraphrase when speech becomes too quick for a person to read, she adds.
The company currently employs students and plans to broaden its hiring to veterans.
Audio-to-text conversion can help listeners understand the programming, but it can also help the station by making programming more searchable, which can drive more hits to its website, according to BTS-S2.
Company personnel hope to attend the 2015 NAB Show in Las Vegas to speak with radio broadcasters and other companies.