
Vipology Demonstrates New Music Testing Platform at Worldwide Radio Summit

First use of smart speakers for music research

It was only a matter of time. The meteoric rise in smart speaker sales has brought radio, and music, back into the home and made listening a social experience again. Someone was bound to figure out a way to use smart speakers for music research. That day has apparently arrived.

Los Angeles-based technology developer Vipology says it has put the pieces together for a new type of online research. By combining Amazon Alexa, the AI power of IBM Watson and proprietary software, the company claims to have created a platform that brings consumer emotional analysis to music testing. The system will be demonstrated at the Worldwide Radio Summit in Hollywood, Calif., on May 3-4.

Vipology says it has combined its own VS3 smart speaker system with software from recently acquired MusicTesting.com. The company also partnered with Benztown in developing the VS3+MusicTesting.com platform, with Benztown providing a library of thousands of song hooks covering 14 formats.

Vipology recently introduced its smart speaker skills product, the Vipology Smart Speaker System (“VS3”), to radio stations, as consumer adoption of smart speakers continues to grow at record levels. The product launched across 100 U.S. radio stations. VS3 deliverables include quickly securing stations’ brands on smart speakers and working with stations, in a customer-informed process, to maximize their smart speaker placement for cash or barter.

Vipology claims that its VS3 + MusicTesting.com platform can bring music research into listeners’ homes and let them share their passion through emotional responses, scoring that emotional feedback in ways legacy music testing cannot.

It will be interesting to see how this new testing fares in the already competitive music testing ecosphere. Privacy concerns surrounding smart speakers are already being raised. How will the privacy of research subjects be protected? And isn’t most listeners’ response to music emotional? How will this be different, and how well will it track with existing music research methodology?
