Testing: One, Two, Three

From the auditorium music test to online surveys, programmers need listener input

Memories differ, and some believe it started even earlier. But by 1981, a small group of music radio programmers was whispering about a powerful new secret weapon: the auditorium music test, also known as the AMT.

Hal Rood moderates a panel at the Worldwide Radio Summit in Los Angeles.

Prior to AMTs, top 40 programmers had been experimenting with callout research. KRIZ Phoenix Program Director Todd Wallace confided to his friend Jeff Salgo at KMJZ in San Diego that his staff had been systematically calling listeners to find out which hits to play in heavy rotation. Salgo tried it, too, and his 1,000-watt AM station quickly defeated the legendary KCBQ.

When Salgo was later hired to program an adult-targeted FM in town, KBZT “K-Best 95,” he realized that callout respondents wouldn’t have the patience to rate hundreds of song hooks over the phone. He asked his GM, “What if we paid people $25 to come to a hotel and rate a bunch of songs?” According to Salgo, now market IT manager for Entercom in Los Angeles, the impact was amazing: “K-Best shot up to #1 in every key demo.”

Around the same time, having learned that The Research Group was beginning to offer AMTs to client stations, Cox Radio boss Nick Trigony gave Research Director Roger Wimmer a million-dollar budget to “test music testing” and told him, “You’ve got a Ph.D., figure it out!” He did, examining ideal sample sizes, respondent incentives, song hook length and nearly every other variable possible. Many of his refinements to the methodology are still in use.

Jeff Salgo signs on KKDJ(FM) — later renamed KIIS — in Los Angeles, April 15, 1971.

ONLY AS GOOD AS THE INGREDIENTS

Although music testing has changed with the times, a core principle remains: Every good test starts with a high-quality sample.

Los Angeles programmer Jhani Kaye — famous for his attention to detail at the helm of stations like KOST, KBIG and KRTH — attended all of his stations’ music tests to see the sample with his own eyes, ensuring that every respondent truly passed the screener.

He has his share of horror stories. Once, a group of men attending an AMT admitted that they were often asked to show up when turnout looked thin, and that they used the fees to raise funds for their barbershop quartet.

Ken Benson of research company P1 Media Group warns, “Many AMT participants are research pros. The pool of people willing to participate is small and getting smaller.”

Benson says that’s why the cost of recruiting and incentivizing participants has risen to as high as $250 each. Multiply that by a sample size of 90 or 100 (that’s $22,500 to $25,000 in incentives alone), and you see the problem.

Ken Benson at Virgin Radio in Toronto.

In response, Strategic Solutions Research, like many providers, has developed an online survey system that saves money on meeting rooms and moderator travel by allowing respondents to participate at home.

“We’ve been doing online testing for 10 years now,” says Executive VP Hal Rood. “We have a variety of techniques to ensure the person is who they say they are. Both types of respondents [online and in-person] are people who are also willing to participate with Nielsen. The key is the quality of the screening, the questionnaire and the data’s interpretation.”

Cutting costs even more, Benson’s company has developed a trademarked “CSMT — Crowd Source Music Test.” Like many innovations, it was born of necessity. A small-market cluster couldn’t afford to recruit a test, so, “We reluctantly decided to use the stations’ databases and run ads on the cluster, but we made no guarantee that we would have actionable data at the end. The recruit wound up being a tremendous success.”

They’ve since added social media posts and other sources to their “crowd source” techniques. After running parallel tests in various formats and market sizes, Benson is confident in the results.

L.A. programmer Jhani Kaye.

Strategic Solutions Research has also used databases and social media to recruit, but Rood is cautious. “The best research investment is in actively recruiting consumers so you reach beyond the super P1s who have signed up for your database,” he says.

Wimmer takes no issue with new recruiting sources, as long as the screener is adhered to rigidly. Also important, he says, is that you “do an analysis of every person’s scores in relation to the standard deviation of the test. Any respondent’s data whose scores are above or below the standard deviation must be thrown out.” In other words, your researcher should be challenged not to use “outlier” data that falls too far from the mean.
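The article doesn’t spell out exactly which statistic gets compared against the standard deviation, so what follows is only a minimal Python sketch of one plausible reading: treat each respondent’s average rating as their score, and drop anyone who lands more than one standard deviation from the sample mean. The function name, the one-SD threshold and the ratings data are all illustrative assumptions, not Wimmer’s published method.

```python
import numpy as np

def drop_outlier_respondents(scores: np.ndarray, n_sd: float = 1.0) -> np.ndarray:
    """Keep only respondents whose average rating lies within n_sd
    standard deviations of the mean of all respondents' averages.

    scores: respondents x songs matrix of ratings (e.g., a 1-7 scale).
    This is one plausible reading of Wimmer's rule; the exact statistic
    he compares against the standard deviation is an assumption here.
    """
    respondent_means = scores.mean(axis=1)    # each person's average rating
    mu = respondent_means.mean()              # sample-wide average
    sd = respondent_means.std(ddof=1)         # sample standard deviation
    keep = np.abs(respondent_means - mu) <= n_sd * sd
    return scores[keep]

# Hypothetical example: six respondents rating five hooks on a 7-point scale.
ratings = np.array([
    [5, 6, 4, 5, 6],
    [4, 5, 5, 4, 5],
    [6, 5, 6, 5, 4],
    [1, 1, 2, 1, 1],   # rates everything low; falls outside one SD, dropped
    [5, 4, 5, 6, 5],
    [7, 7, 7, 7, 7],   # rates everything high; falls outside one SD, dropped
])
clean = drop_outlier_respondents(ratings)
print(f"Kept {clean.shape[0]} of {ratings.shape[0]} respondents")  # Kept 4 of 6
```

A researcher might instead screen per-song scores or choose a wider cutoff; the point is simply that once the rule is pinned down, this kind of outlier screen is easy to automate.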

Wimmer is also concerned about today’s widespread use of electronic dials in music testing. His analysis shows that respondents make fewer errors when they use a simple 7- or 10-point scale. “The average person sometimes finds the dial technology difficult,” he says.

Roughly 40 years after the advent of music testing, Kaye points out that music radio still can’t be programmed by science alone. It’s also an art form.

“Music testing is extremely valuable, but it can’t be a bible,” says Kaye. “A good programmer will always know his/her audience well and be able to read — and to a certain degree — predict what the audience would like.”

Dave Beasing recently had the sad duty of signing off L.A.’s “100.3 The Sound” after a nine-and-a-half-year run, divested as part of Entercom’s merger with CBS Radio. The award-winning programmer and consultant is now busy launching his new on-demand audio company.
