How Should Radio Carry Multichannel Audio to Listeners?
One of the most hotly debated elements in digital radio is how (or whether) HD Radio should carry surround ("multichannel") audio. No fewer than four separate proposals have been floated to the industry by their respective vendors, and Ibiquity Digital, as a good steward of the HD Radio technology, is evaluating them as they come along without stating a preference for any particular system. Instead, Ibiquity has left these choices to broadcasters, just as much of the rest of the HD Radio format offers flexibility for the applications desired by individual stations.
For broadcasters to make appropriate choices, however, they will need to fully understand the potential breadth of impact of these decisions. This requires a deep examination of the issues, across all the permutations possible in an environment that will likely remain unsettled for some time.
All of the proposed formats share an interest in providing a multichannel audio service that maintains compatibility with stereo and mono listening, a clear requirement for any broadcast audio system. But there the similarities end, and just how (and how well) this compatibility with all listening modes is achieved varies widely among the systems.
The major distinction among the formats is how the multichannel audio is compatibly encoded in the signal. Of the four proposals to date, three take what can be called a composite approach, while the fourth uses a component method. Traditionally, this nomenclature is used to distinguish systems in which multiple signals are embedded in a single path from those that use separate paths to send multiple signals. In compatible surround, the SRS, Dolby and Neural systems use the composite style, while the MPEG Spatial Coding system chooses a component approach.
The latter uses a data channel (~5 kbps) for communicating parametric steering information to the receiver (separate from its coded audio signal), while the composite systems embed surround information into a stereo audio signal. This implies that composite systems can travel along the same stereo audio paths currently used by broadcasters - or can they?
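The composite/component distinction can be made concrete with a small sketch. The functions below are purely illustrative: the coefficients are generic matrix-surround values, not those of the SRS, Dolby, Neural or MPEG systems, and the steering parameter is a toy stand-in for the real side data.

```python
# Per-sample sketch of the two encoding philosophies. Coefficients are
# generic matrix-surround values, NOT any vendor's actual algorithm.

def composite_encode(L, R, C, Ls, Rs):
    """Fold five channels (LFE omitted) into one Lt/Rt stereo pair.

    Surround content is injected with opposing phase so a matrix
    decoder can steer it back out later; a plain stereo receiver
    simply plays Lt/Rt as ordinary left and right.
    """
    Lt = L + 0.707 * C - 0.707 * Ls   # surround anti-phase in left
    Rt = R + 0.707 * C + 0.707 * Rs   # surround in-phase in right
    return Lt, Rt

def component_encode(L, R, C, Ls, Rs):
    """Send a conventional stereo mix plus separate steering parameters.

    The parameters ride in their own low-rate data channel (~5 kbps in
    the MPEG proposal); stereo receivers ignore them entirely.
    """
    stereo = (L + 0.707 * C + 0.707 * Ls,
              R + 0.707 * C + 0.707 * Rs)
    # Toy steering parameter: how much energy sits in the surrounds.
    front = abs(L) + abs(R) + abs(C)
    rear = abs(Ls) + abs(Rs)
    params = {"rear_to_front": rear / front if front else 0.0}
    return stereo, params
```

Note the key difference: the composite output is a single stereo pair that carries everything, while the component output is an untouched-sounding stereo mix plus data that only a surround receiver consumes.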
Thinking it through
If composite surround encoding is used at the broadcast studio for the main channel audio of an IBOC station, this allows it to use the existing STL, as well. However, this also means that both the analog and digital broadcast signals will include the surround encoding.
While this provides the ability to add surround capability to legacy FM, there may be reasons why this is not desirable (and why it hasn't been done before - which it could easily have been). Here's why:
The intrinsic mechanisms of composite surround systems almost always involve the addition of some out-of-phase material to the stereo signal (beyond whatever may be present in the plain stereo version of the same content, of course). When applied to the FM stereo multiplex, this translates to higher average modulation of the L-R subcarrier, which in some cases can increase the audibility of multipath distortion. In extreme cases, it may also reduce the relative loudness of the mono signal.
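The L-R effect is easy to demonstrate numerically. In the sketch below (illustrative numbers, generic 0.707 matrix coefficient), a centered mono source produces zero difference-channel energy, but the moment anti-phase surround material is matrix-encoded on top of it, the L-R component (and hence FM subcarrier modulation) is nonzero.

```python
import math

def side_level(samples_L, samples_R):
    """RMS of the stereo difference signal, (L-R)/2: the content
    that modulates the FM stereo L-R subcarrier."""
    n = len(samples_L)
    return math.sqrt(
        sum(((l - r) / 2) ** 2 for l, r in zip(samples_L, samples_R)) / n
    )

# A centered mono source: identical left and right, zero difference energy.
front = [0.5, -0.3, 0.8, 0.1]
plain_L, plain_R = front, front

# Same source plus a matrix-encoded surround component, injected
# anti-phase (added to left, subtracted from right).
surround = [0.2, 0.2, -0.2, 0.1]
enc_L = [f + 0.707 * s for f, s in zip(front, surround)]
enc_R = [f - 0.707 * s for f, s in zip(front, surround)]

print(side_level(plain_L, plain_R))  # 0.0
print(side_level(enc_L, enc_R))      # > 0: added L-R subcarrier modulation
```

The same arithmetic explains the mono-loudness point: energy moved into the difference channel is energy the mono sum (L+R)/2 never sees.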
Moreover, multipath's effects on the phase of the received signal can wreak havoc on the subtle phase relationships that carry a composite-surround signal's steering information, leaving the decoded analog surround signal with poor and/or unstable surround imaging. When an analog FM receiver's stereo-blend feature kicks in, it can further destabilize the surround image, if not collapse it outright.
Thus it may be desirable to add composite surround encoding only to the digital signal, keeping the FM analog signal in stereo. But if this course is taken, it puts the composite systems in the same situation as a component approach, i.e., two separate audio signals - stereo (2.0) and surround (5.1) - must be generated and managed by the broadcaster.
This means that, unless a station chose to add surround purely as a full-time automatic upmix (with dubious value), both component and such "separate composite" methods would require dual audio signal paths around the station, and that surround encoding would most likely take place at the transmitter site. For non-collocated stations, dual STL paths also would be needed (one for 2.0 and the other for 5.1, the latter possibly in compressed digital form).
The impact on stations' routing and storage systems could therefore be considerable, regardless of the methodology used to transmit surround content. Stations should understand that "going surround" is not as easy as it might initially appear, and it is a step that should not be taken lightly.
That said, some existing systems can simplify and reduce the cost of these requisite steps. Using Dolby E or an IP audio distribution system allows stations to route 5.1+2.0 audio in real time on a single path, largely over existing wiring infrastructure. The same solutions could be applied to a single digital STL path (either AES3 or IP-based).
Meanwhile, the cost of hard-disk storage continues to drop, and storage capacities per unit of physical space continue to increase. Existing or newly proposed file formats also will allow straightforward and efficient storage and management of surround+stereo audio.
For content that exists only in stereo form, automatic detection and switching systems are being developed that can insert a (pseudo)surround upmix under the station's control - either in real time during broadcast, or when the content is stored to hard disk.
A further advantage of taking the separate 5.1/2.0 approach is the ability to apply audio processing independently and optimally for the two signals (although a single device could provide both functions in the same chassis).
The other path
Of course, with a component surround approach, no such decision on whether to apply surround to analog FM is required. Because a separate steering data channel is needed, the surround will apply only to the digital signal. (Of course, a station could choose to matrix-encode its analog FM independently.)
Originally in this scheme, it was thought that the analog or non-surround digital receiver would simply receive a stereo downmix of the 5.1 audio sent to surround receivers. However, the latest revision of the MPEG Spatial Coding specification allows original stereo to flow to stereo (analog or digital) receivers while parametric steering data is used by surround receivers to decode a 5.1-channel output from the same audio signal.
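The original "downmix for legacy listeners" idea can be sketched with standard fixed coefficients of the kind found in ITU-R BS.775 practice. This is only an illustration of that earlier concept, not the MPEG Spatial Coding specification itself, which instead passes the station's own stereo mix through untouched.

```python
def itu_style_downmix(L, R, C, Ls, Rs):
    """Fixed-coefficient 5.1-to-stereo downmix (LFE omitted).

    Coefficients follow common ITU-R BS.775 practice. Every stereo
    listener hears this derived mix, whether or not the station's
    engineers would have mixed the stereo version that way.
    """
    Lo = L + 0.707 * C + 0.707 * Ls
    Ro = R + 0.707 * C + 0.707 * Rs
    return Lo, Ro
```

The limitation is visible in the function itself: the stereo result is fully determined by the 5.1 mix, which is exactly what the revised approach avoids by letting the original stereo mix travel in the clear and confining the surround reconstruction to the parametric side data.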
This even works when the original stereo and 5.1 mixes are significantly different. If this seems impossible, you're not alone. So next time we'll dig a little deeper into how the MPEG Spatial Coding system works.