Distorting Group Delay Distortion

A Commentary About Group Delay Left a Ringing in Orban's Ears

Dana Puopolo's Guest Commentary, "Let's Keep AM Sounding Good" (Jan. 19) contains a number of technical errors.

Mr. Puopolo's thesis hinges on two ideas: that what causes 5 kHz low-pass filters to sound less pleasing than 10 kHz NRSC filters is group delay distortion, and that the 5 kHz filter moves this distortion into the frequency range to which the ear is most sensitive. However, when he starts to make technical claims about the nature of group delay and group delay distortion, things go awry.

Break it down

The statement "group delay is exactly what it says: delay" is simplistic. In fact, it is only true when the group delay of a filter is constant at all frequencies; otherwise, the truth is subtler.

Group delay at a given frequency is defined as the negative of the slope of the filter's phase response at that frequency. This definition can lead to frequency ranges where group delay is negative, something common with high-pass filters. Nevertheless, a negative group delay does not imply the output of the filter arrives before the input, which is a physical impossibility.
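Readers who want to check this definition numerically can do so in a few lines of Python with SciPy. In the sketch below, the fourth-order Butterworth low-pass and its cutoff are arbitrary illustrations, not any filter discussed here; the point is only that SciPy's direct group delay computation matches the negative slope of the unwrapped phase response:

```python
import numpy as np
from scipy import signal

# An arbitrary example filter: 4th-order Butterworth low-pass,
# cutoff at 0.25 x Nyquist
b, a = signal.butter(4, 0.25)

# SciPy computes group delay (in samples) directly from the coefficients
w, gd = signal.group_delay((b, a), w=1024)

# The definition: group delay is the negative slope of the phase response
_, h = signal.freqz(b, a, worN=1024)
phase = np.unwrap(np.angle(h))
gd_from_phase = -np.gradient(phase, w)

# The two agree closely (the grid endpoints are excluded because the
# numerical derivative is less accurate there)
print(np.max(np.abs(gd[5:-5] - gd_from_phase[5:-5])))
```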

What is group delay distortion (GDD)? It is the variation in group delay as a function of frequency after we have subtracted as much of a constant delay as possible. For low-pass filters, the filters under discussion here, this constant is usually the group delay of the filter at zero frequency. (A filter has a well-defined group delay at 0 Hz, as unintuitive as this might seem. It takes some calculus to justify this.)
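As a sketch of this bookkeeping (Python/SciPy again; the sixth-order Butterworth and the 44.1 kHz sample rate are assumptions for illustration, not the filters under discussion), group delay distortion can be measured by subtracting the filter's group delay at 0 Hz:

```python
import numpy as np
from scipy import signal

# Illustrative stand-in for a 5 kHz low-pass filter: 6th-order
# Butterworth at an assumed 44.1 kHz sample rate
fs = 44100.0
b, a = signal.butter(6, 5000.0, fs=fs)

w, gd = signal.group_delay((b, a), w=2048, fs=fs)  # gd is in samples

# Group delay distortion: the variation that remains after subtracting
# the constant part (here, the well-defined group delay at 0 Hz)
gdd = gd - gd[0]

to_ms = 1000.0 / fs
print(f"group delay at 0 Hz:         {gd[0] * to_ms:.3f} ms")
print(f"peak group delay distortion: {np.max(np.abs(gdd)) * to_ms:.3f} ms")
```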

Contrary to Mr. Puopolo's assertion that "humans can hear time delay distortions and filter group delay (distortion) quite easily," the technical literature indicates that humans are, in fact, about two orders of magnitude more sensitive to magnitude distortion (what is often informally termed "frequency response") than they are to phase distortion.

Mr. Puopolo goes on to argue that although some "engineers with good intentions" pointed out to him that "modern computer filter design can produce audio filters that have low group delay (distortion) right up to their cutoff frequencies ... the group delay in the (radio's) IF filter can multiply with the delay in the audio filter, causing severe audio artifacts." There are several problems with these statements.

First, it is easy to produce filters in DSP that have no group delay distortion at any frequency. These types of filters are used frequently. Because the group delays of cascaded filters add - they do not multiply, contrary to Mr. Puopolo's assertion - cascading such a filter with the IF filter in a radio will not have the slightest effect on the group delay distortion of the cascade. The group delay distortion of the combined filters will be the same as the group delay distortion of the radio's IF filter.
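Both points are easy to verify numerically: a linear-phase FIR filter has exactly constant group delay (zero group delay distortion), and the group delays of cascaded filters add. In the sketch below (Python/SciPy), the FIR filter and the all-pole resonator standing in for a radio's IF filter are illustrative choices only:

```python
import numpy as np
from scipy import signal

# A linear-phase FIR low-pass: its group delay is exactly constant
# (50 samples for 101 symmetric taps), i.e., zero group delay distortion
taps = signal.firwin(101, 0.4)
w, gd_fir = signal.group_delay((taps, [1.0]), w=512)

# A crude all-pole stand-in for a radio's IF filter (illustrative values)
r, theta = 0.95, 0.3 * np.pi
a_if = np.array([1.0, -2.0 * r * np.cos(theta), r * r])
_, gd_if = signal.group_delay(([1.0], a_if), w=512)

# The cascade of the two filters
_, gd_cascade = signal.group_delay((taps, a_if), w=512)

# Cascaded group delays ADD (they do not multiply)...
print(np.max(np.abs(gd_cascade - (gd_fir + gd_if))))  # floating-point noise

# ...and since the FIR contributes a pure constant delay, the cascade's
# group delay DISTORTION is exactly that of the IF filter alone
print(np.max(np.abs(gd_fir - 50.0)))                  # floating-point noise
```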

In short, Mr. Puopolo's explanation for the unpleasant things we can hear with sharp-cutoff filters in the audio midband does not hold up to technical scrutiny. However, there are certainly problems we can hear, and there must be an explanation.

I believe these filters ring when hit with transient material and this ringing is what we hear as unpleasant. The filter stretches out audio events that the ear expects to be sharply defined and imbues them with a distinct, unnatural-sounding tonality.

What does this ringing have to do with group delay distortion? It turns out that for a given amount of selectivity in the magnitude domain, filters with no group delay distortion have impulse responses that are substantially more spread out in time than filters with the minimum possible phase shift, the so-called "minimum-phase" filters.

In other words, filters with no group delay distortion smear impulses more severely than filters with lots of group delay distortion. Worse yet, filters with no GDD introduce pre-ringing before the energy peak of their impulse response, while the ringing in minimum-phase filters occurs only after the energy peak.
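This is easy to see by comparing impulse responses directly. In the sketch below (Python/SciPy; the tap count and cutoff are arbitrary, and SciPy's homomorphic minimum-phase conversion returns a half-length filter whose magnitude approximates the square root of the original's, which does not affect where the ringing falls), the linear-phase filter's energy peak sits at the center tap with ringing on both sides, while the minimum-phase filter's peak comes almost immediately and the ringing follows it:

```python
import numpy as np
from scipy import signal

# Linear-phase FIR low-pass: 127 taps, cutoff 0.2 x Nyquist (arbitrary)
h_linear = signal.firwin(127, 0.2)

# Minimum-phase conversion (SciPy's homomorphic method)
h_min = signal.minimum_phase(h_linear, method='homomorphic')

# Linear phase: energy peak at the center tap, ringing BEFORE and after it
peak_lin = int(np.argmax(np.abs(h_linear)))
pre_energy_lin = np.sum(h_linear[:peak_lin] ** 2) / np.sum(h_linear ** 2)

# Minimum phase: energy packed at the start, ringing only AFTER the peak
peak_min = int(np.argmax(np.abs(h_min)))
pre_energy_min = np.sum(h_min[:peak_min] ** 2) / np.sum(h_min ** 2)

print(f"linear-phase peak at tap {peak_lin}, "
      f"energy before peak: {pre_energy_lin:.1%}")
print(f"minimum-phase peak at tap {peak_min}, "
      f"energy before peak: {pre_energy_min:.1%}")
```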

Case for compromise

We know from psychoacoustics that the ear has a property called "temporal masking," which means a strong sound (the "masker") is able to prevent the ear from detecting weaker sounds that occur before or after the masker.

This phenomenon is markedly asymmetric. Temporal masking is much weaker for sounds occurring before the masker than after the masker. This means it is a bad idea to introduce pre-ringing in a filter, which is exactly what a filter with no GDD does. So it is reasonable to argue on psychoacoustic grounds that a minimum-phase filter is likely to have less audibly objectionable ringing than a filter with no GDD, which is exactly the opposite of Mr. Puopolo's conclusion.

What, if anything, is the optimum design? Making filter slopes gentler will reduce ringing but also will markedly impact the ability of a filter to prevent first-adjacent interference. If we assume the system specification imposes a minimum selectivity requirement on the filter, how does one shape the group delay?

In Orban's current AM processors, we correct the low-pass filter for group delay distortion up to about 80 percent of its cutoff frequency. This is a compromise between the pure minimum-phase case and the constant group delay case. A filter designed in this manner has pre-ringing of much shorter duration than a filter with no group delay distortion, yet the amplitude of the highest overshoot is substantially less - by almost 50 percent - than the amplitude of the overshoot in a minimum-phase filter.

We consider this to be the best compromise when system specifications require a filter with a cutoff frequency in the range of 5 kHz.

For a 10 kHz cutoff frequency or above, one can make more of a case for a filter without group delay distortion, because these filters have the lowest overshoot amplitude and the higher cutoff frequency is accompanied by a proportionally shorter-duration impulse response.

Orban has long suggested that stations use an NRSC cutoff during the day because studies done at the time of the creation of the NRSC-1 standard indicated there were few geographical areas where first-adjacent interference was a problem during daylight hours. At night, we proposed using 5 kHz, which prevents any first-adjacent skywave interference.

We still think this combination makes the most sense if the goal is to maximize real-world audio quality. The NRSC bandwidth provides marginally higher audio quality than 5 kHz through typical radios during the day, but at night the benefits of reduced first-adjacent interference far outweigh the subtle improvement the NRSC bandwidth would make in basic audio quality.

Simultaneously, we recognize that Jeff Littlejohn has made some valid points regarding improved modulation efficiency when modulation energy is not wasted by transmitting frequencies the average radio cannot reproduce. The NRSC currently has a working group studying the various tradeoffs involved in lowering the transmitted bandwidth, and we hope to have some more definitive answers as this work progresses.