Any product similar to Aphex Compellor?

Are you saying that cascading analog VCA/Opamp devices don't contribute to overall audio path noise?
No, I didn't say that. Everything in the audio path contributes. But that's an absolute statement, not a practical one. In practice, the contribution of any device is relative to its actual noise floor and spectrum.

But since you brought it up, there's an opamp or three in every ADC and DAC keeping them all at a real 20 bit DR, even if they produce 24 bit words. That's in any modern digital system. Any opamp circuit with today's chips will have a noise figure a dB or two above theoretical thermal noise for the input impedance. You can cascade them; equal but random noise adds 3dB each time. So yes, they contribute. But in practical terms, still not the big factor, and not a big factor at all in the received S/N. I haven't found VCAs particularly noisy either. Even the dbx202 from 40 years ago was quieter than required for FM by quite a margin. That 45 year old 5534 is darned quiet enough, again, just a few dB off theoretical thermal noise, and there are better today in just about everything.
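To put rough numbers on that, here's a quick back-of-the-envelope sketch of my own (not from any datasheet): the theoretical thermal noise floor for an assumed 600 ohm source impedance over a 20 kHz bandwidth, and where a stage with a couple of dB of noise figure would sit.

```python
import math

def thermal_noise_dbu(r_ohms, bw_hz=20_000, temp_k=290):
    """Johnson noise of a resistance over a bandwidth, in dBu (0 dBu = 0.7746 Vrms)."""
    k = 1.380649e-23  # Boltzmann constant, J/K
    v_rms = math.sqrt(4 * k * temp_k * r_ohms * bw_hz)
    return 20 * math.log10(v_rms / 0.7746)

# Illustrative assumptions: a 600 ohm source impedance and a ~2 dB noise figure.
floor = thermal_noise_dbu(600)
print(round(floor, 1))      # about -124.9 dBu theoretical for the source alone
print(round(floor + 2, 1))  # about -122.9 dBu for a stage "a dB or two above" that
```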

But this is a discussion of systems, and now we're at the component level. That's not where the issues are. A broadcast system with a noise floor at -60dB re 100% is not suffering because of cascading opamps or VCAs. They are in complete circuits and applications with the capability of failure and, more likely, misadjustment.
Then I maintain they shouldn't kid themselves that they're doing everything to improve their broadcasts. But that's largely part and parcel of radio anyway. It's typically done on the cheap.
Kidding themselves is an entirely different matter.
Competitive with similar radio stations? Or competitive quality as compared with the new world of competition from streaming?
I meant competitive in terms of overall sound quality. Streaming, as you well know, is a completely different animal. We don't need to get into it, but you know the differences, like no pre-emph, bit-rate reduction/compression, and relatively little usage in cars.
 
I meant competitive in terms of overall sound quality. Streaming, as you well know, is a completely different animal. We don't need to get into it, but you know the differences, like no pre-emph, bit-rate reduction/compression, and relatively little usage in cars.
But for as much as radio folks don't want to admit it, streaming is the new competitor, not just the other radio stations. That's just one reason why stations that insist on aggressive or cascading analog audio processing to sound louder, fatter, whatever, than the radio competition, are just hurting themselves with an audience who can tell the difference.
 
But for as much as radio folks don't want to admit it, streaming is the new competitor, not just the other radio stations. That's just one reason why stations that insist on aggressive or cascading analog audio processing to sound louder, fatter, whatever, than the radio competition, are just hurting themselves with an audience who can tell the difference.
Well, I'll certainly admit streaming is the new competitor, no problem there. It's just not in cars fully yet, but will be soon. However, the same listening-environment noise floors, and the need for controlled dynamic range, still exist for streaming audiences. And when streaming fully penetrates the automotive market, the two use-cases will be nearly identical. So what do you do? Streaming doesn't have an FCC modulation limit that is slightly spongy, or pre-emph. Instead it has this 0dBFS thing, cheapo DACs that crackle when hit with intersample overs, and content distributors like YouTube, with averages 10dB higher than anyone else, as one of your "competitors". What the heck? How is this better? Ok, no pre-emph is better. Lossy compression is not.
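For anyone who hasn't run into intersample overs, here's a small illustrative sketch of my own (assuming numpy, and not tied to any specific DAC): every stored sample sits exactly at 0 dBFS, yet the reconstructed waveform peaks about 3 dB higher, which is what trips up cheap converters.

```python
import numpy as np

fs, N = 48_000, 4096
n = np.arange(N)
# A tone at exactly fs/4 with a 45-degree phase offset: every stored sample
# lands at +/-0.707 of the true waveform peak.
x = np.sin(2 * np.pi * (fs / 4) * n / fs + np.pi / 4)
x /= np.max(np.abs(x))                  # "normalize" so the samples just touch 0 dBFS

# Band-limited 8x oversampling via FFT zero-padding approximates what the
# DAC's reconstruction filter actually has to reproduce.
M = 8 * N
Xp = np.zeros(M // 2 + 1, dtype=complex)
Xp[: N // 2 + 1] = np.fft.rfft(x)
y = np.fft.irfft(Xp, n=M) * (M / N)

print(round(20 * np.log10(np.max(np.abs(x))), 2))  # sample peak:  0.0 dBFS
print(round(20 * np.log10(np.max(np.abs(y))), 2))  # true peak: about +3.01 dBFS
```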

Agreed that those with aggressive cascading of any audio processing, digits or not, with the goal of sounding louder, fatter, whatever, are in fact hurting themselves and others. But the audiences of streams vs air aren't really that different in their ability to tell (or care) there's a difference. Remember, in spite of all our efforts, content is still King. They'll listen right through all the crud to hear the content. It's not that they don't care at all, of course. Given similar content, then we have a real fight, and that's where streaming processing needs front and center attention...and doesn't get it much of the time. Did you slap that dusty old analog box on your stream encoder's input? Umm....

Your competitors are no longer limited to a coverage area, they're global. Your pie slice can get pretty small. A careful use-case study is more important than ever, and once it's understood who the audience is and where they listen we can process for that target. And once we have audience data, we might actually be able to justify budget for processing the stream better. And then, finally, it makes complete sense to get that old junker audio processor you took off the shelf and put on the stream encoder out of the chain and get real with it. But we may not be there yet for all stations. The stream stats aren't always very big. I have one client with about a 10:1, often greater, air:stream ratio, but another with a stream audience that is more than 30% of the air audience daytime, 100% at night when they're off the air. They built that because they're a part-time language-specific broadcaster, and when they go off air, they're still streaming 24/7, so probably not comparable, but we do pay close attention to processing that one....digitally, BTW, with multi-band compression, very little peak limiting and no clipping. Gasp. Ok, the music is already clipped. These kids today.

The darned loudness war isn't over, though. One of my clients recently complained that their competitor's stream was louder than theirs. And it was. And smashed to death, eye-tearing distortion, etc., as if it was AM broadcast but with full bandwidth. It took using all the old arguments to calm them down (listeners have and use their volume controls, TSL links to fatigue, blah blah, same old stuff) to get them to accept being second, but clean and listenable. That, and a 2dB loudness bump. Oh well. It never ends, this crap.
 
But for as much as radio folks don't want to admit it, streaming is the new competitor, not just the other radio stations.
In fact, one "frequent poster" just stated that they had been kicked off a Facebook group (Ouch! That hurts!) for saying streams "are not radio". When most of the population thinks any audio source is "radio" then those of us who used to think differently have to adapt.
 
Another part of streaming vs FM Analog/Digital comparative loudness is to what extent 0 dBFS in streaming sources produces the same level in the receiving device as 100 percent modulation in analog FM. That is, the listening device used by the audience should produce exactly the same output level from every delivery method when each delivery method is at maximum peak level.

As I recall, the IBOC digital broadcasting product marketed as HD Radio, in its original specs, indicated that 0 dBFS in HD Radio would play 6 dB louder than 100 percent modulation in analog FM.

I truly hope I was mistaken, and this is not the case.

People who adjust processing know that if you have 6 dB more final peak level "headroom" than competitors, that is not a fair fight. Is it correct that early codecs and low bit rate systems (such as HD Radio) were not well behaved near 0 dBFS? FM analog broadcasters are capable of successfully going to within a couple tenths of a dB below 100 percent modulation.

The above is comparing streaming level with FM analog level. As far as comparing streaming stations to each other, I think a key point is what happens on the back end, downstream from the content creator/broadcaster. It may be a challenge to find out what is truly going on when your content is sent to a streaming distribution provider.

OK, so HD Radio is about 96 Kb/s and streaming might be 256 Kb/s. Someone told me streaming feeds may frequently be less than 100 Kb/s. Streaming providers say their codecs are amazing, perhaps lossless, the customers are thrilled. We are advised to not send square waves.

What does this mean?
 
In fact, one "frequent poster" just stated that they had been kicked off a Facebook group (Ouch! That hurts!) for saying streams "are not radio". When most of the population thinks any audio source is "radio" then those of us who used to think differently have to adapt.
Kicked them off? That was harsh. Must have been some sort of a streaming group.
 
Regarding FM analog, HD Radio and streaming: it is interesting that a system limited primarily by our individual actions and radio propagation is considered inferior to a system with significant built-in compromises that every user must accept.

How you make your FM analog station sound is your choice. You may not have much choice with digital distribution. Despite this, we will do the best we can.

This line of thinking applies with vehicle manufacturer digital dashboards and proprietary entertainment centers. If a device used by the audience acts as a gatekeeper, and re-processes and alters your content (perhaps adding commercials), you are losing control of your product and the experience of your audience.
 
Another part of streaming vs FM Analog/Digital comparative loudness is to what extent 0 dBFS in streaming sources produces the same level in the receiving device as 100 percent modulation in analog FM. That is, the listening device used by the audience should produce exactly the same output level from every delivery method when each delivery method is at maximum peak level.

As I recall, the IBOC digital broadcasting product marketed as HD Radio, in its original specs, indicated that 0 dBFS in HD Radio would play 6 dB louder than 100 percent modulation in analog FM.

I truly hope I was mistaken, and this is not the case.

People who adjust processing know that if you have 6 dB more final peak level "headroom" than competitors, that is not a fair fight. Is it correct that early codecs and low bit rate systems (such as HD Radio) were not well behaved near 0 dBFS? FM analog broadcasters are capable of successfully going to within a couple tenths of a dB below 100 percent modulation.
It was a lost battle from the beginning. IBOC/HD Radio didn't bother to establish a loudness reference (LUFS), probably because radio predated loudness references, so it's wild. The levels between HD and analog failover are supposed to match, but they never really could because the two are processed differently by requirement and by desire. It's only close.

Streaming is generally sort of standardized to -16LUFS, but the different platforms bend that around quite a bit to the point where there is no repeatable standard. AES has recommended -16LUFS for music, but platforms either auto-normalize to whatever standard they like, or probably worse, let the user pick. Spotify is user selectable from -11 to -23LUFS, with the default at -14LUFS. YouTube forces normalization, you have no choice.

And then there's music downloads, if anybody actually buys them anymore, which are all over the place. The whole LUFS thing was supposed to mitigate this mess. Instead, it's worse. TV had CALM, but it's not enforced, so....off we go again. Home video and TV audio vs streaming is a jump out of your skin difference. Which is odd, because Dolby did a lot of leg work with Dialnorm, which streaming platforms ignore.

Look, if the entire industry won't even standardize the loudness reference levels for streaming, then there's no hope of matching streaming to HD Radio with a fixed gain offset. Oddly, FM and AM analog are probably more consistent because everybody is...um...competitive, and has that FCC limit. So the way to do it is to sample the stream, correct its level based on LUFS, and store that offset in a radio preset so it's right the next time you mash that button. Heck, do that with everything, AM, FM, HD, the works, and the loudness war is then officially over on radio.
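Just to make that preset idea concrete, here's a minimal sketch of how a stored offset could work, assuming the measured values come from some ITU-R BS.1770-style meter on the receive side. The -16 LUFS target and the numbers below are made-up examples, not standards.

```python
TARGET_LUFS = -16.0   # an assumed listening target, not a mandated standard
presets = {}          # preset name -> stored gain trim in dB

def learn_preset(name, measured_lufs):
    """Store the trim that brings this source to the listening target.
    measured_lufs is assumed to come from a BS.1770-style loudness meter."""
    presets[name] = TARGET_LUFS - measured_lufs

def recall_preset(name):
    """Gain to apply the next time the listener mashes this button."""
    return presets.get(name, 0.0)

learn_preset("FM 101.1", -9.0)    # a loud, heavily processed FM: trim -7 dB
learn_preset("Stream A", -18.5)   # a quieter stream: boost +2.5 dB
print(recall_preset("FM 101.1"), recall_preset("Stream A"))
```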

In the end, there's this thing on the play side, a volume control, and listeners use it anyway.
The above is comparing streaming level with FM analog level. As far as comparing streaming stations to each other, I think a key point is what happens on the back end, downstream from the content creator/broadcaster. It may be a challenge to find out what is truly going on when your content is sent to a streaming distribution provider.
The streaming systems I'm familiar with put the encoder at the station, and the final bitrate is selected by applying a formula that ends up at the cost for total bandwidth: the number of listeners per hour at a given bitrate. You have control, but it's control that relates to cost. And the original stream is the top of the ladder; it won't get better downstream. So a talk station might choose 64Kb/s mono and save some cost per listener, where a music station might want 128Kb/s stereo. Bandwidth = $$.
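As a rough illustration of that bitrate-times-listeners arithmetic, here's a small sketch; the per-gigabyte price is a made-up assumption, not a quote from any provider.

```python
def monthly_stream_cost(bitrate_kbps, avg_listeners, usd_per_gb=0.05):
    """Rough estimate: bitrate x average concurrent listeners x a month of seconds.
    The $0.05/GB figure is purely illustrative, not any CDN's actual rate."""
    seconds = 30 * 24 * 3600
    gigabytes = bitrate_kbps * 1000 * avg_listeners * seconds / 8 / 1e9
    return gigabytes * usd_per_gb

# 64 kb/s mono talk vs 128 kb/s stereo music, same 500 average concurrent listeners
print(round(monthly_stream_cost(64, 500), 2))    # ~518
print(round(monthly_stream_cost(128, 500), 2))   # ~1037 -- bandwidth = $$
```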
OK, so HD Radio is about 96 Kb/s and streaming might be 256 Kb/s. Someone told me streaming feeds may frequently be less than 100 Kb/s. Streaming providers say their codecs are amazing, perhaps lossless, the customers are thrilled. We are advised to not send square waves.

What does this mean?

You have to be careful about comparing lossy codec bit rates only. Some codecs are more efficient than others. MP3 for example ain't so great, which is why we like 256Kb/s or higher. But the numbers are not directly comparable. A 128Kb/s AAC stream is audibly on par with 256Kb/s mp3, at half the bit rate. And sadly, most streams are mp3. A very few streaming platforms do stream actual lossless, but it's not typical, far from average, and not as transportable as they'd like. And lossless means different things. Could mean 1.5Mb/s at 16/44.1, or it could mean 24/96. The data is repacked to conserve bandwidth and stay bit-perfect using methods like FLAC. More often it's 16/44.1 for lossless, and those high-depth, high-rate platforms are likely up-sampling, which doesn't do anybody any good.
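The raw PCM numbers behind those lossless figures are easy to check; a quick sketch (the "roughly half" FLAC packing ratio is just a ballpark assumption, not a spec).

```python
def pcm_rate_mbps(bits, sample_rate_hz, channels=2):
    """Raw stereo PCM bit rate before any lossless packing."""
    return bits * sample_rate_hz * channels / 1e6

print(pcm_rate_mbps(16, 44_100))   # 1.4112 Mb/s -- the "1.5Mb/s at 16/44.1" ballpark
print(pcm_rate_mbps(24, 96_000))   # 4.608 Mb/s for 24/96
# FLAC typically packs those to very roughly half, depending on the material.
```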

HD Radio uses a variant of HE-AAC (which was standardized in 2001), so pretty efficient, and way better than MP3, which was developed by Fraunhofer and standardized in 1995. People don't generally realize that MPEG 2 Layer 3 (MP3) includes a 15KHz low pass filter, where AAC does not.

The total bit rate for HD Radio is either 96Kb/s for Hybrid mode, or 120Kb/s for Extended Hybrid. But that’s the total bit rate, and in Extended it will be divided into separate channels at lower bit rates. You end up at 96Kb/s (98Kb/s if you're reading the NRSC papers) for HD1, less for the others.
 
Regarding FM analog, HD Radio and streaming- It is interesting that a system limited primarily by our individual actions and radio propagation is considered inferior to a system with significant built-in compromises, that every user must accept.
I'm not sure I see much difference. All transmission systems are compromised in some way. You pick your compromise. Analog FM uses pre-emphasis to tame the triangular noise spectrum, a compromise. The Zenith FM stereo idea has many problems, but won over Crosby because it didn't mess with SCAs (which were FM's main income back then), so more compromise. FM has bandwidth induced audio performance issues in general. HD has very limited bandwidth to stay on channel (yeah, great idea, hobbled from the start), and streaming has a cost directly tied to digital bandwidth (hate that application for "bandwidth"), and we're all stuck with all of it.
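To put a number on the pre-emphasis compromise, here's my own sketch of the textbook single-pole 75 microsecond boost curve at a few frequencies; nothing station-specific assumed.

```python
import math

def preemphasis_boost_db(freq_hz, tau=75e-6):
    """High-frequency boost of a single-pole pre-emphasis network, relative to
    low frequencies. 75 microseconds is the US FM time constant."""
    w_tau = 2 * math.pi * freq_hz * tau
    return 20 * math.log10(math.sqrt(1 + w_tau ** 2))

for f in (1_000, 5_000, 10_000, 15_000):
    print(f, round(preemphasis_boost_db(f), 1))
# about 0.9, 8.2, 13.7 and 17.1 dB -- why bright program material eats modulation headroom
```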
How you make your FM analog station sound is your choice. You may not have much choice with digital distribution. Despite this, we will do the best we can.
Mentioned before, the streaming kind of digital distribution gives content creators a huge choice of quality-driving parameters. But they are all coupled to the cost of stream bandwidth per listener. There's nothing stopping anyone from doing a lossless FLAC stream, except the relatively high cost per listener.

Which, BTW, is what I think is one of the biggest differences between a streamed signal and a broadcast one. The total cost goes up with each additional listener in streaming. In OTA radio the cost per listener goes down as the listener count goes up. This is offset in both cases by ad cost per 1000, but as the streaming audience grows, so does its cost. That's never been the case before.
This line of thinking applies with vehicle manufacturer digital dashboards and proprietary entertainment centers. If a device used by the audience acts as a gatekeeper, and re-processes and alters your content (perhaps adding commercials), you are losing control of your product and the experience of your audience.
The other side of the re-processing idea is that if, during HD Radio's development, we had standardized on transmitting a stream that included metadata that profiled the processing being applied, the user could use that as a "Rosetta Stone" and choose more or less processing for his needs, even making that decision based on his local ambient noise. I think that kind of radio-based reprocessing could have promoted a new level of quality audio in broadcasting, if broadcasters could give up a degree of processing control to those advanced users who preferred less, or no, processing. Never could have happened, of course, but what if?

The localized ad-inserts, pre-rolls and banner ads are a money grab. You could say it helps make streams possible, but no stream is going away without it, so...money grab.

That loss of control is offset by the wide variety of alternative program sources available to the user, many commercial free, and listener preference driven. It's why kids don't buy downloads anymore; it's all free to stream if you put up with pre-rolls and mid-rolls, and commercial free if you subscribe. That's what's going to kill us, not a quality issue. And all that's left then for radio is "live and local", which for some listeners doesn't matter. Scary.
 
That loss of control is offset by the wide variety of alternative program sources available to the user, many commercial free, and listener preference driven. It's why kids don't buy downloads anymore; it's all free to stream if you put up with pre-rolls and mid-rolls, and commercial free if you subscribe. That's what's going to kill us, not a quality issue. And all that's left then for radio is "live and local", which for some listeners doesn't matter. Scary.
But that's just reality. Live and local aren't nearly as important as on demand.
Regarding quality not being a determining factor when choosing streaming over radio, mainly for music: about twelve years ago I was involved in a research project across several markets where the company I worked for had stations. One of the things I requested be added to the focus group sessions was a technical qualitative audio section. Members of the focus group were asked to put on headphones and listen to clips of the same songs: one recorded from a fairly heavily processed FM station, and the same clip played from an AAC-formatted file. In all cases, participants said they would rather listen to the AAC audio than the processed audio from the FM station. Terms used to describe the processed audio included 'distorted' and 'doesn't sound right,' while the AAC clip was described as 'much better sounding than the first clip.'
So I maintain that, whereas the killing of radio may not be entirely caused by audio processing, it isn't exactly helping win listeners over.
 
As I recall the IBOC digital broadcasting product marketed as HD Radio, in its original specs, indicated that 0 dBFS in HD Radio would play 6 dB louder than 100 percent modulation in analog FM.
The digital side of HD Radio does indeed have 5 dB greater headroom than 100% modulation on the analog side. The idea was to allow the digital audio to be less processed while still matching the loudness of the analog audio.

And some stations do use it that way, but most have realized that in fringe areas where the receiver is constantly switching between digital and analog, it is important not only to match the time alignment and loudness of the audio, but also the density and tonal balance, to make the transition more seamless to the listener.

So HD Radio hasn't really freed us from heavy processing and the treble restraint of the pre-emphasis curve, at least as long as the analog side is still around.
 
But that's just reality. Live and local aren't nearly as important as on demand.
I'm not sure about that. In markets 1, 2, and 3 the news/talk stations rank 7th, 5th, and 4th respectively, and those formats are driven by being live/local. I don't know where to find total audience figures that include OTA, radio streaming, and streaming only sources. That would be more relevant.
Regarding quality not being a determining factor when choosing streaming over radio, mainly for music: about twelve years ago I was involved in a research project across several markets where the company I worked for had stations. One of the things I requested be added to the focus group sessions was a technical qualitative audio section. Members of the focus group were asked to put on headphones and listen to clips of the same songs: one recorded from a fairly heavily processed FM station, and the same clip played from an AAC-formatted file. In all cases, participants said they would rather listen to the AAC audio than the processed audio from the FM station. Terms used to describe the processed audio included 'distorted' and 'doesn't sound right,' while the AAC clip was described as 'much better sounding than the first clip.'
Yes, that's interesting of course, and has been done a few times. When given a choice of audio qualities for the same content, the preference is always for the better quality. Not a huge surprise. But that choice is not presented to regular listeners much, if ever. Their primary choice is one of content. Even with competing stations within a format, there are large content differences. You can see that clearly in, for example, Market 3, where there's an AM News/Talk in position 4, and they didn't get there with audio quality. That puts processing as an audience-driving factor into a much lower, and unfortunately impossible to quantify, position. All you can really say is that you don't want it to be a negative factor, so less is actually more, if you can convince the decision makers to lose the loudness war.
So I maintain that, whereas the killing of radio may not be entirely caused by audio processing, it isn't exactly helping win listeners over.
Totally agreed. Processing remains a battle of programmers' and managers' egos. Very few have the guts to back it off and let it sound good while accepting "defeat" in the loudness war. Very, very few.

Flipping from what's killing radio to what's keeping it alive...in the automotive audience radio wins with simplicity. That's until we have push-button internet radios in enough cars to influence ratings, and that's limited now by its cost to listeners. Paired phones don't count, too difficult to use while driving, and then there's the data caps. Cost for streaming to cars (data cost and hardware cost), and the lack of full preset button control holds it back, but that's only for now. Cost has held XM back from dominating too. Nothing yet beats hitting a preset button and getting the programming of your choice for free. And right now, for most, that's a radio only function.
 
The digital side of HD Radio does indeed have 5 dB greater headroom than 100% modulation on the analog side. The idea was to allow the digital audio to be less processed while still matching the loudness of the analog audio.

And some stations do use it that way, but most have realized that in fringe areas where the receiver is constantly switching between digital and analog, it is important not only to match the time alignment and loudness of the audio, but also the density and tonal balance, to make the transition more seamless to the listener.

So HD Radio hasn't really freed us from heavy processing and the treble restraint of the pre-emphasis curve, at least as long as the analog side is still around.
It's not just fringe areas, it's urban multipath areas too. Ain't that fail-over transition just a load of fun?
 
I'm not sure about that. In markets 1, 2, and 3 the news/talk stations rank 7th, 5th, and 4th respectively, and those formats are driven by being live/local. I don't know where to find total audience figures that include OTA, radio streaming, and streaming only sources. That would be more relevant.
But a news/talk audience for radio is gradually aging out of the demographics advertisers want. Younger audiences are not back-filling those slots, which makes the future for traditional radio, and any existing or past trends, that much more uncertain.
Yes, that's interesting of course, and has been done a few times. When given a choice of audio qualities for the same content, the preference is always for the better quality. Not a huge surprise. But that choice is not presented to regular listeners much, if ever. Their primary choice is one of content. Even with competing stations within a format, there are large content differences. You can see that clearly in, for example, Market 3, where there's an AM News/Talk in position 4, and they didn't get there with audio quality. That puts processing as an audience-driving factor into a much lower, and unfortunately impossible to quantify, position. All you can really say is that you don't want it to be a negative factor, so less is actually more, if you can convince the decision makers to lose the loudness war.
I was mainly talking about music to music, and demographics under 55.
Flipping from what's killing radio to what's keeping it alive...in the automotive audience radio wins with simplicity. That's until we have push-button internet radios in enough cars to influence ratings, and that's limited now by its cost to listeners.
Does that include apps like Apple CarPlay? Because most auto manufacturers are making that a highlight element of their newer vehicles, and it has become a requirement for most Millennial or Gen Z car buyers.
Paired phones don't count, too difficult to use while driving, and then there's the data caps.
That may have been the case twenty years ago, but not today. People under 50 rely on their phones for everything, and most have 'unlimited' data plans to make sure of that. My middle son's new Chevy Silverado with Apple CarPlay lets you change audio on the vehicle screen. No need to look at your phone to change audio sources or material. I frequently commute to work via motorcycle. I can use the Bluetooth setup in my helmet to my phone, and can change podcasts, SXM preset favorites, Spotify, whatever via Siri. No need to take my eyes off the road.
 
But a news/talk audience for radio is gradually aging out of the demographics advertisers want. Younger audiences are not back-filling those slots, which makes the future for traditional radio, and any existing or past trends, that much more uncertain.
No argument there, but a N/T station at #4 in one of the top 3 markets isn't exactly showing signs of aging out. It has a long way to go, and hasn't significantly changed position in many years. So far, no downward trend.
I was mainly talking about music to music, and demographics under 55.
Well, sure, but again, in the ratings there's all of that stacked down below that #4 N/T.
Does that include apps like Apple CarPlay? Because most auto manufacturers are making that a highlight element of their newer vehicles, and it has become a requirement for most Millennial or Gen Z car buyers.

That may have been the case twenty years ago, but not today. People under 50 rely on their phones for everything, and most have 'unlimited' data plans to make sure of that. My middle son's new Chevy Silverado with Apple CarPlay lets you change audio on the vehicle screen. No need to look at your phone to change audio sources or material. I frequently commute to work via motorcycle. I can use the Bluetooth setup in my helmet to my phone, and can change podcasts, SXM preset favorites, Spotify, whatever via Siri. No need to take my eyes off the road.
Ok, well CarPlay is a move in the right direction. I have it in one vehicle, but I find it far less than perfect, prone to error and frustration, and the Siri integration seems to be one bottleneck.

You'll have to PM me the specifics of your motorcycle helmet system; the one I use isn't that smart.

But if the UI and data cost, and the lack of live/local, aren't what's holding back streaming from killing radio now, what is? Or is it? I said before, I don't know where to find audience data that includes all alternative services. Nielsen doesn't include services that don't subscribe, so it's not complete listener data.

Regardless, I'll still maintain that the listener picks content first, and audio quality isn't a factor unless it's a strong negative (like unlistenable) or there's a second choice with superior audio that is identical in every other way. The best we can do as far as processing influencing choice is make sure it isn't a negative, and doesn't impact TSL. And of course some stations aren't even doing that.
 
No, I didn't say that. Everything in the audio path contributes. But that's an absolute statement, not a practical one. In practice, the contribution of any device is relative to its actual noise floor and spectrum.

But since you brought it up, there's an opamp or three in every ADC and DAC keeping them all at a real 20 bit DR, even if they produce 24 bit words. That's in any modern digital system. Any opamp circuit with today's chips will have a noise figure a dB or two above theoretical thermal noise for the input impedance. You can cascade them; equal but random noise adds 3dB each time. So yes, they contribute. But in practical terms, still not the big factor, and not a big factor at all in the received S/N. I haven't found VCAs particularly noisy either. Even the dbx202 from 40 years ago was quieter than required for FM by quite a margin. That 45 year old 5534 is darned quiet enough, again, just a few dB off theoretical thermal noise, and there are better today in just about everything.

But this is a discussion of systems, and now we're at the component level. That's not where the issues are. A broadcast system with a noise floor at -60dB re 100% is not suffering because of cascading opamps or VCAs. They are in complete circuits and applications with the capability of failure and, more likely, misadjustment.

Kidding themselves is an entirely different matter.

I meant competitive in terms of overall sound quality. Streaming, as you well know, is a completely different animal. We don't need to get into it, but you know the differences, like no pre-emph, bit-rate reduction/compression, and relatively little usage in cars.
The short audio path is the best audio path. Radio stations are generally not designed around this concept. Most of them go to the extreme opposite. Having recorded and mixed some of the stuff that still plays daily on current radio stations, I have a very good idea of how things are supposed to sound. It was my job to listen and make decisions based upon what I heard while making those records. I critically listened all day long and I worked closely with a deep variety of people who also did that. One gets a firm idea of what is "good". I became a broadcast engineer many years ago and I've been appalled at how things generally go with audio in radio.

With regard to the Compellor or any similar processor, your signal is going through many op amps and many coupling capacitors. All of the op amps and VCAs are connected to power supply "rails" that are supposed to immediately provide a low impedance voltage that exactly matches the signal swing that the op amp needs at the output. The truth is that power supply rail impedance at audio frequencies in broadcast grade equipment is going to be much higher than we really need. Just put an oscilloscope probe on one of the op amp power supply pins (AC coupled) and watch all of the audio signal dance around riding on the DC power. It's the tail wagging the dog kind of thing. That sloppy DC impedance and inadequate power supply decoupling lets op amps interact with each other at the power rail level when they aren't supposed to do that.

"But the distortion test looks good" is a bad excuse. Distortion is done with sine waves and nobody but your distortion analyzer listens to sine waves. Music demands an instant steep power peak and the power supply is supposed to deliver that and still keep its composure for the next few moments afterwards. There's poor power delivery, plus the fact that every single component that audio directly passes through adds distortion. That means capacitors, op amps and even resistors add distortion. It has been measured and proven. Distortion and noise in an audio path are cumulative. The longer the audio path the more distortion and noise will add up.

If I can avoid using a Compellor I will do that. I did my critical listening with a Compellor set so that the signal path was through the VCAs and with input and output levels matched but without the dynamic side chain moving the levels. It was a fair in circuit versus bypass test and the difference to me was noticeable.
 
The short audio path is the best audio path. Radio stations are generally not designed around this concept. Most of them go to the extreme opposite.
Physically short? Electrically short? Geographically short? What means "short"?
With regard to the Compellor or any similar processor, your signal is going through many op amps and many coupling capacitors.
Ok, well I happen to have a 320A on the bench now with the manual open.
The opamp count: For a single audio channel, there's the input diff pair (1 or 2 depending on how you consider a differential receiver circuit), a differential VCA drive (again, 1 or 2), a VCA buffer, output gain amp (and inverter), and the output amps. So, that's 4 or 6 opamps for your "many". And one VCA.

Caps? There's one in each leg of the input for DC blocking, there's another in each leg of the output gain amp, one in each leg of the output amp, and...let's see...that's about all in the audio path. 6 total.
All of the op amps and VCAs are connected to power supply "rails" that are supposed to immediately provide a low impedance voltage that exactly matches the signal swing that the op amp needs at the output. The truth is that power supply rail impedance at audio frequencies in broadcast grade equipment is going to be much higher than we really need.
But is that really the "truth" or something somebody read or told you? Because, in the schematic there are local "audio decoupling" (what they're called in the schematic) caps that take that impedance pretty far down in the audio band.
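For a rough feel of what a local decoupling cap does to rail impedance in the audio band, here's a quick sketch; the 100 uF value is an assumed typical figure, not one read off the Compellor schematic.

```python
import math

def cap_impedance_ohms(freq_hz, cap_farads):
    """Magnitude of an ideal capacitor's impedance: |Z| = 1 / (2*pi*f*C)."""
    return 1 / (2 * math.pi * freq_hz * cap_farads)

# 100 uF is an assumed, typical local rail-decoupling value.
for f in (100, 1_000, 10_000):
    print(f, round(cap_impedance_ohms(f, 100e-6), 2))
# roughly 15.9, 1.6 and 0.16 ohms across the audio band
```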
Just put an oscilloscope probe on one of the op amp power supply pins (AC coupled) and watch all of the audio signal dance around riding on the DC power. It's the tail wagging the dog kind of thing.
Ok, well since I have it open and on the bench, let's just do that. Here's one of the opamps in the output circuit, arguably a higher-current position. The scope is attached to the power supply pins on the IC, it's AC coupled, and the input sensitivity is up so we can see the noise. Here's a shot of the negative supply rail, with no audio applied.
neg-rail-no-audio.jpg
Here's the same neg rail with audio enough to deflect the output meter to full scale.

neg-rail-audio.jpg

Here's the positive rail, no audio:

pos-rail-no-audio.jpg

And the positive rail, same full output audio:
pos-rail-audio.jpg
I don't see any wagging, sagging, lagging or dragging. All looks pretty much the same.

And I really need to clean off my scope screen.
That sloppy DC impedance and inadequate power supply decoupling lets op amps interact with each other at the power rail level when they aren't supposed to do that.
That might be true if:
1. There were any such thing as DC impedance (there isn't)
2. The power supply actually were not adequately decoupled (it is)
3. Any condition actually existed that might cause opamps to interact with each other (there isn't)
"But the distortion test looks good" is a bad excuse. Distortion is done with sine waves and nobody but your distortion analyzer listens to sine waves.
Well, I don't just test with sine waves, but they are the best test for linearity. In fact, there isn't a better one.
Music demands an instant steep power peak and the power supply is supposed to deliver that and still keep its composure for the next few moments afterwards.
Um...well, no. Power is the result of voltage times current. In most of the opamp applications within a Compellor (or most other devices), there is voltage gain, but very little actual power delivered through the audio path. So no, the actual current does not spike with music "demands". A power amp is different, and that's not what these things are. But also, there is no actual "instant steep power peak" anyway; it's all limited to the maximum frequency and amplitude of the audio waveform. There are lots of much faster signals, and they too can be handled.
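Some made-up but representative numbers on how little power a line-level stage actually delivers; the +4 dBu level and 10 kilohm bridging load are assumptions for illustration, not Compellor specs.

```python
def dbu_to_vrms(dbu):
    """0 dBu is defined as 0.7746 Vrms."""
    return 0.7746 * 10 ** (dbu / 20)

def power_mw(dbu, load_ohms):
    """Power actually delivered into a load at a given line level."""
    v = dbu_to_vrms(dbu)
    return 1000 * v * v / load_ohms

# Assumed: +4 dBu nominal line level, 10 kohm bridging load.
print(round(power_mw(4, 10_000), 3))   # ~0.151 mW at nominal level
print(round(power_mw(24, 10_000), 2))  # ~15.07 mW even at a +24 dBu peak
```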
There's poor power delivery, plus the fact that every single component that audio directly passes through adds distortion. That means capacitors, op amps and even resistors add distortion.
Yikes, what a horrible world we live in. But we don't actually always have poor power delivery, and not every component adds audibly to distortion. Capacitors, operated correctly, do not cause distortion; only if the wrong types are chosen or they are applied wrongly in the circuit. Resistors' distortion contribution is far, far below anything audible, or anything else in any circuit. Opamp distortion is well known, and also vanishingly small. Your distortion generators are all transducers: mics, speakers, headphones, phono carts, tape heads, and so on, and by several orders of magnitude. Discussing them as significant signal modifiers, along with resistors, is just overblowing the absolute into the absurd.
It has been measured and proven.
I don't think you really want to go down that rabbit hole.
Distortion and noise in an audio path are cumulative. The longer the audio path the more distortion and noise will add up.
I guess "longer" in this case means "more devices". Sure, in the absolute, but in the practical world how would one device with a noise floor at, say, -98dBU add to another noise floor at -98dBU? Assuming they are uncorrelated, but identical spectrum and amplitude, well that's easy, +3dB. But is that a problem? And does that occur? Because idencial noise levels is rare, as is an identical noise spectrum. 3dB add is worst case, are actually rare. Distortion doesn't simply add. To do so the specific nonlinearity that caused it would have to be identical.
If I can avoid using a Compellor I will do that.
Yeah, I think I got that part.
I did my critical listening with a Compellor set so that the signal path was through the VCAs and with input and output levels matched but without the dynamic side chain moving the levels. It was a fair in circuit versus bypass test and the difference to me was noticeable.
My next response, if I wanted to really pursue this, would be to ask if the test was a true ABX/DBT. But let's not.

Look, you hate Compellors and broadcast audio chains. Ok fine. Some have them in their lives, not by choice, but by necessity. Others can shorten the chain, and make everything a digital stream. But then, how does someone like you feel about 24/48? Resolution too low? Or do you realize that there actually is no 24 bit audio because there are no real 24 bit performance ADCs? Or is anything digital bad because of the mythical stairsteps (which don't exist) or the slicing and dicing (that doesn't happen)?

How can you tolerate this kind of audio business at all??

Amused, here.
 
No argument there, but a N/T station at #4 in one of the top 3 markets isn't exactly showing signs of aging out. It has a long way to go, and hasn't significantly changed position in many years. So far, no downward trend.
The median age of a news/talk listener in 2022 was 56 according to Nielsen and Scarborough Research.

It's not "showing signs", its already done and dusted. KFI, for example, is usually ranked between 15th and 20th in the 25-54 demo, despite being in the top 5 overall.
 
It's not "showing signs", its already done and dusted. KFI, for example, is usually ranked between 15th and 20th in the 25-54 demo, despite being in the top 5 overall.
But KFI is 3rd in billing in the market. Local direct and many local agencies don't buy strictly by demos. And with the decrease in agency business and the decrease in 18-34 use of radio, the options for older-leaning stations are very good all of a sudden!
 