
Any product similar to Aphex Compellor?

MM797, from an analog audio and sound quality standpoint you are correct; I agree.
That aspect does not tell the whole story, though, because audio sound quality is not always the only thing that matters.

My historic view-

Back in the old days, in the '70s and early '80s, the number of transformers, op-amps, capacitors, and VCAs in the signal path absolutely made a difference, and sound quality could be degraded. Broadcasters had to choose between the benefit of audio processing and the sound degradation of a long signal path. For example, you might not have wanted to go through a nasty VCA to get single-band leveling you didn't really need to achieve your processing goal.

As time went on op-amps improved and many designers became adept at achieving desired processing sound with a minimum amount of degradation in the analog signal path.

Then digital arrived. At first it was horrible. Over time, digital signal processing (DSP) improved, with audio engines achieving something amazing like 32-bit internal core processing dynamic range, and if A/D or D/A interfaces are needed, they can be 20- to 24-bit. Now good digital sounds fantastic.

For many years now, in digital there has been no practical sound quality penalty for "stacking" audio stages to achieve a complex audio processing structure. Within reason, the DSP does not care. Today, if the software and hardware can do it, you can make your audio processing dreams or nightmares come true. The sound quality risk is not the software and hardware; it is your own actions.

MM797, in the historical sense you are absolutely correct. In analog, a shorter, high-quality signal path is preferable, unless the benefit of more audio processing stages outweighs the sound quality degradation caused by more things in the signal path.

MM797, excellent comments about analog audio and bench work with the scope.

btw- I might add that except for output stages, op-amps rarely need much current drive capability. A 5 kΩ load draws just a few mA even at the rails.

Best regards.
 
But KFI is 3rd in billing in the market. Local direct and many local agencies don't buy strictly by demos. And with the decrease in agency business and the decrease in 18-34 use of radio, the options for older-leaning stations are very good all of a sudden!
You mean, ultimately radio gets the leftovers?
 
No, I mean that in local sales, older-leaning stations now have an advantage, since there is TSL and cume erosion in strictly 18-34-targeted formats.
 
But for reasons that we've discussed in other sections, local sales have been lagging since 2008 and even more so since the pandemic. The gravity of it depends especially on market size. Smaller local businesses living close to the margin couldn't stay afloat with no business, and those that did switched to online/digital advertising because it's cheaper. Auto and furniture advertising remains in a slump, and big-box stores that only do agency buys are pushing other local businesses out.
It doesn't matter if you're number one or two with whatever talk programming; if businesses won't buy or can't afford to buy your station, it's not a good sign.
 
And that is the situation that those who criticize the cuts in staff, promotions, and other areas don't understand: adjusted for inflation, radio revenue is off by around 65% since Y2K. We have more stations fighting for fewer dollars, plus the effects of inflation that have reduced revenue and increased operating costs.
 
MM797, from an analog audio and sound quality standpoint you are correct; I agree.
That aspect does not tell the whole story, though, because audio sound quality is not always the only thing that matters.

My historic view-

Back in the old days, in the '70s and early '80s, the number of transformers, op-amps, capacitors, and VCAs in the signal path absolutely made a difference, and sound quality could be degraded.
Your timeline is a little messed up. The typical console of the 1970s didn't use opamps in the audio path because what was available wasn't good enough. The 741 was too slow, the 709 was basically awful, and the LM301 might show up from time to time, but not much. Consoles were built with discrete amplifiers until the arrival of the TL072 and NE5534 in the last half of the 1970s. The BMX I used 5534s, designed in 1977-78, and those were ubiquitous. From a performance standpoint, the circuit design is just fine today. The NE5532 and 5534 are still excellent audio opamps.

VCAs, as a functional block, were invented by David Blackmer in 1973, and first showed up in console use in large automated mixing consoles in the mid-1970s as a viable replacement for motor-driven passive faders. The dbx 202 is a tad noisy by today's standards, though still respectable, but otherwise quite clean, with relatively "musical" 2nd-order distortion being dominant. VCAs didn't start showing up in broadcast consoles until, I think, the Auditronics series, and were never a limiting factor in total performance.
Broadcasters had to choose between the benefit of audio processing and the sound degradation of a long signal path. For example, you might not have wanted to go through a nasty VCA to get single-band leveling you didn't really need to achieve your processing goal.
I don't even know what a "nasty VCA" would be, unless you're referring to a gain control circuit implemented in a specific audio processor, which could be a four-quadrant multiplier, an FET, or even biased bipolars. Yeah, those were not the best ever, but they were also part of the processing and limited to very few instances. But the distortion in a gain control element used in the processing of that day wasn't caused by the gain control device itself as much as by the control time constants causing gain modulation at an audio rate. If the rate of gain change was slow enough, the devices were still not the biggest distortion factor in the system.
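To make the gain-modulation point concrete, here is a minimal numpy sketch (illustrative only, not any particular processor's algorithm): a clean 1 kHz tone goes through an idealized peak limiter whose gain follows an envelope detector. With a release time comparable to the waveform period, the gain pumps at an audio rate and generates harmonics; with a slow release, the very same gain element is far cleaner. The threshold and time constants are arbitrary assumptions.

import numpy as np

fs = 48000
t = np.arange(fs) / fs             # 1 second of samples
x = np.sin(2 * np.pi * 1000 * t)   # clean 1 kHz test tone

def limit(x, threshold=0.5, attack_ms=0.1, release_ms=1.0, fs=48000):
    # Idealized feed-forward peak limiter: fast attack, exponential release.
    a_att = np.exp(-1.0 / (fs * attack_ms / 1000.0))
    a_rel = np.exp(-1.0 / (fs * release_ms / 1000.0))
    env = 0.0
    y = np.empty_like(x)
    for n, s in enumerate(x):
        peak = abs(s)
        coeff = a_att if peak > env else a_rel
        env = coeff * env + (1.0 - coeff) * peak
        gain = min(1.0, threshold / env) if env > 0 else 1.0
        y[n] = s * gain
    return y

def harmonics_vs_fundamental_db(y, f0=1000, fs=48000):
    # Crude distortion estimate: harmonic energy relative to the fundamental.
    spec = np.abs(np.fft.rfft(y * np.hanning(len(y))))
    b = int(round(f0 * len(y) / fs))
    fund = spec[b - 2:b + 3].max()
    harm = sum(spec[k * b - 2:k * b + 3].max() for k in range(2, 6))
    return 20 * np.log10(harm / fund)

fast = limit(x, release_ms=1.0)    # release comparable to the 1 kHz period
slow = limit(x, release_ms=500.0)  # release much slower than the waveform
print("fast release, harmonics re fundamental: %.1f dB" % harmonics_vs_fundamental_db(fast))
print("slow release, harmonics re fundamental: %.1f dB" % harmonics_vs_fundamental_db(slow))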
As time went on op-amps improved and many designers became adept at achieving desired processing sound with a minimum amount of degradation in the analog signal path.
Opamps improved in a single jump in the mid-1970s, with the NE5534 and the TL072 and their relatives. Some have pointed out, which I think is pretty key in this discussion, that audio recorded and mixed from the late 1970s on, before the path became 100% digital, had all passed through dozens, if not hundreds, of opamps before it got to the radio station. And they didn't destroy anything.

The big distortion mechanisms in the broadcast chain of the last 40 years have all been around processing, and related primarily to two functions: rapid gain modulation and clipping. The result of those two things completely swamps anything an opamp or nasty VCA might contribute, even if chained dozens of times.
Then digital arrived. At first it was horrible. Over time, digital signal processing (DSP) improved, with audio engines achieving something amazing like 32-bit internal core processing dynamic range, and if A/D or D/A interfaces are needed, they can be 20- to 24-bit. Now good digital sounds fantastic.
The above implies that "digital" was "horrible" because it wasn't 20-24 bits. That's simply not true. There were a fair number of horrible CDs because people slopped up the transfers of analog masters to early releases. Things like using an equalized master or not decoding a Dolby A tape were, sadly, not uncommon, and gave digital audio a rough start. But many of the early analog transfers were excellent, and DDD projects were, and still are, spectacular. People point their fingers at the analog anti-aliasing filters and ADCs/DACs of the day, usually citing the Sony PCM-1610. Sure, we have far better now, but the unfortunate reality is that those things were used on the great CDs and the terrible ones. There wasn't much else to choose from. So how did we get great CDs out of them too? Because they were done carefully.

24 bits increases theoretical dynamic range only. Real dynamic range is limited by Johnson noise at the ADC input, and that's a 20-bit noise floor. Noise is the only thing that improves with increased bit depth; let's make sure we understand that.
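For what it's worth, here are the back-of-the-envelope numbers behind that. The specific values below (a 2 kΩ source impedance at the converter input, 20 kHz bandwidth, room temperature, 2 Vrms full scale) are assumptions for illustration, and with them the thermal floor works out to roughly 20-21 equivalent bits:

import math

k_B = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0            # room temperature, K
R = 2000.0           # assumed source impedance at the ADC input, ohms
B = 20000.0          # audio bandwidth, Hz
v_fs = 2.0           # assumed full-scale input, Vrms

# Ideal quantization SNR for an N-bit converter with a full-scale sine:
for bits in (16, 20, 24):
    print(f"{bits}-bit theoretical SNR: {6.02 * bits + 1.76:.1f} dB")

# Thermal (Johnson) noise of the source resistance, and the dynamic range
# available above that floor expressed in equivalent bits:
v_noise = math.sqrt(4 * k_B * T * R * B)
dr_db = 20 * math.log10(v_fs / v_noise)
print(f"Johnson noise: {v_noise * 1e9:.0f} nVrms")
print(f"Usable dynamic range: {dr_db:.1f} dB (~{(dr_db - 1.76) / 6.02:.1f} equivalent bits)")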

DSP has improved in speed and processing capacity, with the ability to run more biquads at lower latency. None of that directly improves audio quality. Where DSP is concerned, what improves audio quality is better algorithms, and we can now run better algorithms faster.
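For readers wondering what a "biquad" is here: a single second-order IIR filter section, the building block DSP-based processors chain up by the dozen for EQ bands, crossovers and the like. The minimal sketch below uses the widely published Audio EQ Cookbook peaking-EQ coefficients; the frequency, gain and Q values are arbitrary examples.

import math

def peaking_biquad(f0, gain_db, q, fs):
    # Normalized (b0, b1, b2, a1, a2) for a peaking-EQ biquad (Audio EQ Cookbook).
    A = 10.0 ** (gain_db / 40.0)
    w0 = 2.0 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2.0 * q)
    b0, b1, b2 = 1 + alpha * A, -2 * math.cos(w0), 1 - alpha * A
    a0, a1, a2 = 1 + alpha / A, -2 * math.cos(w0), 1 - alpha / A
    return b0 / a0, b1 / a0, b2 / a0, a1 / a0, a2 / a0

def run_biquad(x, coeffs):
    # Direct form I: y[n] = b0*x[n] + b1*x[n-1] + b2*x[n-2] - a1*y[n-1] - a2*y[n-2]
    b0, b1, b2, a1, a2 = coeffs
    x1 = x2 = y1 = y2 = 0.0
    out = []
    for s in x:
        y = b0 * s + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2
        x1, x2, y1, y2 = s, x1, y, y1
        out.append(y)
    return out

# Example: +6 dB bell at 3 kHz, Q = 1.4, 48 kHz sample rate.
coeffs = peaking_biquad(3000.0, 6.0, 1.4, 48000.0)
print([round(c, 6) for c in coeffs])
print([round(v, 6) for v in run_biquad([1.0] + [0.0] * 7, coeffs)])  # start of the impulse response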
For many years now, in digital there has been no practical sound quality penalty for "stacking" audio stages to achieve a complex audio processing structure. Within reason, the DSP does not care.
Yeah, well, not true. There is always a potential penalty for stacking audio stages when they do things that don't complement each other. You could, for example, put a wide-band peak limiter in front of your multiband limiter. Would that result in a penalty in audio quality? Yup. The rules still apply in the digital domain.
Today, if the software and hardware can do it, you can make your audio processing dreams or nightmares come true. The sound quality risk is not the software and hardware; it is your own actions.
But it's not the capability of modern DSP that changes what happens when audio stages are "stacked". A DSP runs algorithms that modify audio data. There is always a penalty of sorts for cascading audio modifiers that are not specifically intended to complement each other, the way a slow AGC preceding a multiband processor is. Digital did not change that, though. It was just as true in the analog days. You can focus on the potential noise buildup of doing it in analog vs. doing it all in one DSP algorithm, but it doesn't matter when the noise is masked by the signal.
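A quick numeric illustration of the "within reason, the DSP does not care" side of this: push a signal through 100 cascaded gain stages, alternating +6 dB and -6 dB in 32-bit float, and the result drifts from the original by well under one 16-bit LSB. The stage count and gain values are arbitrary; the point is that numeric precision is not where the stacking penalty comes from.

import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, 48000).astype(np.float32)   # one second of test "audio"

up = np.float32(10.0 ** (6.0 / 20.0))    # +6 dB gain
down = np.float32(1.0) / up              # -6 dB gain

y = x.copy()
for _ in range(50):                      # 50 up/down pairs = 100 gain stages
    y = (y * up) * down

err = np.max(np.abs(y.astype(np.float64) - x.astype(np.float64)))
lsb16 = 1.0 / 32768.0                    # one 16-bit LSB at full scale
print(f"worst-case error after 100 float32 gain stages: {err:.3e}")
print(f"one 16-bit LSB: {lsb16:.3e} (error = {err / lsb16:.3f} LSB)")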
MM797, excellent comments about analog audio and bench work with the scope.
Sorry...didn't you see my scope pix that showed his claims were unfounded?
btw- I might add that except for output stages, op-amps rarely need much current drive capability. A 5 kΩ load draws just a few mA even at the rails.
I believe I said something similar.
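The arithmetic behind that, assuming typical ±15 V rails (an assumption; the posts above don't specify the supply):

# Ohm's law, I = V / R, for the "5 kΩ load is just a few mA even at the rails" point.
rail_v = 15.0        # worst-case output swing, volts (assumed +/-15 V rails)
load_ohms = 5000.0   # 5 kΩ load

i_ma = rail_v / load_ohms * 1000.0
print(f"Peak load current at the rail: {i_ma:.1f} mA")   # -> 3.0 mA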
 