
Audio Processing questions

MarkPD

New member
Today stations store music in computers with audio levels normalized. But how did stations run the board back when they played CDs and carts (or just carts)? Were levels normalized at the board? Were AGC release times faster than now to compensate for input levels being all over the place?

In the present, what's the way to get a major market sound? I'm talking about stations that sound like the AGC release is 0.5 dB/s, yet there are no holes in the programming from the slow release. Songs with low intros sound like the intro was boosted so the AGC release wouldn't be obvious after a loud liner. Do some stations pre-process their music to get this effect?
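For reference, the slow-release behavior described above can be sketched as a toy single-band AGC whose release is capped in dB per second. Everything here (the control rate, target level, and attack/release figures) is a hypothetical illustration, not any real processor's design:

```python
def agc_gain(levels_db, blocks_per_s, target_db=-12.0,
             attack_db_per_s=50.0, release_db_per_s=0.5):
    """Trace the gain (in dB) of a toy single-band AGC.

    levels_db: measured input level per control block, in dBFS.
    A release capped at 0.5 dB/s means roughly 20 seconds to
    recover 10 dB of gain after a loud liner -- which is why a
    soft intro right after one can sound like it fell in a hole.
    """
    atk = attack_db_per_s / blocks_per_s   # max gain cut per block
    rel = release_db_per_s / blocks_per_s  # max gain rise per block
    gain, trace = 0.0, []
    for lvl in levels_db:
        error = target_db - (lvl + gain)   # distance from target
        if error < 0:                      # too loud: attack fast
            gain += max(error, -atk)
        else:                              # too quiet: release slowly
            gain += min(error, rel)
        trace.append(gain)
    return trace

# 10 control blocks/s: a loud liner (-2 dBFS), then a soft intro
# (-22 dBFS). The gain drops to -10 dB almost instantly, then crawls
# back at 0.05 dB per block -- only 2.5 dB recovered in 5 seconds.
g = agc_gain([-2.0] * 50 + [-22.0] * 50, blocks_per_s=10.0)
```

Pre-processing the soft intro (riding it up before it hits the air chain) hides exactly this crawl, which is consistent with the "boosted intro" effect described above.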
 
It was called humans, plus cart machines zeroed out in all the production rooms and matched playback levels in the studio.

When audio was dubbed onto a cart, in a perfect world every jock and production person would strive for a similar reading on the VU meter.
Often when I'd dub music onto carts, I'd ride the gain of a soft-intro song while recording to help offset the work the jock and processing had to do when the song was played on the air. Then there were actual people in the studio whose job was not only to entertain but to actually "run the board," watching the levels of each element and adjusting the pot to stay as close to 100% VU as possible. This seems to be a lost art now.
When I was on the air, if I knew a song had a soft intro I'd jam the pot higher/hotter to compensate as well. CDs complicated this a bit, as you couldn't massage the level on the way into the recording. You just had to get better at running the levels when you were on the air.

Now everything is just a file and you drag it in. Sure, you can normalize, but no one touches the level of the audio on the way into the automation unless you indeed dub in every song. When jocks are voice tracking, they have little if any control over the song levels. And as much as I hate to say it, many of the jocks who are actually live probably wonder what that thing that goes up and down on the board means. Either the pot or the meter!

My take is the exact opposite of yours: levels were more consistent back then than they are now.

And just my take: processing was more aggressive back then. It's much cleaner now, but that seems to come at the expense of a solid AGC, with the above-mentioned issues contributing.
 

Audio processing at major market stations had a bit of magic to it. KHJ in LA had the processing behind a smoked-glass door on a lockable rack. The gear consisted of modified off-the-shelf processors configured into a multi-band system with custom crossovers.

Even before that, we'd take the standard Audimax and Volumax of the 60's and make some resistor and capacitor changes to modify the time constants. The Audimax, generally at the studio, had the job of "normalizing" the audio before sending it to the STL or lines to the transmitter. At that end (sometimes in the same room, of course) the Volumax would operate as a peak limiter / clipper to prevent over-modulation; when driven hard, you could get very dense audio.
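As a rough sketch of that division of labor (a slow gain rider at the studio, a hard peak ceiling at the transmitter), here is a toy two-stage chain. The envelope follower, time constants, and thresholds are invented for illustration and have nothing to do with the actual Audimax/Volumax circuitry:

```python
import math

def leveler(samples, target=0.25, alpha=0.0005):
    """Slow gain rider (the Audimax's role): tracks a smoothed
    envelope and eases the gain toward target / envelope."""
    env, gain, out = target, 1.0, []
    for x in samples:
        env = 0.999 * env + 0.001 * abs(x)             # slow envelope follower
        gain += alpha * (target / max(env, 1e-4) - gain)  # long time constant
        out.append(gain * x)
    return out

def peak_clipper(samples, ceiling=0.9):
    """Hard ceiling (the Volumax's role): prevents over-modulation;
    drive it hard and the audio gets very dense."""
    return [max(-ceiling, min(ceiling, x)) for x in samples]

# Studio -> STL -> transmitter, in miniature: a quiet 440 Hz tone is
# slowly leveled upward, then the clipper guarantees nothing exceeds 0.9.
tone = [0.2 * math.sin(2 * math.pi * 440 * n / 8000) for n in range(8000)]
processed = peak_clipper(leveler(tone))
```

The point of the two stages is that the first can be slow and gentle because the second provides an absolute ceiling; pushed hard, the clipper is what produced the dense sound mentioned above.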

By the mid-'70s we were using things like sets of LA-3As with crossovers for leveling. Then came the '80s and the Compellor, for decades the standard for off-the-shelf AGC. At the transmitter end, we went from a single-band device like the Volumax to multiband processors, ending up with the Optimod in the mid-'70s and its major competitor from Frank Foti.

In the earlier days... and when the loudness wars were at their worst... audio would often pump, so jocks who did not talk fast enough would find the air-conditioner noise in the studio being sucked up to high levels, and fade-outs or fade-ins got sucked up and sounded tinny and strange.

One clarification... stations tend to standardize levels, not "normalize" them. Normalizing in digital processing tends to mean the computer limiting peaks and raising low passages, which results in a generally denser sound than the original recording. Stations want to preserve the original recording (ignoring the fact that so many songs are processed so much they look like square waves on a scope) and let the "live" audio chain from the studio to the transmitter take care of level control and density control.
 
David,
Thanks for the trip down memory lane! Loved that stuff. I came in just after that era, as the Optimod and Prisms were gaining popularity.
Not that I've ever been involved... ahem... but the 8100XT could be hot-rodded a bit as well: removal of a few chips and cutting of connections on certain cards.
I heard a story of someone in Cleveland back in the day not injecting the pilot within the processing but doing it as the last step in the chain. I have also seen/heard the Cobalt Optimod replacement cards, if I remember, in sync with Prisms. And only one time had I heard of a type of red card? that went with the Prisms as well. And usually the Compellor in front of all of them. I liked the sound of the XT better than the Prisms, though. At one station we ran the Clear Channel C4 Clipper at the transmitter site after all of this. "Loud"... comes to mind...
 



Yes, and the Gates Solid Statesman FM limiter clipped like crazy. But it wasn't until Bob Orban figured out what to do about the ringing and overshoot that clipped waveforms cause filters to produce that things got really loud. That was the mid-1970s.
And if the jock paused too long, the total gain would suck up enough to induce headphone-to-mic feedback.
"Normalizing" in the digital world only sets a maximum peak level; it doesn't change dynamics, and it is loudness-blind. Processes like ReplayGain or Apple's Sound Check compute a rough loudness profile of the entire track and store gain-offset metadata that playback software can use to adjust playback gain, compensating for the aggressive loudness-war processing found on today's music.
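To make that distinction concrete, here is a small sketch contrasting the two ideas. The RMS-based loudness estimate below is a crude stand-in (real ReplayGain and Sound Check use proper loudness models), and the reference level is just an illustrative assumption:

```python
import math

def peak_normalize_gain(samples, peak_target=1.0):
    """Gain that brings the highest peak to the target.
    Applied uniformly, so dynamics are unchanged -- and it says
    nothing about how loud the track actually sounds."""
    peak = max(abs(x) for x in samples)
    return peak_target / peak

def replaygain_style_offset(samples, ref_rms_db=-20.0):
    """ReplayGain / Sound Check-style idea, crudely approximated
    with plain RMS: return a dB offset the *player* applies at
    playback time; the audio file itself is untouched."""
    rms = math.sqrt(sum(x * x for x in samples) / len(samples))
    return ref_rms_db - 20 * math.log10(rms)   # stored as metadata

# A heavily limited "loudness war" track and a dynamic one can share
# the same peak, so peak normalization treats them almost identically...
loud = [0.99 * (1 if i % 2 else -1) for i in range(1000)]   # dense, square-ish
quiet = [0.99 * math.sin(2 * math.pi * i / 100) * (i / 1000) for i in range(1000)]
# ...while the loudness-style offset turns the dense one way *down*.
```

The dense track gets a large negative playback offset while its stored samples stay exactly as mastered, which is the point being made above: normalization sets peaks, loudness compensation is a separate, metadata-driven step.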
 
Many other developers worked on proprietary systems, though. Y-100 in Miami had audio better than any other station in the market in the late '70s and early '80s, thanks to gear that the late Doug Holland and Howard Quintin made in the shop; discussion of new chips and their slew rates was common among them. Greg Oginowski developed very effective multi-band processing in the late '70s that was better than anything available, particularly for AMs.


Thanks for stating it better than I did. My point is that radio compresses and limits, but it does not normalize in the sense that editing or recording software does.
 