Did Redbook get it right?

I've always felt a tension between a) the narrative that the Redbook spec murdered music, probably in cahoots with greedy plastic vendors, and b) the great respect I've had for the engineers I've worked with. I would think they knew what they were doing, considering the stakes and the state of their art at the time.

I leaned towards the murder/greed scenario, especially as my original Sony 520-ES CD player presented a fleshless corpse of Joni's Blue album, and the few high-end players of the time I tried, like the Enlightened Audio, seemed to fail at resurrection.

I've reconsidered. If I rip my CDs to FLAC, feed a Benchmark DAC over USB, and run that into my tube amplification, I am stunned by how good and satisfying many CDs sound. I have no desire to fire the Linn Sondek back up. I have no sense of things missing. Sure, there are many crap CDs, but is any of that stink coming from the Redbook spec? Some newer CDs simply stun. I'm not into country, but something like the Mavericks' In Time CD is acoustically complete and fully fleshed.

I've been over to HDTracks and Acoustic Sounds to download hi-rez versions, and I can feel the pull to feed my rig the best I can buy. It's such a good story, easily embraced by the audiophile mind, but I'm increasingly wondering if it is all marketing razzle-dazzle...more, denser, higher...and in the end, Redbook got it right, and the new DACs finally do it justice.

I'm always keeping an open mind, and there's much better gear out there than mine, but I'm newly impressed by the original spec.
electroslacker

Showing 1 response by kirkus

Way back in the early '80s, that was the story being told, and there was a lot of screaming about how the 44.1kHz rate was a big compromise.
IIRC the origin of 44.1kHz had to do with the fact that digital masters were sent to the pressing plants on Sony 1630 U-Matic videotape, and 44.1kHz is a workable sampling rate for both PAL- and NTSC-format units, putting three audio samples in each line of video.
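If that sounds too neat, the arithmetic actually checks out for both video standards. A quick sketch (the usable-line counts here are the commonly cited figures, not anything from this thread):

```python
# Sketch: how packing 3 audio samples per video line yields 44.1kHz on
# U-Matic-based PCM adapters. Line counts are the commonly cited
# usable-line figures for each standard.

def sample_rate(fields_per_sec, lines_per_field, samples_per_line=3):
    """Samples per second when audio is packed into usable video lines."""
    return fields_per_sec * lines_per_field * samples_per_line

# NTSC: 60 fields/s, 245 usable lines per field
print(sample_rate(60, 245))  # 44100
# PAL: 50 fields/s, 294 usable lines per field
print(sample_rate(50, 294))  # 44100
```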

But most of the early pro digital recorders and devices used sampling rates from 32kHz to 50kHz, at bit depths from 12 to 16 bits. In that range there's some great-sounding gear (a particular Weiss delay comes to mind), some horrible-sounding gear (I'm thinking of an early Lexicon delay), and some stuff that can go either way depending on how it's used (like the 3M Digital Mastering System). There was (and remains) no simple, constant correlation between the exact sampling rate or bit depth and the sound quality of the unit.
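To put rough numbers on what those specs imply on paper, here's a quick sketch of the idealized limits (the 6.02dB-per-bit rule is textbook theory; real converters of that era fell well short of it, which is exactly why the specs alone tell you so little):

```python
# Idealized limits implied by sampling rate and bit depth.
# Theoretical figures only -- actual early converters performed worse.

def nyquist_khz(rate_khz):
    """Highest representable frequency: half the sampling rate."""
    return rate_khz / 2

def dynamic_range_db(bits):
    """Idealized quantization dynamic range: ~6.02dB per bit + 1.76dB."""
    return 6.02 * bits + 1.76

for rate, bits in [(32, 12), (44.1, 16), (50, 16)]:
    print(f"{rate}kHz/{bits}-bit: {nyquist_khz(rate):.2f}kHz bandwidth, "
          f"{dynamic_range_db(bits):.1f}dB dynamic range")
```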

Personally, I think the Red Book standard strikes a beautiful compromise across a whole slew of engineering criteria, especially when you consider all the challenges in media and hardware manufacturing that had to be met for it to be adopted on a worldwide mass-market scale. It's easy to look back with hindsight and think they should have done this or that differently, but one can do that just as easily (and brutally) with the LP format. The engineers of the time of course realized that it had its limitations, but they designed a music format that is remarkably durable, reliable, convenient, consistent, and economical . . . and if care is taken in the production and playback processes, it can sound really good too.

Suppose that instead the CD had been, say, 8 inches in diameter, and based around 88.2kHz/20-bit PCM, extremely robust parity-based error correction with buffering, an S/PDIF interconnection standard with a separate word clock, etc. etc. etc. The question is, would this format have been so good as to have completely avoided the whole "digital-vs.-analog" polarization?
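A quick back-of-the-envelope (my own arithmetic, not from any spec) on why that hypothetical format would need a bigger disc in the first place:

```python
# Raw stereo PCM data rates: Red Book vs. the hypothetical format above.

def pcm_bitrate_mbps(rate_hz, bits, channels=2):
    """Raw PCM bit rate in megabits per second."""
    return rate_hz * bits * channels / 1e6

redbook = pcm_bitrate_mbps(44_100, 16)       # ~1.41 Mbit/s
hypothetical = pcm_bitrate_mbps(88_200, 20)  # ~3.53 Mbit/s

print(f"Red Book:       {redbook:.2f} Mbit/s")
print(f"88.2kHz/20-bit: {hypothetical:.2f} Mbit/s "
      f"({hypothetical / redbook:.1f}x the data for the same music)")
```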

I think the obvious answer is that it wouldn't . . . and as audiophilia is so rife with binary (sic) debates, this was going to happen no matter what. The same personalities would get into the same arguments, and today be talking about how nothing stripped the "soul" from the music like that sinister conspiracy that brought us 88.2kHz/20-bit audio.