Bits Are Bits, Right?


So I'm currently heading down the path of exploring which CD-Rs sound best in my CD player, along with which burn speeds sound best and which CD burners make the best CDs. I already know from my research that the more accurately the pits are placed on the CD (i.e. the less jitter in the recorded data), the better chance I stand of getting the CD to sound good. There is a counter-argument to this idea that goes something like this: "Bits are bits and as long as the CD player can read them, the accuracy of the spacing doesn't matter because everything is thrown into a buffer which removes the effect of any jitter written into the data during burning." I know I don't agree with that logic, but for the life of me I can't remember the technical reasons. I know I used to know. Haha!

So who here knows why buffers don't solve all of our problems in the digital realm? How come timing accuracy matters in the stages before the data buffer?
mkgus
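To make the "bits are bits" position concrete before anyone answers: here is a minimal Python sketch (my own illustration with invented numbers, not anything from a real player's firmware) of what a playback FIFO is claimed to do. Samples arrive with jittered timing, but the output side is reclocked at a fixed period, so the data comes out evenly spaced regardless.

```python
import random
from collections import deque

# Samples arrive from the "disc" with jittered timing and go into a FIFO.
# The output side pulls them out on a fixed-period clock, so the output
# spacing is even no matter how uneven the arrivals were.

SAMPLE_PERIOD_US = 22.68  # roughly 1/44100 s, in microseconds

fifo = deque()
arrival = 0.0
for n in range(8):
    arrival += SAMPLE_PERIOD_US + random.uniform(-3.0, 3.0)  # jittered arrival
    fifo.append((arrival, n))

out_clock = 200.0  # start playback only after the buffer has filled
while fifo:
    arrived_at, n = fifo.popleft()
    print(f"sample {n}: arrived at {arrived_at:7.2f} us, played at {out_clock:7.2f} us")
    out_clock += SAMPLE_PERIOD_US  # reclocked output: perfectly even spacing
```

Note that the sample values come out identical either way; the replies below argue that any damage happens elsewhere in the machine, not in the data itself.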
From a post by member Kirkus (who has a vast amount of hands-on experience with the internal workings of CD players) in this Audiogon thread from 2011:

Two big conceptual errors I see very commonly are the assumption that any intrinsic jitter related to retrieval of information off of a CD actually occurs through the forward signal/data path, and that any sonic artifact associated with parts upstream of the DAC must be classifiable as jitter.

In reality, CD players, transports, and DACs are a menagerie of true mixed-signal design problems, and there are a lot of different noise sources living in close proximity with susceptible circuit nodes. One oft-overlooked source is crosstalk from the disc servomechanism into other parts of the machine . . . analog circuitry, S/PDIF transmitters, PLL clock, etc., which can be dependent on the condition of the disc.

One easy way of measuring this on the test bench is to have two versions of the same test-tone CD, one pristine, the other scratched. A conventional distortion analyzer is used to null out the tone(s), and then an FFT (or visual 'scope analysis) is used to analyze the residual. One would be surprised at some of the nasty things that sometimes come up out of the noise floor when the focus and tracking servos suddenly have to work really hard to read the disc.

Regards,
-- Al
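Kirkus's null-the-tone, FFT-the-residual technique can be sketched numerically. This is a toy simulation under assumptions of my own (a 1 kHz test tone plus an invented 120 Hz "servo crosstalk" component), not his actual bench setup; a least-squares fit plays the role of the distortion analyzer's null.

```python
import numpy as np

fs = 48000
t = np.arange(fs) / fs
fundamental = np.sin(2 * np.pi * 1000 * t)
disturbance = 1e-4 * np.sin(2 * np.pi * 120 * t)   # pretend servo crosstalk
signal = fundamental + disturbance

# Null the fundamental: least-squares fit of sin/cos at 1 kHz, then subtract
basis = np.column_stack([np.sin(2 * np.pi * 1000 * t),
                         np.cos(2 * np.pi * 1000 * t)])
coeffs, *_ = np.linalg.lstsq(basis, signal, rcond=None)
residual = signal - basis @ coeffs

# FFT the residual and see what climbs out of the noise floor
spectrum = np.abs(np.fft.rfft(residual)) / len(residual)
freqs = np.fft.rfftfreq(len(residual), 1 / fs)
print(f"largest residual component near {freqs[np.argmax(spectrum)]:.0f} Hz")
# ~120 Hz: the buried disturbance, invisible next to the full-scale tone
```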

It all comes down to a few things: scattered laser light gets into the photodetector as noise, the CD player is susceptible to external and internal vibration, and the CD itself flutters during play so much that the laser servo system can't keep up. The Reed-Solomon error correction codes are practically worthless. There is no buffering in most CD players. All the King's horses and all the King's men couldn't put Humpty Dumpty together again.
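For anyone wondering what the Reed-Solomon codes are supposed to accomplish, here's a toy sketch assuming the third-party Python package reedsolo. The real CIRC scheme on a CD is a cross-interleaved pair of Reed-Solomon codes and considerably more elaborate; whether it keeps up with a badly mistracking disc is exactly what's in dispute above.

```python
# pip install reedsolo  (third-party package; a stand-in for CD's CIRC)
from reedsolo import RSCodec

rsc = RSCodec(8)  # 8 parity bytes -> corrects up to 4 corrupted bytes per block
message = b"one frame of audio data"
encoded = rsc.encode(message)

# Corrupt a couple of bytes, as a scratch or mistracking burst might
damaged = bytearray(encoded)
damaged[0] ^= 0xFF
damaged[5] ^= 0xFF

# Recent reedsolo versions return a (message, message+ecc, errata) tuple
result = rsc.decode(bytes(damaged))
decoded = result[0] if isinstance(result, tuple) else result
print(bytes(decoded) == message)  # True: both errors corrected
```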
"Bits are bits and as long as the CD player can read them, the accuracy of the spacing doesn’t matter because everything is thrown into a buffer which removes the effect of any jitter written into the data during burning." I know I don’t agree with that logic, but for the life of me I can’t remember the technical reasons.

Could be what you forgot is that it's wrong. Bits are indeed bits, but the pits and lands aren't bits at all. Each is a word made up of a whole string of bits, with the length of the pit and the length of the land determining the string of bits (see the run-length sketch below). So it is a time function, and anything, including a lot of things GK clowns around about, can mess with the bits and thereby the sound. CD is, in other words, not digital in the sense almost everyone imagines, but analog. Only CD is analog with a whole lot of noisy circuits and whatnot in the path. So technically even worse than analog. If such a thing is even possible. Which CD proves every day that it is.

Perfect sound forever! Technophilia über alles!
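Here's the run-length sketch mentioned above: a simplified picture, on my own assumptions, of how pit and land geometry carries bits. A pit/land transition reads as a channel bit of 1, and the length of the run supplies the 0s (real EFM additionally maps each data byte to a 14-bit channel symbol, which this toy version skips).

```python
# Run lengths in channel-clock units, alternating pit and land.
# EFM constrains each run to between 3 and 11 clocks (T3..T11).
run_lengths = [3, 4, 5, 3, 7]

channel_bits = []
for run in run_lengths:
    channel_bits.append(1)                 # the pit/land transition itself
    channel_bits.extend([0] * (run - 1))   # the remainder of the run

print(channel_bits)
# If jitter, flutter, or a struggling servo makes the first run measure as
# 4 clocks instead of 3, an extra 0 appears and the decoded word changes:
# the bits really do depend on measured lengths, i.e. on timing.
```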
Yes, digital only exists as a mathematical concept. All of reality is analog (at least the reality we deal with - at the scale of Planck time and Planck lengths, things may be different). A stream of "digital" data is an analog signal that a computer has to interpret as a 1 or a 0 by deciding whether the value has changed enough, and at what moment, to be interpreted as a different bit.
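A small sketch of that last sentence, under idealized assumptions of mine (a clean bit clock, one-volt logic levels, and a fixed 0.5 V slicing threshold): recovering "digital" data means sampling an analog voltage at the right instants and comparing it to a threshold.

```python
import numpy as np

rng = np.random.default_rng(0)
bits_sent = [1, 0, 1, 1, 0, 0, 1, 0]
samples_per_bit = 8

# Build an "analog" waveform: 1 -> +1 V, 0 -> 0 V, plus noise
wave = np.repeat(np.array(bits_sent, dtype=float), samples_per_bit)
wave += rng.normal(0, 0.1, wave.size)

# Sample in the middle of each bit cell and slice at 0.5 V
midpoints = np.arange(len(bits_sent)) * samples_per_bit + samples_per_bit // 2
bits_read = [int(wave[i] > 0.5) for i in midpoints]
print(bits_read == bits_sent)  # True only while noise and timing error are small
```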

One of the ways I know that what happens before the data buffer matters is the difference in sound quality between streaming and reading a local file. I have always thought the local file sounds better than streaming even though it’s the exact same data. Just recently, I was driving down the road with a friend when they plugged the phone into the car and the sound quality was much better than usual. I asked what they did, and I found out they were playing songs off the phone’s “hard drive,” whereas I am usually streaming from Tidal. Same data, way different sound.