Why do digital cables sound different?


I have been talking to a few e-mail buddies and have a question that hasn't been satisfactorily answered thus far. So... I'm asking the experts on the forum to pitch in. This has probably been asked before, but I can't find any references for it. Can someone explain why one DIGITAL cable (coaxial, BNC, etc.) can sound different from another? There are also similar claims for Toslink. In my mind, we're just trying to move bits from one place to another. Doesn't the digital stream get reconstituted and re-clocked on the receiving end anyway? Please enlighten me and maybe send along some URLs for my edification. Thanks, Dan
danielho

Showing 7 responses by kthomas

Danielho - to answer your latest post, no, there is nothing going on beyond what you're supposing - the transport generates a digital stream of data and the receiver receives it. From a technical point of view, it's either received the same as it was sent or it's not, and the benefit of a premium cable would have to depend on its ability to let the receiver get the data "the same" more often than a lesser cable does. The objectivist point of view would suggest that the only way for the music to sound different would be for the bits to be different (or absent).

One of the severe limitations of the current transport / DAC technology is that there is no redundancy or error correction built into the transfer - the transport sends the data and "hopes" it gets there. If it doesn't, there is no way to fill in the hole or even to recognize that the data has been changed. Digital audio in the future will most assuredly be transported with a scheme that has complete error correction and redundancy, which, technically speaking, should basically eliminate any instance where the receiver gets something different than what was sent.
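To make that concrete, here's a rough sketch in Python of the kind of scheme I mean: the receiver checks each frame against a CRC and asks for it again if the check fails. The frame size, error rate, and CRC choice are all made up for illustration - nothing here reflects the actual S/PDIF spec:

```python
import random
import zlib

def noisy_channel(frame, error_rate=0.05):
    """Stand-in for a marginal cable: occasionally flips one byte."""
    if random.random() < error_rate:
        corrupted = bytearray(frame)
        corrupted[random.randrange(len(corrupted))] ^= 0xFF
        return bytes(corrupted)
    return frame

def send_with_redundancy(frames, channel, max_tries=8):
    """Deliver every frame intact by checking a CRC on the receiving end
    and re-requesting any frame that fails the check. (In this sketch the
    CRC itself is assumed to arrive intact; a real protocol protects it too.)"""
    received = []
    for frame in frames:
        expected_crc = zlib.crc32(frame)            # sent alongside the frame
        for _ in range(max_tries):
            candidate = channel(frame)
            if zlib.crc32(candidate) == expected_crc:
                received.append(candidate)          # verified copy - keep it
                break
        else:
            raise RuntimeError("channel too unreliable even with retries")
    return received

# 1000 made-up 32-byte "audio frames": the receiver ends up with an exact copy
frames = [bytes([i % 256] * 32) for i in range(1000)]
assert send_with_redundancy(frames, noisy_channel) == frames
```

The point isn't the particular checksum - it's that once the receiver can detect a bad frame and get it resent, "different than what was sent" stops being a possibility.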

Then you have to decide whether the way digital data gets transported, assuming it doesn't get changed at all, can have an effect on the way the music ultimately sounds. I'm certainly not saying that people who say they hear a difference don't, but I can't explain why it would be and I haven't seen a lot of explanations. Perhaps there is something going on that is not yet explainable.

Finally, I agree that somebody in the world has to have run a test where they "capture" the data received by the DAC and compare it to the data sent by the transport, but I have never seen anyone document such a test. With all the communications advances of the past few years, I doubt very seriously that modern transmit and receive circuits / chips are incapable of sustaining the data rates necessary for CD playback over a 1 meter cable in a controlled environment, so there should be little sonic degradation from digital data being lost or changed between transport and DAC.
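If anyone ever does run that test, the comparison itself is the easy part - something like the sketch below, assuming you could capture both streams to files (the file names and the capture method are hypothetical):

```python
def first_difference(sent_path, received_path, chunk_size=4096):
    """Compare two captured streams byte-for-byte; return the offset of the
    first mismatch, or None if they are identical."""
    offset = 0
    with open(sent_path, "rb") as sent, open(received_path, "rb") as received:
        while True:
            a = sent.read(chunk_size)
            b = received.read(chunk_size)
            if a != b:
                for i, (x, y) in enumerate(zip(a, b)):
                    if x != y:
                        return offset + i
                return offset + min(len(a), len(b))   # one capture ended early
            if not a:                                 # both exhausted, no mismatch
                return None
            offset += len(a)

# hypothetical captures of the transport's output and the DAC's input
diff = first_difference("transport_output.bin", "dac_input.bin")
print("bit-perfect" if diff is None else f"first mismatch at byte {diff}")
```

The hard part, of course, is tapping the receiver side without disturbing it - which may be why nobody seems to have published such a test.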

1439 - I agree with your point 100% and have made it myself many times - if differences are heard using different cables, the bits must be being altered (a bad thing), and if they are, then we as consumers should be demanding a better technology for the interface, not spending a bunch of money on cables and transports. As you say, you can transfer bits perfectly in the computing world with good-quality but relatively cheap cabling and absurdly inexpensive hardware. If a DAC with an ethernet input interface were available, it would be easy to set up a whole-house music distribution system that performs as well as (or better than, if transport technology is as spotty as it appears to be) the best transports. Hopefully such an interface isn't too far off in the future.
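For what it's worth, the sending side of such a setup could be almost nothing. Here's a sketch, assuming a hypothetical DAC that accepts raw PCM over TCP at some made-up address and port - TCP's acknowledgements and retransmissions take care of getting the bytes there intact:

```python
import socket

DAC_ADDRESS = ("192.168.1.50", 9000)   # hypothetical ethernet-input DAC
CHUNK = 4096                           # bytes of raw PCM per write

def stream_pcm(path, address=DAC_ADDRESS):
    """Push a PCM file to the DAC over TCP. The protocol guarantees the bytes
    arrive intact and in order, or the connection errors out - there's no
    silent corruption in between."""
    with socket.create_connection(address) as conn, open(path, "rb") as pcm:
        while chunk := pcm.read(CHUNK):
            conn.sendall(chunk)        # blocks until the OS has taken it all

# stream_pcm("track01.pcm")
```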
Redkiwi - you're right that having time synchronization requirements makes the environment more demanding. However, as long as you have 1) a redundancy scheme and 2) sufficient resources above and beyond the demands of the basic application to support the redundancy scheme, then you can effectively eliminate the time-synchronous demands. The Levinson DAC / Discman buffering doesn't eliminate it because there's still no redundancy - if they send the data and it's not received correctly, there's no recovering the lost data. But if I have a 100Mbit ethernet connection and have to keep up with only the bandwidth necessary for CD playback, I can send / resend the data dozens of times if need be and still keep up. If I can transfer files across a LAN perfectly accurately at 10Mbit/sec, I should be able to transfer music "files" perfectly at a rate of 1.5Mbit/sec. If current transport / DAC interconnect technology can't perform this same feat, we should demand better.
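Spelling out the arithmetic (Red Book CD audio is 2 channels x 16 bits x 44,100 samples/sec, about 1.4Mbit/sec - and this ignores framing and packet overhead):

```python
# Red Book CD audio: 2 channels x 16 bits x 44,100 samples per second
cd_bits_per_second = 2 * 16 * 44_100            # 1,411,200 bits/s, ~1.4Mbit/s

for link_name, link_bits_per_second in [("10Mbit ethernet", 10_000_000),
                                        ("100Mbit ethernet", 100_000_000)]:
    headroom = link_bits_per_second / cd_bits_per_second
    print(f"{link_name}: each second of audio could be sent ~{headroom:.0f} times over")

# 10Mbit ethernet: each second of audio could be sent ~7 times over
# 100Mbit ethernet: each second of audio could be sent ~71 times over
```

That headroom is exactly what makes a redundancy scheme with buffering workable: there's time to resend anything that arrives damaged and still stay ahead of playback.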
Why is it that, when refuting the argument that there is no technical reason for a digital cable to impart a sonic difference, those who hear a difference resort to the "well, your system must not be resolving enough and/or your hearing not good enough or well-trained enough" line? Why not, "you're claiming I'm pre-disposed to hear a difference, while I think you're pre-disposed to NOT hear a difference"? I would guess that if you took all the systems owned by those who say they don't hear a difference and compared them to the systems of those who do, the quality and resolving capabilities would be quite similar. In any case, that point of view always detracts from the discussion in general, especially when it's layered on top of "I don't know why it sounds better, it just does." If the system is indeed more resolving, offer up a hypothesis on how a system on which such differences can be heard effects this improvement, so we can all learn from it.
Blues_man - Given the inadequacies of the CD protocol and your experience in the area of digital audio data transfer interfaces, what are the benefits brought to the environment by using a high-end digital interconnect over a basic, well-built digital interconnect? That was really the question at the beginning of this post, and I think we're all still curious. -Kirk
Maybe I'm still missing something - I've been known to be dense before - but my take on timing errors would be that they cause bits to be misinterpreted: since sender and receiver have subtly different experiences with their independent clocking mechanisms, the receiver interprets things slightly askew and gets a different "answer" than the sender sent. The result would be different bits fed into the DAC than if there had been a perfect transfer. Is there something to "timing errors" beyond this that I'm missing?
It is asynchronous, and it's a fixed rate. In a "normal" setup, sender and receiver each have their own clocks that are supposed to work identically. I've never studied the CD interface, so somebody who has can correct me if I misstate something, but in typical asynchronous communications, each byte has a "start" bit and a "stop" bit surrounding the eight bits of data, so timing errors would have to be severe enough to cause problems between the time the start bit occurs and the time the stop bit arrives. Since it's a fixed data rate, it takes just as many bits to transmit high frequencies as low frequencies, so the chance of error should be the same. In any case, I can't see any way that the cable would make a difference in the ultimate delivery of the bits based on timing errors, as long as it's in the usual "good working condition".
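To illustrate what I mean - and again, this is generic asynchronous serial framing, not necessarily how the CD interface actually works - here's a little simulation of a receiver whose clock runs slightly fast, resynchronizing at each start bit and sampling the middle of each bit period:

```python
def make_line(byte):
    """Ideal sender: start bit (0), eight data bits LSB-first, stop bit (1).
    Returns a function from time (in bit periods) to the line level."""
    bits = [0] + [(byte >> i) & 1 for i in range(8)] + [1]
    return lambda t: bits[int(t)] if 0 <= t < 10 else 1   # idle line sits high

def receive_frame(line, clock_error=0.02):
    """Read one frame with a receiver clock that runs `clock_error` fast.
    The receiver resynchronizes at the start bit (t=0) and then samples the
    middle of each bit period as measured by its own, slightly-off clock."""
    period = 1.0 + clock_error
    data = 0
    for bit_index in range(8):
        sample_time = (bit_index + 1.5) * period   # skip the start bit, aim mid-bit
        data |= line(sample_time) << bit_index
    return data

# With the receiver clock 2% fast, the drift accumulated by the last data bit
# is only ~0.17 of a bit period - nowhere near the half-bit needed to land in
# the wrong bit - so the byte comes through correctly:
assert receive_frame(make_line(0xA7), clock_error=0.02) == 0xA7

# It takes roughly a 6% clock error before the last data bit's sample slips
# into the stop bit and the byte gets misread:
assert receive_frame(make_line(0x53), clock_error=0.06) != 0x53
```

In other words, because the receiver re-aligns on every start bit, clock error only has one frame's worth of time to accumulate, which is why it would have to be severe before bits actually get misread.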

Or I have no idea what I'm talking about and would love to understand it with you, Danielho. I definitely don't have any idea how a DAC works electrically, certainly not on the analog side - to me, the problem is broken down into "chunks" where getting the samples delivered to the input of the DAC is separate from what the DAC does with those samples - I'm assuming that with a known stream of samples, the DAC will produce a known output.
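Here's the toy model I have in my head, just to make the "chunks" idea concrete - the conversion function is a made-up stand-in, not how any real DAC works electrically:

```python
def toy_dac(samples, full_scale_volts=2.0):
    """Purely a function of its input: map signed 16-bit samples to voltages."""
    return [s / 32768 * full_scale_volts for s in samples]

delivered_once  = [0, 16384, -16384, 32767, -32768]
delivered_again = [0, 16384, -16384, 32767, -32768]

# The "chunks" assumption: if the transport and cable deliver identical
# samples, the conversion stage sees identical input and produces identical
# output.
assert toy_dac(delivered_once) == toy_dac(delivered_again)
```

-Kirk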