Why do digital cables sound different?


I have been talking to a few e-mail buddies and have a question that isn't being satisfactorily answered thus far. So...I'm asking the experts on the forum to pitch in. This has probably been asked before, but I can't find any references for it. Can someone explain why one DIGITAL cable (coaxial, BNC, etc.) can sound different from another? There are also similar claims for Toslink. In my mind, we're just trying to move bits from one place to another. Doesn't the digital stream get reconstituted and re-clocked on the receiving end anyway? Please enlighten me and maybe send along some URLs for my edification. Thanks, Dan
danielho
Well, I never believed that there could be differences between digital cables - strictly ones and zeros. But then I thought about it and realized that there's a significant difference between the sound of coax and Toslink in my system. Hmmm - still ones and 0's, but the cable DOES make a difference after all.

I just ordered the famous Stereovox digital cable from Cable Company and fully expect it to tighten up the sound from the Monster Cable coax cable I'm using. Hopefully it will be good enough for 2 channel music, as even the Monster Cable isn't all that bad.
Just got the Stereovox, and there's really a night and day difference. The Monsters are warmer, but details are lost and smeared - not bad for music listening, but they were just a bit too euphonic. The Stereovox (burned in by the Cable Co) has excellent detail, depth, soundstage, imaging, etc., but it's not without faults either. It can sound bright, edgy, and digital-sounding.

As it is, I'll probably keep both, using the Monster for older brighter soundtracks. But for incredible steering, imaging and surround effects on good soundtracks, I'll keep the Stereovox. And for music, I'll just keep using analog interconnects.

So digital cables sound pretty much the same? To my ears the difference is as dramatic as with any interconnect I've used. In fact, they're almost polar opposites of each other sound-wise, as if each parameter sits at the opposite end of the same spectrum.

Hmm, anyway, I'm now a confirmed believer, though I can't help being a little disappointed by the Stereovox after all the hype ("most amazing deal in audio - ever", etc.). I'm hoping the Cable Co just didn't cook them long enough or something, since the bright digital sound I'm hearing is a lot like analog cables that aren't fully broken in. We'll see.
This is a very interesting post. Everyone who posted a comment is right; well, maybe except the one about Nicole Kidman.

Ideally, it shouldn't make a difference which digital cables you use, but really, just because it is ones and zeros, it is still an analog signal. It has a slew rate, overshoot, undershoot, and ringing. Any mismatch in cable impedance, terminations, or source impedances can cause anomalies in the digital word. When it gets bad enough, the 1's become zeros and the 0's can become ones. The same is true for optical links, too.
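For anyone who wants rough numbers on how much of the signal a mismatch sends back, here is a minimal Python sketch of the standard transmission-line relation gamma = (Zload - Zline) / (Zload + Zline). It is purely illustrative; the example load impedances are assumptions, not measurements from anyone's cable.

    import math

    def reflection_coefficient(z_load, z_line=75.0):
        # standard transmission-line relation: gamma = (Zload - Zline) / (Zload + Zline)
        return (z_load - z_line) / (z_load + z_line)

    def return_loss_db(z_load, z_line=75.0):
        gamma = abs(reflection_coefficient(z_load, z_line))
        return float("-inf") if gamma == 0 else 20 * math.log10(gamma)

    # illustrative loads: a decent 75 ohm coax, a 50 ohm RCA-style mismatch,
    # and a 110 ohm AES/EBU cable pressed into coax duty
    for z in (73.5, 50.0, 110.0):
        print(f"Z = {z:5.1f} ohm   gamma = {reflection_coefficient(z):+.3f}   "
              f"return loss = {return_loss_db(z):5.1f} dB")

The point is just that a 110 ohm AES/EBU cable or a 50 ohm termination bounces back roughly 20% of the incident edge, while a decent 75 ohm coax reflects around one percent.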

I wish I could post a picture here; I have some eye diagrams of high speed digital signals in the 2.5 GHz and 5.0 GHz range which are very enlightening. When the cable is lossy enough, there is very little difference between a one and a zero.

Do note that it is datastream dependent; that is, a long series of ones won't degrade in the same way as several ones followed by a single zero, etc.
A digital waveform can be very badly distorted, as viewed by the eye, but a well designed line receiver will still properly distinguish ones from zeros. Furthermore, if an error is made, or even a group of errors, perhaps due to some unrelated power glitch or scratch on the CD, a data stream with error correction encoding (like a CD) will still be recovered exactly. Them is the facts.
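To make the "recovered exactly" point concrete, here is a toy error-correction example in Python. It uses a Hamming(7,4) code, which is far simpler than the cross-interleaved Reed-Solomon coding a CD actually uses, so treat it only as an illustration of the principle: redundancy lets the receiver put a corrupted bit back.

    # Hamming(7,4): encodes 4 data bits with 3 parity bits and corrects any
    # single-bit error. (CDs use a much more powerful scheme; same idea though.)

    def encode(d):                       # d = [d1, d2, d3, d4]
        d1, d2, d3, d4 = d
        p1 = d1 ^ d2 ^ d4
        p2 = d1 ^ d3 ^ d4
        p3 = d2 ^ d3 ^ d4
        return [p1, p2, d1, p3, d2, d3, d4]   # codeword positions 1..7

    def decode(c):                       # c = 7-bit codeword, possibly corrupted
        s1 = c[0] ^ c[2] ^ c[4] ^ c[6]   # checks positions 1,3,5,7
        s2 = c[1] ^ c[2] ^ c[5] ^ c[6]   # checks positions 2,3,6,7
        s3 = c[3] ^ c[4] ^ c[5] ^ c[6]   # checks positions 4,5,6,7
        pos = s3 * 4 + s2 * 2 + s1       # 0 = no error, else 1-based error position
        if pos:
            c = c[:]
            c[pos - 1] ^= 1              # flip the bad bit back
        return [c[2], c[4], c[5], c[6]]  # recovered data bits

    data = [1, 0, 1, 1]
    cw = encode(data)
    cw[4] ^= 1                           # corrupt one bit "in the cable"
    assert decode(cw) == data            # the data comes back bit-perfect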
Precisely. Which is why it is so curious that digital cables do seem to sound different from one another.
Drubin...So let's turn this question over to the psychologists. Electrical engineering has no explanation.
Because the components the cable connects (the transport and the DAC) are themselves connected to the AC line via possibly different power cords...and we know from a thread elsewhere that AC power cords have an effect on frequency response (digital or analog).

Geez guys it was soooo easy ;)

Cheers and happy listening!

DPac
Maybe it's not what goes through them; how about what "gets out of them", i.e., emissions => noise...
Less makes for more. More musical results from less error correction. That be a fact Jack...Tom
Actually, if a digital waveform is distorted to the human eye on a scope, then it is distorted. There are built-in test parameters you can set to determine whether the waveform is in spec or out of spec over a period of time.

Check out this link:
http://www.scientificarts.com/logo/logos.html

You don't need to know the math to see what they are talking about. There are some eye diagrams there which are very "open", and hence have excellent data transmission with no errors, and some which are "closed", which are filled with errors.

Just 'cause it's digital doesn't mean it isn't lossy.
Spatialking...The shape of a digital waveform is not itself the information being communicated (a "one" or a "zero"). The waveform is sampled at some specific instant of time to determine if it is above or below some threshold. Where the signal goes in between sample times is completely irrelevant. All that matters is that the sample time be chosen so that it is well away from the leading and trailing edges of the waveform, where the signal level is changing rapidly and might be misread.
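Here is a small Python sketch of that idea (my own illustration, with made-up ringing numbers, not anything from the posts above): a waveform with exaggerated overshoot and ringing still decodes perfectly as long as each bit is sampled at mid-cell, away from the edges.

    import math, random

    BIT_PERIOD = 100           # arbitrary "samples" per bit
    bits = [random.randint(0, 1) for _ in range(32)]

    def distorted_waveform(bits):
        # Ideal levels 0/1, plus exaggerated decaying ringing after each transition.
        wave = []
        prev = bits[0]
        for b in bits:
            for n in range(BIT_PERIOD):
                level = float(b)
                if b != prev:
                    level += 0.6 * math.exp(-n / 15.0) * math.cos(n / 3.0) * (1 if b else -1)
                wave.append(level)
            prev = b
        return wave

    wave = distorted_waveform(bits)
    # Sample each bit at its midpoint and slice against a 0.5 threshold.
    recovered = [1 if wave[i * BIT_PERIOD + BIT_PERIOD // 2] > 0.5 else 0
                 for i in range(len(bits))]
    assert recovered == bits   # the "ugly" waveform still carries the data exactly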

In a complex digital system that I worked on (Missile guidance) we had dozens of analog signals being sampled and digitized, and many digital signals being used to generate pulses of power (amperes) to control things like gimbals. All in the space of a 9 inch sphere, so you can imagine the electrical noise environment! By very careful selection of sampling times and signal sequences it all worked fine, although you would never have thought it possible if you looked at the raw analog signals. The bottom line is that a sampled data system does not exist except at the sample times.
Missile guidance systems? OK, from now on I will definitely be nicer to Eldartford.
Okay, now what can I expect to hear when I use a Balanced AES/EBU cable rated at 110 ohm (in place of a digital coax at 75 ohm)?

And will the highly contested and aforementioned replies to Dan's original post apply?

--ksr
Hi folks, I have been using two top of the line AES/EBU digital cables from Pure Note (Paragon Enhanced) and Purist Audio (Dominus). After a while, I put back my old Wireworld Gold Starlight II S/PDIF, because I prefer the Wireworld over both the AES/EBU cables. Why do I prefer the S/PDIF? With the Wireworld the sound is more upfront, with more PRaT. The midrange is fuller. But both AES/EBU cables had better detail retrieval. Is this a common difference between AES/EBU and S/PDIF? Thanks.

Chris
Dazzdax,

IMO, yes, it is a common difference between coax and balanced digital cables. You've essentially described the difference I hear almost every time between spdif and aes connections. Occasionally, the spdif will be better top-to-bottom, but usually it's just superior from the mids on up. Of course, as always, it depends on the system. If the loudspeakers in use have too much energy in the aforementioned range, an aes connection might be just the answer. There's little doubt in my mind that having to 'deal' with 3 wires instead of 2 makes designing an aes cable very difficult.
The transmitting and receiving chips in the transport and dac are not identical for unbalanced and balanced modes in most cases. This may partially account for the difference you're hearing.
Longer digital cables sound better than short ones, according to my ears. My listening tests were with Illuminations SPDIF and PS Audio Statement AES/EBU, comparing 1/2 meter to 2 meter lengths. Has anyone else also found this to be true? Anyone have a technical explanation?
Warjarrett,

Yes, there are technical explanations as to why a longer digital cable sounds 'better', unfortunately, there are also very good explanations as to why shorter sounds 'better'.
Frankly, I think it's pretty clear from my listening tests that longer digital cables sound warmer and fuller, while shorter cables are brighter and more revealing. Ray Kimber has stated the same thing.
I don't think it's worth worrying about, as long as you find the cable that performs best in your system.
Why is everything we hear unmeasurable?
If you can hear a difference, couldn't we come up with a difference that we could measure?
Gee.......how did I ever miss this one? I gotta pay more attention to what goes on around here.

OK.....briefly.........the impedance of the cable, transport and receiver have to be matched. In the case of SPDIF, that means 75 ohms. 75 ohms means BNC, and not RCAs. (75 ohm RCAs cannot exist; the laws of physics say so.)

Anyway......if there is ANY impedance mismatch, part of the signal will bounce back and forth. (Reflections......in nerd-speak. ) The reflected signal can cause disturbances in the clock recovery. (Don't ask me to 'splain why......it is just the nature of a poorly designed protocol.) When you have a system where all 3 components are matched, impedance-wise, then you should not be able to hear any differences when you futz around trying different combinations.

Sadly, most stuff is not even close to being 75 ohms. Not cables, not receivers, and transports are usually far off the mark. Transports are usually designed to produce the least amount of EMI, and that spells disaster for SPDIF.

Receivers usually suffer from input stages that use RS422 inputs, which have lots of hysteresis. Which is a form of regenerative feedback......which means lots of reflections from reactive elements being coupled back into the input, blah, blah....

(A very famous maker of digital gear once designed a DAC that had a 5-pole filter on the input. Any wonder why it sounded like crud? And to think that they actually offered me a job once, to help fix all their problems. How dumb did they think I am?)

As for cables.......well......RCA jacks and mystery coax...or worse.........twisted pair.......make getting an impedance match darn near impossible.

Which gives rise to an entire industry of after-market kludges, designed to move money from your wallet and into someone else's.

(Actually....a buddy of mine used to make probably the only one that ever worked as claimed. Only to see it skewered on some DIY website. My, my.......)

Did that answer the question?
Not only do cables sound different from each other, but they are also directional.

OK.....I know some of you are thinking that I have spent too much time out in the hot Texas sun. Yes, I have, but that is beside the point. For many years, I was the "wire and cable guru" in the Advanced Technology Lab for a major telecom outfit. To support this claim, I have provided links to some data on different cables that I have here at the lab.

For those of you who aren't interested in looking, here is the short version:

All cables have impedance perturbations along their length. Where they are (in relation to the load and source end) and their magnitude, along with anomalies where the shield and centre conductor are separated to go into the connector, will have *some* impact on impedance, and therefore on how they measure. Hey......if they measure differently, it is not a hard stretch to conclude that they will sound different. On SPDIF, this is one place where "If it measures different............."

Granted, the differences, both measured and audible, are minuscule, but they do exist.

Before I go any further, I should explain something about coax cables in general. A bog standard 75 ohm coax will have an impedance variation of +/- 3 ohms. IOW, the cheap stuff that you may buy at "rat shack". A decent 75 ohm cable will have a variation of +/-1.5 ohms. A precision one will have a variation of +/-0.75 ohms. Most of you will never see one that good. The only place they turn up in my work is for calibration kits for network analysers.

So.....for those of you who want to see what I am talking about, here you go.....

First is the cheap cable that we used to include with our D/A converter box. The U-Byte 1 cable was not intended to be the greatest thing ever invented (we sold them for $20 for anyone who wanted one), but it outperformed most cables in the <$200 price range. (This was back in '92 or so. I think.......) Anyway, the reason it worked so well was not its absolute impedance (which was in the +/-1.5 ohm range), but its 6 m length. It was that ol' "If the reflections arrive after the decision point" philosophy at work.
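To put rough numbers on the "reflections arrive after the decision point" idea, here is a short Python sketch. The velocity factor, edge rise time, and cable lengths are assumptions chosen for illustration, not measurements of the U-Byte 1.

    # Rough numbers behind the "reflections arrive after the decision point" idea.
    C = 3.0e8                 # speed of light, m/s
    VELOCITY_FACTOR = 0.66    # typical for solid-PE coax (foam dielectrics are higher)
    RISE_TIME_NS = 25.0       # assumed ~25 ns edge from the transport

    def round_trip_ns(length_m, vf=VELOCITY_FACTOR):
        # time for an edge to reach the far end, reflect, and come back
        return 2 * length_m / (vf * C) * 1e9

    for length in (0.5, 2.0, 6.0):
        t = round_trip_ns(length)
        verdict = ("lands on the edge itself" if t < RISE_TIME_NS
                   else "arrives after the edge has settled")
        print(f"{length:4.1f} m cable: reflection returns after {t:5.1f} ns -> {verdict}")

    # At 44.1 kHz, S/PDIF runs at 64 x 44100 = 2.8224 Mbit/s, i.e. a ~354 ns bit
    # (with ~177 ns half-bit cells in the biphase-mark code), so there is room
    # to place the sampling instant away from where the reflection lands.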

If you look closely at the graph, you will see how there are minor differences between the green and black traces at the higher frequencies. One is one way.........the other the other way. (Which direction is right?.........both or neither, depending on your outlook in life.)

A further bit on the graphs..........

As the length of the cable (6 m, in this case) becomes significant in relation to the frequency, you get lobing on the graphs. For this reason, using a Vector Network Analyser is not the preferred way to accurately measure cable impedance. That is best done on a Time Domain Reflectometer. If I get the time, I will make those measurements available.

So......it is best to read the impedance at the lower frequencies. In this case, the reflection is -40 dB, and that translates to an error of +/-1.5 ohms.
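If you want to check that conversion yourself, here is a quick Python sanity check. It uses the small-mismatch approximation gamma ~= deltaZ / (2 * Z0), so deltaZ ~= 2 * Z0 * gamma; the return-loss values fed in are simply the ones quoted in these posts.

    # Sanity check on the "-40 dB return loss = +/-1.5 ohm" conversion above.
    Z0 = 75.0

    def impedance_error_ohms(return_loss_db, z0=Z0):
        gamma = 10 ** (-abs(return_loss_db) / 20.0)   # |reflection coefficient|
        return 2 * z0 * gamma                         # small-mismatch approximation

    for rl in (40, 45, 50):
        print(f"{rl} dB return loss  ->  about +/-{impedance_error_ohms(rl):.2f} ohm on a 75 ohm line")
    # Prints roughly +/-1.50, +/-0.84 and +/-0.47 ohm, which lines up with the
    # +/-1.5, +/-0.85 and +/-0.5 ohm figures quoted for the cables described here.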

OK, next is the U-Byte 2 cable. This is a better cable. It has a reflection of -50 dB at the low frequencies, which gives us an impedance error in the +/-0.5 ohm range. Not bad for something that I bought from Belden for a buck a foot. On this one, while the errors are less, they are more apparent with direction. This is easily explained, as a readily available (and affordable) 75 ohm BNC connector was not to be found. I used a good BNC jack, but I had to modify (by hand) each and every one in order to make it fit the Belden cable. Yes, done that way, there is an obvious lack of precision. Still.........it was and is a damn good cable.

*Which is no longer made or available. Please do not ask to buy one, as there are none to be had, except the few that are still floating around in the lab. And no, I am not parting with any. Their inclusion is for reference purposes only.* In fact, we are not in the cable business at all. I will measure any cables supplied for testing, but will not make any, for anyone, at any price.

OK, the obvious question now becomes "What do other cables look like?"

Well, here are two. Not designed for SPDIF, but 2 cables that I have sitting on my bench. So I measured them.

First is a generic RG-59 cable. It is only 1 m long, so there is much less lobing at the higher frequencies (due to its shorter length). The accuracy is around +/-0.85 ohms, which is pretty darn good for something that just happens to be sitting around. If it were a bit longer, it might sound pretty good. Maybe. (The TDR would tell more.)

Then we have a Suhner cable that is clearly marked on its jacket that it is +/-1.5 ohms. It also has a return loss of -45 dB. Better than advertised. However, you will notice that it has less lobing, and is more accurate at the higher frequencies. Hmmm......maybe I should connect a few of them in series and listen to them.................

Well, maybe not. I have other things to do. None of which involve audio.

Hope this gives some of you something to ponder next time you plunk down $$$$$$$ on digital cables.

Enjoy, and Happy Listening!
Hey, U Byte! Oh yeah, U Byte 2! Well I'm thinking about going one box. It's the only solution for this insanity.
A guy who worked for our dealer in Chicago came up with the names. Yeah.....that is pretty much what he was thinking when he did.

SPDIF is a less than optimal solution. To get the jitter on the clock to the level of a one-box solution, it takes a lot of extra parts, in the form of secondary phase-lock loops. And/or feeding the clock back to the transport.

We made one outboard D/A box. Went back to making one-box players after that.
Ar_t...One interesting experience in my career as a systems engineer on Submarine-Launched Ballistic Missiles was a visit aboard a 20 year old boat to evaluate the ability of the existing wiring from tube to fire control (as much as 300 feet) to transmit digital data at the higher bandwidth to be used by a new system. We injected pulses at one end and looked at them with a scope at the other end. My God! Were they ever distorted. All kinds of spikes and overshoots. But, and here is the point, the information transfer over the wires using those sorry-looking pulses was flawless.

You have described how the digital waveforms are distorted by improper impedance, stub terminations, etc., but it is still unclear how an analog waveform reconstructed from digital information could be affected by the shape of the digital pulses.
It has to do with the way the clock is extracted from the SPDIF signal. There is a high degree of correlation in it. This leads to a great deal of data-related artifacts in the recovered clock.

(If someone were to hook up some sort of listening doo-hickey to the point in the circuit where the PLL filter is, they would hear a very distorted version of the programme material.)

Any reflections in the data stream manifest themselves as a change in the data-dependent jitter. Not so much in the actual amount, but in the frequency distribution. Absolute jitter numbers by themselves are of little use without the corresponding spectral distribution. Close-in jitter, say <10 Hz, is more detrimental than jitter at 1 kHz. So, as the reflections alter the decision point, they alter the spectral distribution.
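For a feel of the magnitudes involved, here is an illustrative Python sketch of the textbook result that sinusoidal sampling jitter of peak amplitude t_j on a tone at f0 puts a pair of sidebands roughly 20*log10(pi * f0 * t_j) dB below the tone. The jitter values are assumptions for the example; and, as noted above, the level alone says nothing about where in the spectrum the jitter energy lands.

    import math

    def sideband_level_dbc(f0_hz, jitter_peak_s):
        # small phase modulation: each sideband sits at 20*log10(beta/2) dBc,
        # where beta = 2*pi*f0*t_j is the peak phase deviation in radians
        beta = 2 * math.pi * f0_hz * jitter_peak_s
        return 20 * math.log10(beta / 2)

    for tj in (10e-9, 1e-9, 100e-12):
        print(f"{tj*1e12:7.0f} ps peak jitter on a 10 kHz tone -> "
              f"sidebands at {sideband_level_dbc(10e3, tj):6.1f} dBc")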

I know.....a lot of technical mumbo-jumbo, but that is it in a nutshell.

If the clock and data were sent via separate cables, this sort of problem would not occur. Which is why one-box solutions will always be better.

What Eldartford describes is basically a Time Domain Reflectometer. I hope to have some pictures of different cables soon. (I need to construct a hood for my camera, so that I can photograph the screen. The TDR I use was made in '63. Back before they had a data port on the back to pull out the data in a form you can make a JPEG with.)
Ar_t...The system I described does have a separate "clock" line. Actually there are three lines, One, Zero, Strobe.

1, 0, 1 is a one.
0, 1, 1 is a zero.
Any other set is invalid.
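A minimal Python sketch of that three-line scheme (a hypothetical decoder, just to show the logic): with data and strobe on separate wires, there is no clock to recover from the data stream at all.

    def decode(one, zero, strobe):
        if strobe != 1:
            return None            # nothing valid until the strobe asserts
        if (one, zero) == (1, 0):
            return 1
        if (one, zero) == (0, 1):
            return 0
        return "invalid"           # any other combination is rejected

    samples = [(1, 0, 1), (0, 1, 1), (1, 1, 1), (0, 0, 1)]
    print([decode(*s) for s in samples])   # [1, 0, 'invalid', 'invalid']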
I've just read this thread with great interest.

I did some tests on my MBL transport and DAC (1621A/1611F) which I find interesting. It has multiple inputs and outputs, so I can run several cables in parallel and switch the input on the DAC via the remote, thereby making A/B very easy. It switches instantly too so there is no interruption to the music, making it even easier to detect any audible differences.

I ran a total of 4 cables in parallel, each on its own interface: 1 AES/EBU (either Synergistic Research Precision Reference (yeah, a regular XLR cable) or Audio Metallurgy GA-0 (a digital cable)), 1 RCA (I tried Gabriel Gold Revelation, Zu Varial, cheapo $1 plastic cable - all three regular ICs, and one digital $100 Monster cable), 1 Toslink (cheap Monster cable from the mid 90s) and 1 BNC (DIY copper, specced as a digital 75ohm cable).

Playing around with these in various configs and switching inputs back and forth (both blind and active, I used some fellow audiophile friends) - there is NO difference whatsoever between ANY of these cables.

I find this pretty mysterious to be honest. I've played with a lot of cables in the past and found many worthwhile improvements. But with this MBL duo, it's all the same.

Can someone perhaps shed some light on what is happening here? What have the Germans done inside the boxes to pull this off? I have no idea, and I'm not qualified to guess either - science isn't my forte. :)
Hi Osgorth.

German technicians are well-known perfectionists: the ME-109 fighter plane, Panzer tanks, the 88mm multi-purpose cannon, the WW II standard 7.62 machine gun (which the US Army adopted as the M60 in the late 50s and is still using 50 years later), the early Telefunken TVs and radio tuners, the VW bug, Porsche racing cars (especially the 956 and the 962), anything optical, etc...

Perhaps MBL components are perfect and therefore not subject to improvement by cables.
Mike19
Did you test only by doing a rapid A/B between cables? In my experience, that methodology usually yields a "no difference" conclusion.
Osgorth, I think you've just done the test many here may have preferred you hadn't. Well done! From a technical point of view, there's no reason for digital cables to sound different provided they're made to a good basic standard, but there are plenty of audiophiles with bat ears who can 'hear' some amazing differences in various cables. I'm not surprised at your findings at all.

As for Drubin's comment about A/B comparisons; I've written in previous threads that this is the ONLY way to truly listen for discrete differences, as it allows the brain to do a true real-time comparison.

Long listening tests, where you listen for a longer period, then change cables and listen again, require you to compare the new sound to your memory of the first, which is less accurate IMHO. Not only are you relying on your memory of the sound, but you can introduce variations due to shut-down and restart of components, unplugging and reconnecting cables, and so on. Such comparisons work for hearing gross changes, but not the very small variations in digital cable sound.
Allow me to defend my position with this observation from personal experience: switching between tape and monitor when dubbing to a cassette deck, one might be hard-pressed to hear much difference. One would be very wrong.

"Playing around with these in various configs and switching inputs back and forth (both blind and active, I used some fellow audiophile friends) - there is NO difference whatsoever between ANY of these cables."

This matches my experience. I maintain that properly engineered and matched equipment should not give a hoot about cabling - as long as it is adequate and you don't have a ground loop, RF/EM problem or contact problem. I suspect that the great differences reported stem from running equipment at the limits of its ability - amps that are already clipping and loads that are mismatched to begin with....situations where the slightest difference might influence the sound enough to be audible. Just two cents...and I admit I could be completely wrong.
Sometimes there is much more involved than meets the ear while doing quick a/b comparisons in wire. Geometry of the wire, conductor material and dielectric material need to find their relaxed neutral state of mind once they have been disturbed and reattached again. This I found to be true of cables that have been playing in the same position in the same system for a year or more. They need to settle back which may take a 1/2 hour or more. Tom
I've done a test like Osgorth did, with optical AT&T as a reference (an A-B-C test). First: the optical outplayed any "high-end" (high-priced) coax, actually no match. Even if we could observe some minor differences between the coaxes, they were totally outplayed by the clean, open sound from the optical cable.
Tried out some DIYs too, air-insulated coax and such, but none came close to the AT&T digital. Until I made up a coax from my reference IC, the TV coax Vivanco KX-710. And for some reason this coax just does it all right. Now we were up in the same league as the optical, and after some switching we could observe that the Vivanco coax was definitely a bit cleaner at both ends, much the same way it outperforms all other ICs when used in twin configuration.

Even if I have my thoughts about why they sound different, I won't try to come up with an answer or a guess. But one thing is clear: there's a lot more to sound reproduction than what those "theory-heads" come up with :P
Wire properties can certainly affect digital pulse characteristics. But, up to the point where a data "one" can be misinterpreted as a "zero", pulse characteristics don't affect the information which goes into the D/A converter.
So I don't find it "mysterious" at all that various cables sound the same. The mystery is why some folk think they sound different.
Eldartford - if they affect digital pulse characteristics (different bandwidth), then they produce different amounts of jitter. Shielding also affects the jitter, since noise causes changes in threshold levels. Some DACs are not sensitive to jitter at all (like the Benchmark DAC1), but others are. Impedance matching also plays a part. Many cables end up with an RCA connector that is not 75 ohm.
The question is: why do people pretend to hear a difference?

Or why do they THINK they heard a difference?

No need to look further
I don't know if this response has been given before, but this is what I know. For digital signal paths, the frequencies of concern (pulse rate, pulse edges, etc.) are outside of (higher than) the normal audio domain. In fact, they behave more like RF. So, the digital cable is essentially a specialized RF cable. At RF frequencies it is the cable impedance, especially at the interfaces (connectors), that is important, because mismatches with the connected components can cause signal reflections. What this means, obviously, is that phantom bits may be ADDED to the digital stream. So, a good quality digital cable is just as important, if not more so, than your analog interconnects!
Added bits would be an extreme case of impedance mismatch. It's more about preserving the shape of the pulses to avoid crossing thresholds at different times, which causes jitter. Jitter creates sidebands not harmonically related to the root frequency (audible). Even a double phase-locked loop cannot compensate for fast changes.
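As a back-of-the-envelope for how a degraded pulse shape turns into jitter: if an edge crosses the receiver threshold with a finite slope, a voltage error delta_v shifts the crossing time by delta_t = delta_v / (slew rate). The swing, rise times, and noise level in the Python sketch below are assumptions for illustration only.

    SWING_V = 0.5          # S/PDIF is nominally 0.5 V p-p into 75 ohm

    def edge_jitter_ps(noise_mv, rise_time_ns, swing_v=SWING_V):
        slew = swing_v / (rise_time_ns * 1e-9)        # volts per second
        return (noise_mv * 1e-3) / slew * 1e12        # crossing-time shift in ps

    for rise in (5, 25, 100):                          # fast, typical, badly degraded edge
        print(f"{rise:3d} ns edge, 10 mV of noise -> about {edge_jitter_ps(10, rise):6.0f} ps of jitter")

The slower (lossier) the edge, the more any given amount of noise or reflection smears the crossing time.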
"The question is : why people pretend to hear a difference ?

Or why they THINK they heard a difference ? "

Anyone can hear the difference if just the right and wrong cables are put up against each other. But most cables are about the same, no big diff.
And the confusion becomes complete when folks start to make up theoretical "answers" as to why.
Palerider - if you cannot understand, just read more.

Myself, if I cannot hear the difference, I modestly admit it without saying that everybody else only "THINKS" they hear a difference (that would be arrogance on my part).
It seems Palerider's opinion is that all digital cables sound the same and (I assume ignorant) people are just pretending they sound different.
Palerider, what digital cables have you tried and between what equipment before you arrived at your opinion?
Is it unreasonable to ask what experience you've had on which you base your opinion?