Because a $500 one must sound better than one for $20, under the First Amendment of the Audiophile Bill of Rights.
I know a guy who designs and makes cables. There are some known reasons why cables in general sound different, but the exact reasons are not always well understood; he conjectures it has quite a bit to do with the dielectric. But sound different they do, as you will find if you listen to them. If you are afraid of being fooled, do a simple blind test.
Think of the range of answers to your question as a knob on your amplifier. You can turn it from hard left (7 o'clock) to hard right (5 o'clock). Or you can stop at any point in between.
Say hard left is the "complete skeptic" setting. If you put your knob in this position, you'll think that all USB cables sound alike and that anyone who hears a difference is a fool deluding themselves.
Hard right is the "magic" position. There will be vast and obvious differences between cables for reasons that probably involve Star Trek warp drive principles.
Move the knob to the middle left and you might think that, under some circumstances with certain equipment, RFI noise can subtly affect results. Move it mid-right and the difference between cables is heard more often and with greater clarity, and the technical explanations will go up a notch in creativity.
So, dial in whatever setting makes you happy. Just remember that no matter which one you choose, there will be plenty of people ready and willing to tell you you're wrong.
Two things that I have learned through experience with many digital cables over the years.
1. Digital cables can and do sound different from each other. Or, put another way, some sound better, some worse.
2. A $200 cable may or may not sound better than a $100 cable. I have owned digital cables that retailed from $86 to $3600. I'm currently using a $200 digital cable that I'm very happy with.
They sound different to me. I've tried several budget brands and I have found them to sound noticeably different. Logically, they should not sound different, since the cable is carrying discrete values rather than an analogue electrical stream, but they do sound different. The only explanation I can think of is that the clock signal is sent on the cable, and therefore different cables can introduce different amounts of jitter.
From what I've read:
Digital signals travel through the cable at very high frequencies; they are different from the signals that run through analog interconnects. High-frequency signals travel along the outside edge of a conductor (the "skin effect"), and silver is a better conductor than copper; hence silver-clad cables.
I had a shoot out between USB cables in my system, Pangea PC (silver clad) vs Audioquest Forest (copper), and the Pangea wiped the floor with the Audioquest. No contest. The Forest sounded dynamically compressed, like the speakers were covered with a wet blanket. I am assuming the same would be true of digital coax, but don't know from experience.
The length of the cable matters as well: shorter is better for USB, longer is better for digital coax. The coax issue has to do with reflections within the cable that create timing errors in the signal; 1.5m minimum is what you want.
The connectors matter because their impedance can differ from that of the cable itself. All digital coax cable is 75 ohm impedance. It is difficult to make a true 75 ohm RCA connection; BNC connectors are easier to make at 75 ohms. Very few DACs or CD players have BNC connections, though.
Robertsong, a typical CDP used as a transport outputs an S/PDIF signal with roughly 25ns edge transitions. The speed of the signal in the wire depends on the dielectric, but we can assume about 5ns/m. The very start of the transition (the knee) travels through the cable and reflects at any characteristic impedance boundary (impedance change). That is always the case, since there is no perfect match, but the degree of mismatch (and therefore of reflection) varies. It takes 10ns for the signal to travel down and back along a 1m cable, so the reflection adds to the transition still in progress, changing its shape. This shifts the moment in time at which the level change is recognized by the DAC (the threshold). Pick a 1.5m cable and we're dealing with 15ns until the reflection comes back. It will still add, but to the second half of the transition, by which point the level will most likely already have been recognized. In this application a 1.5m cable will be better than a 1m cable, but, as I said, it depends on the slew rate of the transport (expensive dedicated transports are often faster) plus the dielectric and metal used.
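For concreteness, the timing arithmetic in the post above can be sketched in a few lines of Python. The 5 ns/m propagation speed and 25 ns edge time are the post's assumed figures, not measured values:

```python
# Back-of-envelope timing from the post above: when does the first
# reflection return, relative to the edge transition still in progress?
# Both constants are the post's assumptions, not measurements.

PROP_DELAY_NS_PER_M = 5.0   # assumed signal speed in the cable
EDGE_TIME_NS = 25.0         # assumed S/PDIF edge transition time

def reflection_return_ns(length_m):
    """Round-trip time for a reflection off the far end of the cable."""
    return 2 * length_m * PROP_DELAY_NS_PER_M

for length in (1.0, 1.5):
    t = reflection_return_ns(length)
    half = "first" if t < EDGE_TIME_NS / 2 else "second"
    print(f"{length} m cable: reflection returns after {t:.0f} ns, "
          f"landing in the {half} half of the {EDGE_TIME_NS:.0f} ns edge")
```

This reproduces the post's numbers: 10 ns for the 1 m cable (landing in the first half of the edge) versus 15 ns for the 1.5 m cable (landing in the second half).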
This variation of edge positions in time is called jitter. Jitter is basically noise in the time domain. In the frequency domain it shows up as small sidebands that are not harmonically related to the root frequency (the music tone). Imagine playing a 1kHz tone while the digital cable is in close proximity to a strong source of 60Hz noise (a power cable). This can produce 60Hz jitter in the digital signal, creating two sidebands (sum and difference) around the tone being played: one sideband at 1060Hz and the other at 940Hz (-50 to -60dB is typical). So instead of a single frequency you get three. Now play thousands of frequencies (music) and you get three times as many. Replace the 60Hz noise with a combination of many frequencies (radio stations, 60Hz, etc.) plus the effects of reflections in the cable and you get a total mess. This mess is noise that is proportional to the amplitude of the music signal. The only way to detect it is to hear its effects: lack of clarity, less precise imaging, a less "black" background, etc. Without signal there will be silence, since there is nothing to modulate.
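The sideband arithmetic above is easy to check; a minimal sketch using the post's 1 kHz tone and 60 Hz interferer:

```python
# Sideband arithmetic from the example above: jitter at the noise
# frequency creates sum and difference sidebands around each tone.

def jitter_sidebands(tone_hz, noise_hz):
    """Frequencies of the two sidebands produced by noise_hz jitter."""
    return (tone_hz - noise_hz, tone_hz + noise_hz)

low, high = jitter_sidebands(1000, 60)
print(low, high)  # 940 1060
```

With music, each of the thousands of component tones gets its own pair of sidebands, which is why the post describes the result as noise proportional to the signal.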
If we leave aside other effects, like ground loops created by the cable or noise collected and injected into the analog section, then the only difference between digital cables is jitter. The character of this jitter is affected by the character of the noise: is it random (uncorrelated) or caused by an offending frequency like 60Hz (correlated)? Is the shield better at suppressing high or low frequencies? Are there any reflections in the cable?
One cable might be a better characteristic impedance match for your system, while another might have a shield that works better against the particular noise present in your room. More expensive cables tend to have better shielding but might not be the best impedance match for your system; an expensive cable doesn't have to be better. Coax is usually better than Toslink, having wider bandwidth (hundreds of MHz vs. tens of MHz), but when the transport's transitions are slow, and therefore susceptible to noise, and that electrical noise is present, Toslink might sound better. There is no right or wrong, because it is system dependent. Toslink can even be a blessing if you have ground loops.
Now let's get back to our 940Hz, 1000Hz, 1060Hz example. These sidebands are close to the 1kHz tone and are thus masked a little more than frequencies further apart. It is a sonic signature of sorts, related to the type of electrical interference in your room, system noise, impedance matching, transport slew rate, cable propagation speed, shield effectiveness at different frequencies, and perhaps a few other things I cannot think of right now.
The average person using electricity has an "idea" about it, but very few understand it. The same should be true of digital cables. We have an idea, but we have to try a cable, since there is no way anybody can predict or measure how it will sound in a particular system.
There are differences to me. In a word, it may come down to jitter. Some cable designs adversely influence jitter more than others, and all the physical elements of the cable design are in play. There are at least 3 kinds of jitter induction: optical, electrical and mechanical. Coax cables may affect both mechanical and electrical jitter. Insulation (less is more) may also influence the sound (openness) as well. Lots of people prefer silver in this application (I use copper and am very satisfied with it), and it may be due to the higher frequencies involved compared with analog, but I'm leaving it to those with more of an EE background to expound on that. My cables are not shielded, as I believe shielding also constrains the openness a bit (at least with analog ICs, IME), and I believe it should ordinarily be avoided unless you have a discernible problem with interference. But Mlsstl is right: no matter which way you go with it, there will always be someone to tell you you're wrong. Regards.
Many of us simply think that, "hey, digital is nothing more than ones and zeros, so just as long as those ones and zeros get to their destination without changing state or being corrupted, a perfect transmission will occur and no sound difference can possibly be heard, end of story."
As explained by a couple of responses above, there is one aspect of digital transmission that many audiophiles don't get. It is called [drum roll please]...timing.
Those ones and zeros must enter the DAC chip at exactly the right time to be converted into the proper analog waveform shape. If the timing is off (by mere picoseconds) the constructed waveform will be there, but not exactly the correct shape it should be. And that is where much of the sound differences of different cables come into play (and CD transports, etc., for that matter.)
I suspect that, for various reasons, all digital cables have slightly different timing characteristics. Whether your DAC chip and associated circuitry are compatible with a particular timing characteristic determines the sound outcome. The cable's timing, and whether the DAC "likes" it, is not dependent on the cost of the cable.
Stop thinking so simplistically, digital audio is not just about the ones and zeros - that is only part of the story.
Are there any engineers or physicists in the posts above?
Why is it that everyone thinks they are an expert?
Well, I'm an engineer, and I used to manufacture excellent digital and analog cables, so here are the reasons:
1) Losses that slow the risetime of the signals on the cable - this causes the receiving component to detect the edges with less certainty resulting in more jitter
2) Dielectric absorption - this is also called "soakage" and is analogous to a sponge absorbing water. The dielectric absorbs some of the charge, and then it is not discharged at a constant rate. Some cables eliminate this effect by putting a DC charge on the cable with a battery; others minimize it by using air dielectrics, air-filled Teflon, etc. The effect is that the energy required in the signal to make a rising or falling edge is not the same for each edge, because of the charge in the dielectric. The signal must overcome this charge and cannot fully do so, so some edges are displaced in time, causing jitter.
3) Impedance mismatches - The nominal impedance of a S/PDIF coax cable should be 75 ohms, but this varies all over the map with different cables and the connectors on the ends also affect this. Impedance discontinuities cause reflections on the cable when the signal is launched into it. These reflections can bounce from end to end until they finally dissipate with the cable losses. If they happen to hit the receiving end when it is detecting the signal edge, the edge may be pushed in time, creating jitter.
4) Metallurgical defects in the conductors - Low-jitter S/PDIF signals can have risetimes in the 1nsec range. When signals this fast are launched into a cable, the conductor metallurgy affects the signal propagation down the cable. If there are a lot of faults in the crystal lattice of the metal conductors, this causes small reflections. They are like small impedance discontinuities. These reflections can appear at the receiver at the time it is detecting the edge and cause the edge to be displaced in time, causing jitter. You can look at TDR plots of this effect on real conductors here:
5) Length of the cable - All S/PDIF coax cables are imperfect and therefore cause some level of reflections, which can result in jitter if the timing of these reflections is unfortunate. By making the cable at least a certain length, one can avoid the effects of these unavoidable reflections, thereby avoiding the added jitter. This has been proven in double-blind tests by the magazine UHF in Canada. Here is a white-paper on the effect:
Steve, thanks for your inputs. Do you feel that the following may also be significant contributors to sonic differences between S/PDIF interconnects, at least in some systems?
6) Differences in noise-induced jitter, due to ground loop effects and/or RFI/EMI pickup, both of which may be sensitive to cable differences.
7) Differences in radiated RFI, that may affect circuitry in the system that is not directly related to the S/PDIF interface.
Concerning your no. 3, impedance mismatches, and with respect specifically to the impedance match to the components that are being connected (as opposed to mismatches between cable and connector, or impedance discontinuities within the cable) I would add the thought that what is important is not how accurately the impedance of the cable and connectors match the 75 ohm standard, but how closely they correspond to the actual output impedance of the component driving the cable, and to the actual input impedance of the component that is at the receiving end. Everything else being equal, a cable that is less accurate relative to the 75 ohm standard may therefore outperform a more accurate cable in some systems, if it happens to be a closer match to the component impedances.
Finally, I would be interested in your take on what degree of correlation can generally be expected between cable performance and cable price, for S/PDIF interconnects, given the many variables and system dependencies that are involved in the effects that have been mentioned.
P.S: Re your first question, I am an EE with an extensive background in digital signal transmission (not for audio).
My brain tells me no two cables that are physically different conduct electricity (or light for optical) the exact same way. So there has to be differences to some degree. The question for me is then how much and are the differences significant enough to matter in practice?
I wonder about digital ICs in general in this regard more so than analog ones. No two analog ICs usually sound the same to me. But on the several occasions when I have compared different digital cables going into my DAC(s), if there was a difference, it was not enough for me to take clear notice or even care. I know that in theory different levels of jitter are the result, and that jitter level matters. But does it really in practice? It's something I have not been able to discern with my own ears so far.
So I wonder.....
Steve, what you describe is the general quality of the cable, not the performance of the cable in a particular system. A characteristic impedance different from 75 ohms can be very good, as Al mentioned, if it is a better match for a given system. The same goes for slowing down the edges: uncertainty of the threshold is not caused by long transitions but by noise; long transitions only make the signal more susceptible to noise-induced jitter. With very little noise present, longer edges might reduce the reflections caused by impedance mismatch, reducing jitter in effect. Making the cable "at least a certain length" is not precise, since a cable is not even considered a transmission line when the propagation time (one way) is shorter than 1/8 of the transition time, which comes to about 0.6m for typical 25ns transitions (assuming 5ns/m).
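The 1/8-of-transition-time rule of thumb mentioned above works out as follows; a sketch assuming the post's 5 ns/m propagation delay:

```python
# Rule of thumb from the post above: a cable starts behaving as a
# transmission line when one-way propagation time exceeds 1/8 of the
# edge transition time. The 5 ns/m figure is the post's assumption.

PROP_DELAY_NS_PER_M = 5.0

def transmission_line_threshold_m(transition_ns):
    """Cable length above which transmission-line effects apply."""
    return (transition_ns / 8) / PROP_DELAY_NS_PER_M

print(transmission_line_threshold_m(25.0))  # 0.625 m for a typical 25 ns edge
print(transmission_line_threshold_m(3.0))   # 0.075 m for a fast 3 ns edge
```

The 0.625 m result matches the "about 0.6m" quoted in the post; note how much shorter the threshold becomes for the fast 3 ns edges mentioned later in the thread.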
Yes, I'm also an EE, with 34 years of design engineering experience, involved in data acquisition design for the last 25 years - since you asked; otherwise I don't feel it would be appropriate for me to fortify my posts with it.
let me add the obvious:
take 2 audiophiles, 1 stereo system and 2 digital cables. place them in a room and have them compare the two cables.
there is a chance that they will agree on what they are hearing but they may disagree on which they prefer.
there is a chance that they will disagree on what they hear but agree on their preferences
there are two other obvious possibilities.
so what's the point.
there is no definitive answer to the question posed, because, perception and preference may differ among audiophiles.
in the empirical world, just listen and decide for yourself. it's not an original thought.
Jitter is measurable, correct?
Will a cable of some determinate length not add some measurable, repeatable, non-arbitrary amount of jitter within a particular range of measurement, regardless of any jitter coming from the source component?
Are there any cable manufacturers that measure and publish jitter specifications for each of their different cable products and cable lengths?
Seasoned got it right, but I still wonder, given the current state of digital technology, whether in practice it is really that much of a problem with most modern gear. Digital technology in this regard has come a long way since the CD's debut around 30 years ago. That makes a big difference.
Of course to whatever extent it may still be a problem in practice, audiophiles will care more about it than most normal people.
In my case, to date, I would have to say that digital cable tweaks have made the least difference of almost any tweak I have tried. With most others (digital and analog related) I hear a difference; with digital cables, I am still waiting. I have mostly compared optical versus coax to date. These are significantly different technologies, so I expected to hear something, but have not so far. I have also yet to hear a practical difference through the same DAC from various digital sources. I have compared several, including a Marantz DVD player, a Denon CD player, a Roku Soundbridge and a Logitech Squeezebox. They all tend to sound similar, and essentially equally good, to the point where I determined it did not matter to me.
"Will a cable of some determinate length not add some measurable, repeatable, non-arbitrary amount of jitter within a particular range of measurement, regardless of any jitter coming from the source component?"

No, absolutely not. As implied in some of the preceding posts, the amount of jitter that will result with a given cable in a given system, at the point where D/A conversion is performed within the DAC (which is where it matters), depends on a complex set of relationships and interactions between the parameters of the cable (including length, impedance accuracy, shielding effectiveness, shield resistance, propagation velocity, bandwidth, etc.) and the technical characteristics of the components it is connecting (including signal risetimes and falltimes, impedance accuracy, jitter rejection capability, ground loop susceptibility, etc.).
Many of the relevant component parameters are usually unspecified, and even if they were specified in great detail, the predictability of the net result of those interactions would still be limited at best.
We need to look at the whole system. Cable impedance is designed to 75 ohms to match everything else in the signal path. However, nothing is perfect, so it comes down to how close the actual impedance is to 75 ohms - and the same goes for the connectors, the PC board traces, etc. What matters is the point where the clock information is recovered from the signal: if it is not exactly 44.1kHz, there is jitter, and it causes distortion of the audio signal. Therefore cable quality matters if everything else in the system closely matches 75 ohms. If by accident everything matched at 70 ohms, that would be good too, but the chance of that happening is very small.
That is why professional studios use a master clock, such that only the 1s and 0s are decoded from the incoming signal, not the clock information. In that case jitter can be minimized to just the path between the DAC and the master clock generator, and most of the cable's effect can be eliminated. A few high-end manufacturers, like Esoteric and dCS, adopt this setup for home use. It is a much more expensive solution, but it works.
Cheers and enjoy music
Digital cables are fundamentally different from analog cables: the cable is an electrical transmission line at the megahertz frequencies of the S/PDIF bit stream. The 75 ohm impedance match at each end is critical for minimizing jitter, reflections, etc. BNC connectors are made for this, but Canare makes one of the only 75 ohm RCA connectors available. I've tried 'em and they sound better. Any good video cable in between works well, but the cable pedigree is much less important than with analog cables.
They do matter. Just listen with your ears and trust them. I have tested many, many digital coax cables, and I found that the more money I spent, the better they sounded to me - this is just IMO. In my opinion it comes down to the design, materials and connectors being used. The best digital coax cable I've ever tested, and still own, is the Audioquest "Eagle Eye" at £700 GBP per metre. It is just the business: solid silver conductors, multi-shielding and the DBS battery pack. It's not hogwash; it works, period.
"Do you feel that the following may also be significant contributors to sonic differences between S/PDIF interconnects, at least in some systems?
6)Differences in noise-induced jitter, due to ground loop effects and/or RFI/EMI pickup, both of which may be sensitive to cable differences.
7)Differences in radiated RFI, that may affect circuitry in the system that is not directly related to the S/PDIF interface."
These are both potential contributors to jitter, although #6 is not directly related to cable quality, and # 7 is mostly a function of the receiving device IMO.
As for cable pricing, I have found that in general cables below the $500 mark for 1.5m length sound about the same. Significant improvements are not realized until one spends more than $500. This is when you start to get the more exotic constructions, conductors and dielectrics, as well as better shielding. Just my experience.
"on the several occasions where I have compared different digital cables going into my DAC(s), if there was a difference, it was not enough for me to take clear notice or even care. I know that in theory different levels of jitter is the result and that jitter level matters. But does it really in practice?"
Perhaps you didn't test the right cables, or your preamp creates enough distortion, noise and compression that you don't hear the benefits because they are masked. This is fairly common when using an active preamp. I don't use a preamp, so I don't experience this masking anymore. It's a system, after all, so every component and cable matters.
"Characteristic impedance different than 75 ohm can be very good, as Al mentioned, if it is better match for given system."
Sure, but I would sell that system and get one that meets the specs, so I don't have to try to find a whacked-out cable that matches it.
"Same for slowing down the edges. Uncertainty of threshold is not caused by long transitions but by the noise."
Noise will certainly cause jitter (signal integrity or ground bounce), but slow edges by themselves will also cause jitter, and usually worse, based on my experience. The problem is the voltage reference that sets the switching threshold at the receiver. This reference is usually noisy due to the system voltages and ground bounce; it is very difficult to make it noise-free.
"With very little noise present longer edges might reduce impedance mismatch caused reflections, reducing jitter in effect. "
It sounds like common sense, but it doesn't work in practice. Faster edges and precise matching work a LOT better.
"Making cable "at least certain length" is not precise since cable is not even considered transmission line when propagation time (one way) is shorter than 1/8 of transition time being about 0.6m for typical 25ns transitions (assuming 5ns/m)"
I know this "rule of thumb," but really low-jitter systems have risetimes of 3ns or less, not 25ns. Even at 25ns, the cable length helps, however; the A/BX testing proves it.
"Jitter is measurable, correct?"
"Will a cable of some determinate length not add some measurable, repeatable, non-arbitrary amount of jitter within a particular range of measurement, regardless of any jitter coming from the source component?"
Yes, assuming the signal is repeatable.
"Are there any cable manufacturers that measure and publish jitter specifications for each of their different cable products and cable lengths?"
I cannot think of any cable manufacturer that can afford the $150K Agilent analyzer it takes to measure this. Even JA of Stereophile, with his latest and greatest AP system, cannot measure it.
The other thing you must understand is that a lot of the jitter problem in cables is caused by imperfect receivers in the DAC, not the cable itself. If you put an analyzer at the end of the cable instead of the DAC receiver, everything changes. You lose half of the effects.
Ones and zeroes do not exist in real life; it is all analogue signals, interpreted as numbers.
You would like those signals to be the only ones on the cable, and you want them unaltered, so that interpretation is flawless.
If the cable picks up, or fails to reject, other signals (noise), it could upset, for instance, feedback loops in amp stages, causing distortion.
If the signals that represent the numbers are themselves distorted, interpretation and timing can suffer; this is jitter.
So a cable could do a few things wrong, and then there is impedance matching, which I frankly do not quite understand.
Because the S/PDIF digital signal is NOT a matter of the presence or absence of a signal (standard binary). It is the transitions of the square wave, from one level to the next, that carry the binary data. Additionally, it is out of sequence, to give error correction a fighting chance to keep things straight. You can only see this on an oscilloscope rated out to 100 MHz. The rise and fall times are in the nanosecond range, and cable quality and correct impedance are key to good sound reproduction - maybe more so than with analog interconnects.
My two cents.
Steve, thanks very much for your comments and insights.
My one comment in response is in regard to:
10-06-12: Audioengr

The problem as I see it, in at least most cases, is that there is no practical way for the consumer to know what the transport's output impedance or the DAC's input impedance is. Even JA's measurements don't address those parameters, at least in the reviews I've looked at. And, if I recall correctly, the tolerance defined by the S/PDIF standard is a very loose one, something like +/- 20 ohms or +/- 20%.
Also, as indicated in your paper:
"I have never seen impedance control on any Transport or DAC circuit board. Occasionally, the wiring from the circuit board to the connector is impedance-controlled, but this is the exception, not the rule."

It all seems to me to add up to a very hit-or-miss situation, and even more so given that another key parameter, the risetime and falltime of the transport's output signal, is also usually unspecified and widely variable (e.g., 25 ns or so in many cases, per your paper; 3 ns or less in some cases, per your statement above).
10-06-12: Dura

See Steve's paper, linked to above, which explains it all nicely.
For digital, you just have to pass the eye-pattern level for proper digital signal retention. The more open the "eye," the fewer bit errors and the lower the jitter. This should be for the full channel, too - cable and connectors both. You don't need expensive cables to virtually remove bit errors. For example, Ethernet can transmit a signal 328 feet at 10 MB/sec with virtually zero dropped bits over 100-ohm 4PR24 23 AWG wire (250 Meg per wire pair times 4). Once you go digital, and with sufficient eye-pattern margin, the "sound" is the D/A converter, not the bits, which have no sound at all. I would look more at your D/A converter than the cables, and such short cables seldom have issues with data-rate transmission. What's a few feet (with proper impedance matching) compared to 328 feet / 100 meters?
Shielding is only as important as the environment is poor. Most twisted-pair digital cable has plenty of CMRR to passively remove noise. But a shield will attenuate the artifacts, so that the remainder of the noise that common-mode rejection doesn't remove is 20-30dB smaller in magnitude. The shield doesn't improve the CMRR of the system; it just makes the noise the system has to deal with SMALLER. The reduction is still the same "something times X" of the original signal; the original signal is just smaller.
Don't be fooled by shields, though. A shield concentrates the ground plane tightly around the cable, which also magnifies a cable's defects. Poor-quality UTP cable that works can be a disaster with an added shield. Once you BEND the cable, the issues keep getting worse, too. So shielding isn't to be taken lightly as an improvement unless you are seriously in the know about the cable under that shield. Most will have better performance with unshielded twisted-pair cables. Coaxial cables do better with a shield (and have to have one to work anyway), since the internal construction is easier to manage: the round dielectric holds the shield at a constant distance from the internal unbalanced signal wire, even while the cable is bent, compared to balanced-pair cables. Issues with noise are much less of a problem with balanced cables, which remove noise through CMRR (RF) and the pair twists (low-frequency magnetic interference); good coaxial cables can put RF noise 100 dB below the signal but are more helpless against magnetic interference when used at audio frequencies below 1 MHz or so.
The typical wires that meet digital Ethernet eye-pattern and attenuation-to-crosstalk (ACR) margins are sufficiently small to make "skin effect" not much of an issue. The 0 dB ACR point (where the signal is still larger than the noise) extends past 500 MHz on good cables, and common electronics can recover a signal as much as 6 dB below the 0 dB ACR frequency. So "finding" the digital stream isn't too tough; what's tough is the D/A conversion at each end. All the advances in Ethernet technology trickle down into other digital media as they become affordable.
Kijanki et al. are right that the "system" length is important: as you get to shorter lengths, the reflected energy caused by impedance mismatch at S/PDIF frequencies can be pretty large, while on longer cables the reflected energy is attenuated away. Almost all digital systems have a much harder time passing "short length" channels than long-length channels for that reason. RL (return loss) is not so good with RCA connectors, as they aren't 75 ohms. So digital audio gets a deserved bad rap on the quality of the interface, far in excess of the cable between the connectors.
It is also true that to have a "transmission line" in the true sense, the cable needs to be at least 1/4 of a wavelength long, taking the cable's velocity of propagation into account. Shorter than that and you don't get reflections (remember the open and closed manometer physics experiments?).
Audioengr's point number two is somewhat confused. A dielectric is called a di-electric because it opposes electric flow. This is true because it sits between two conductors and makes a capacitor, which opposes signal change or "flow." The capacitance is a fixed figure once you decide a cable's impedance, which is 101670 / (capacitance x velocity of propagation of the dielectric), with capacitance in pF/ft and velocity of propagation as a percentage. There is no "absorption" at all based on the capacitance, just the storage and release of energy. So if you want a faster cable at a given impedance, lower the capacitance by changing the dielectric.
101670 / (20.5 pF/ft x 66% VP) ≈ 75 ohms
101670 / (17.3 pF/ft x 78% VP) ≈ 75 ohms
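The two worked examples above can be checked directly; a minimal sketch of the quoted relation, with capacitance in pF/ft and velocity of propagation in percent:

```python
# The coax impedance relation quoted above: Z = 101670 / (C * Vp),
# with capacitance C in pF/ft and velocity of propagation Vp in percent.

def coax_impedance_ohms(capacitance_pf_per_ft, vp_percent):
    """Characteristic impedance from capacitance and velocity of propagation."""
    return 101670.0 / (capacitance_pf_per_ft * vp_percent)

print(round(coax_impedance_ohms(20.5, 66), 1))  # 75.1
print(round(coax_impedance_ohms(17.3, 78), 1))  # 75.3
```

Both combinations land within a fraction of an ohm of the 75 ohm target, which is the post's point: at a fixed impedance, a faster (higher-Vp) dielectric implies a lower capacitance.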
Now, what does grab away the energy is the dissipation factor, or loss tangent (same thing), of the material. The transverse electromagnetic wave energy transmitted between the two wires (view it as a wave between two plates), traveling in the dielectric medium, is a reactive vector, and the "real" part versus the "imaginary" part creates inefficiencies that cause lost energy. The imaginary part (the loss "tangent") is lost.
He's 100% correct that a vacuum is the best dielectric (or air as the poor man's vacuum) to lessen losses. Teflon does NOT need to be used, however. PP or PE is cheaper, and better than Teflon at normal room temperatures. Teflon is just expensive, and you can listen to your stereo in a 200F room if you like! The cable won't melt. Oh, Teflon is pretty, though.
Be wary of mixed conductors, though. At such short distances the advantages are not significant. Yes, the higher frequencies attenuate more, so higher-order digital frequencies should arrive less attenuated through silver...but at what length? And at what skin depth? I'd worry a whole lot more about the connectors and the D/A converter. Those pretty silver cables may look like you care more, but the hidden stuff no one sees matters more.
Sure, but I would sell that system and get one that meets the specs so I don't have to try to find a whacked-out cable that matches it.
How can a user possibly know whether a wire or system meets the specs? I prefer to choose a cable for the system, not the system for the cable.
This reference is usually noisy due to the system voltages and ground-bounce. Very difficult to make it noise free.
Yes, that's part of the noise I'm talking about. There is no perfectly quiet system and there is no perfectly impedance-matched cable. It is always a compromise. In a noisy system (external or internal noise) it is better to get a fast-switching transport, accepting more reflections, but in a very quiet system it might be better to get a slower-switching transport to minimize reflections.
Even at 25 ns, the cable length helps, however. The A/BX testing proves it.
Are you saying that, assuming some impedance mismatch, a 1.5 m cable will always be better than a 6" cable (which I used not long ago)? It doesn't make sense. There will be no reflections in a 6" mismatched cable, assuming an average transport (with 25 ns transitions), but a lot of reflections in a 1.5 m cable. Even if the first reflection misses the originating edge, there will be consecutive reflections. There are techniques to predict the effect of multiple reflections on the signal (Bergeron diagrams), but it is a very complicated task.
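The timing question in this exchange is easy to check numerically: a reflection matters most when its round trip back to the source lands after the driver has finished its transition. A sketch, assuming a velocity of propagation of 0.66c (an assumption) and the 25 ns transition time quoted in the thread:

```python
# When does a reflection land back on the driving edge?
# Assumptions (not from the posts): velocity of propagation 0.66c;
# the 25 ns transition time is the figure quoted in the thread.

C = 2.998e8   # speed of light, m/s
VP = 0.66     # assumed velocity of propagation, fraction of c

def round_trip_ns(length_m: float) -> float:
    """Time for a reflection to reach the far end and return, in ns."""
    return 2.0 * length_m / (VP * C) * 1e9

rise_ns = 25.0
for length_m in (0.15, 1.5):  # roughly 6 inches vs 1.5 m
    rt = round_trip_ns(length_m)
    where = "inside" if rt < rise_ns else "after"
    print(f"{length_m} m: round trip {rt:.1f} ns, lands {where} the {rise_ns:.0f} ns edge")

# Minimum length for the first reflection to clear a given edge entirely:
min_len_m = rise_ns * 1e-9 * VP * C / 2.0
print(f"edge clears only beyond ~{min_len_m:.1f} m at {rise_ns:.0f} ns transitions")
```

On these assumptions even a 1.5 m cable's first reflection returns while a 25 ns edge is still transitioning, so the 1.5 m heuristic really depends on the transport having much faster transitions than 25 ns.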
As for measuring jitter: effects can be measured, but I agree with Al that it will be useless, since it will depend on all the factors he mentioned. Measuring jitter effects at a particular frequency in a particular system in a particular home, etc., has no value to anybody.
"For digital, you just have to pass the eye-pattern level for proper digital signal retention. The more open the "eye" the fewer error bits, the lower the jitter. This should be the full channel, too. Cable and connectors both. You don't need expensive cables to virtually remove bit errors."
True, but we are not talking about bit errors here, we are talking about psecs of jitter. The cable matters, as does practically everything else.
"How user can possibly know that wire or system meets the specs?"
You can't. The best option for consumers is to read the reviews from a reputable reviewer and then try one.
"In noisy system (external or internal noise) it is better to get fast switching transport getting more of reflections but in very quiet system it might be better to get slower switching transport to minimize reflections."
Have you tried it? It sounds nice in theory, but usually does not work well.
"Are you saying that, assuming some impedance mismatch, 1.5m cable will be always better than 6" cable (that I used not long ago)?"
No, 6" cables are generally not commercially available. I'm saying that a 0.5m or 1m cable will not be as good as a 1.5m cable. In order to actually get 6" total, you would probably need a 3" cable, since there is cable inside the transmitting and receiving devices.
Jitter measurements are a rat-hole IMO. Jitter has never been effectively correlated with SQ anyway, and based on my experience, it is very dependent on the spectral signature of the jitter. Single jitter measurements are useless to say the least.
Surprised nobody has mentioned the SPDIF clock which is an analog RF waveform (if I'm not mistaken) that must stay in sync with the data.
Not sure if AES/EBU any better in that regard, but it is supposed to be easier to guarantee impedance match (110 ohm in this case).
Some manufacturers like Sonic Frontiers implemented an I2Se interface to avoid these issues.
I guess my message is, if you supposedly need a $500+ coax cable to get the job done, then maybe you need to choose a better interface. Stack the deck in your favor at least; don't be a victim!
"Perhaps you didn't test the right cables, or your preamp creates enough distortion, noise and compression that you dont hear the benefits because they are masked. This is fairly common when using an active preamp. I dont use a preamp, so I dont experience this masking anymore. It's a system after all, so every component and cable matters."
Maybe. But I do hear differences in most everything else I tweak besides the digital source and ICs. So I think I have the relative magnitude right, at a minimum. I hear a lot of systems and live music and I do not hear any distortion or dynamics issues of significance, but of course we know such things are always in play to some extent in home audio.
"Some manufacturers like Sonic Frontiers implemented an I2Se interface to avoid these issues."
I2S is available on Empirical Audio, PSAudio, Wired for Sound and other gear. Some SE and some differential.
Even I2S requires a good cable. Actually more-so than S/PDIF because the frequencies are a lot higher on I2S.
"I guess my message is, if you supposedly need a $500+ coax cable to get the job done then maybe you need to choose a better interface. Stack the deck in your favor at least, don't be a victim!"
Like what interface? They all need good cables.
...True, but we are not talking about bit errors here, we are talking about psecs of jitter. The cable matters, as does practically everything else...
Jitter is directly tied to the eye-pattern tests, so the concept is valid for digital audio. The percent jitter is actually calculated off the eye pattern, and the jitter in turn can be used to predict the bit-error rate.
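The jitter-to-BER link can be sketched with the standard Gaussian random-jitter approximation from serial-link textbooks, BER ≈ ½ · erfc(UI / (2√2 · σ)). This model, and the S/PDIF unit interval derived below for 44.1 kHz stereo, are assumptions for illustration, not from the post:

```python
import math

# Standard Gaussian random-jitter approximation from serial-link textbooks
# (not from the post): BER ~ 0.5 * erfc(UI / (2 * sqrt(2) * sigma)).

def ber_from_jitter(ui_s: float, sigma_s: float) -> float:
    """Approximate bit-error rate from unit interval UI and rms random jitter sigma."""
    return 0.5 * math.erfc(ui_s / (2.0 * math.sqrt(2.0) * sigma_s))

# Assumed S/PDIF timing at 44.1 kHz stereo: 64 * 44100 bits/s, and biphase-mark
# coding doubles the cell rate, so the shortest unit interval is ~177 ns.
ui = 1.0 / (2 * 64 * 44100)

for sigma_ps in (100, 1_000, 10_000):
    ber = ber_from_jitter(ui, sigma_ps * 1e-12)
    print(f"{sigma_ps:>6} ps rms jitter -> BER ~ {ber:.1e}")
```

On this model even 10 ns of rms jitter, enormous by audio standards, predicts an essentially zero bit-error rate, which is consistent with the replies above: the audible debate is about clock recovery at the DAC, not lost data.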
Rower30: what you have outlined is my understanding as well, based on RF transmission theory. Impedance match at the connectors and loss tangent (dissipation factor) in the cable are the two key parameters; everything else is secondary. That is why cheap CAT5 cable is far superior to most analog audio cable for digital transmission.
Also reclocking the data at the DAC is the key to superior reproduction.
Putting the clock in the data stream was poor engineering from the start. Fortunately, HDMI 1.3+ provides for this and is generally superior to any SPDIF interface, (even with million dollar cables), for this reason. Buy an Oppo BDP-95 and a DAC with an HDMI 1.3 input (Meridian HD621), and you won't have to worry about jitter.
Rower - if you are getting bit errors, the eye pattern is so bad that jitter is the least of your concerns.
"reclocking the data at the DAC is the key to superior reproduction"
You would think so, but unless you can synchronize the source to the DAC using word-clock, it is not the best solution. Very few sources have this capability.
You will achieve much lower jitter making the source jitter low and feeding it to a DAC without internal reclocking. This is because all of the techniques for asynchronous reclocking for jitter reduction in DACs are inferior, both PLL and ASRC.
'Fortunately, HDMI 1.3+ provides for this and is generally superior to any SPDIF interface, (even with million dollar cables), for this reason. Buy an Oppo BDP-95 and a DAC with an HDMI 1.3 input (Meridian HD621), and you won't have to worry about jitter.'
You must listen to different HDMI stuff than me. On my NAD M51, SPDIF and USB easily best HDMI. And when fed with an Off-Ramp, which has jitter below 10 ps, you can easily hear the improvement over any other method. HDMI is far from jitter-free - not by a long shot.
I hasten to add it probably has nothing to do with the interface per se, but with what's feeding it via the interface.
Also, anyone who thinks there is any method available today where you don't have to worry about jitter is whistling Dixie. Even on sources with jitter below 10 ps, like the Audiophilleo 2 and Off-Ramp, you can easily hear the difference.
I have a Playback Designs DAC that they advertise as jitter immune. Even on that DAC you can hear the difference when fed with a low jitter source like an Off-Ramp - although it is not as great as with other DAC's - but it is still there.
Steve Nugent Wrote:
'Jitter measurements are a rat-hole IMO. Jitter has never been effectively correlated with SQ anyway, and based on my experience, it is very dependent on the spectral signature of the jitter. Single jitter measurements are useless to say the least.'
Bingo - we have a winner.
I have mucked around with all sorts of sources and that's it exactly.
Was just reading about how 1.5m is the best length for a digital cable. Something to do w/ reflections. I read somewhere else that all interconnects can benefit from being 1.5m. Somehow that doesn't sound right to me. So, is it true- will 1.5m RCA interconnects going from my DAC to my pre amp sound better than the 2 ft length I am currently using? Thanks
Mrblackcrow 8-26-2017:
You are correct about that not sounding right. The 1.5 meter length recommendation that is often seen for digital cables has no relevance whatsoever to cables conducting analog audio signals. In general, in the case of analog audio cables the shorter the better, if it makes any difference at all.
Also, regarding S/PDIF and AES/EBU digital cables, as you can see in some of the earlier posts in this thread the optimal length is dependent on a great many component and cable dependent variables, some of which are not usually specified (e.g., risetimes and falltimes of the output of the signal source). So 1.5 meters should be considered as having the best odds of being optimal for a digital cable (unless a very short length is practicable, such as 6 or 8 inches), but other lengths may be better in some cases, and there may be some cases where it won’t matter very much if at all.