Quality of digital cable from source to DAC?


Hoping just to gain some understanding.

With respect to the transfer of digital data via metal cable in the audio spectrum, the common thought I've managed to discern thus far is that there is likely no SQ difference to be gained by upgrading the quality of the Cat 6 ethernet cable that links my router (ATT Optical feeding an Apple Airport Extreme) to my streamer (a 50' run of off-the-shelf Cat 6 feeding a Lumin D1). Yet there seem to be considerable reviews claiming significant SQ improvements from using 'higher quality' digital cables linking the source (streamer or CD transport) to a DAC (Qutest). Why would this be? Is the digital data going from the router to the streamer somehow different, or more susceptible to error, than the digital data going from the streamer to the DAC?

Any insight would be greatly appreciated, even if nothing more than passing along a related link.

Thanks,
Todd
ecolnago

@ecolnago
All digital cables affect the sound of the DAC differently. To make sure the DAC receives just the right amount of data, it either synchronizes its internal D/A conversion rate with the incoming data (S/PDIF), or it receives data at a different rate but signals back to increase or reduce the amount of incoming data (async USB, Ethernet). With async USB or Ethernet the D/A conversion clock is independent of the incoming data timing, but it might be affected indirectly by noise the cable injects into the DAC.

S/PDIF delivers data in real time, and the average rate of this data adjusts the rate of D/A conversion. An S/PDIF cable can affect this rate in two ways. Coax S/PDIF transitions are fast, hence susceptible to electrical reflections in the cable that add to and modify the shape of the transitions, affecting timing. The DAC corrects most of this by adjusting to the average frequency, but the correction is not perfect. In addition, the cable injects picked-up electrical noise.

An optical cable is different. It doesn't inject electrical noise and doesn't create ground loops (as a coax cable can), but its transitions are very slow. Slow transitions in the presence of electrical noise can shift the exact moment of level recognition (crossing the threshold). It all comes down to keeping the clock of the internal D/A conversion stable in time. A jittery conversion clock produces noise added to the music.
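
To put a rough number on the slow-transition point: the timing error at the threshold crossing is approximately the noise voltage divided by the edge's slew rate. A back-of-the-envelope Python sketch, where every value is an assumption chosen for illustration rather than a measured spec:

```python
# Back-of-the-envelope: timing error when noise rides on a logic edge.
# Every number below is an illustrative assumption, not a measured value.

swing_v = 0.5         # assumed voltage swing across the receiver threshold
fast_edge_s = 5e-9    # assumed coax-like transition time, 5 ns
slow_edge_s = 50e-9   # assumed slow optical-receiver transition, 50 ns
noise_v = 0.01        # assumed 10 mV of electrical noise at the receiver

for name, edge_s in (("fast edge", fast_edge_s), ("slow edge", slow_edge_s)):
    slew = swing_v / edge_s          # volts per second through the threshold
    timing_error_s = noise_v / slew  # how far the crossing moves in time
    print(f"{name}: ~{timing_error_s * 1e12:.0f} ps of threshold-crossing jitter")

# The same 10 mV of noise shifts the slow edge ten times further in time,
# which is the point about slow optical transitions above.
```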

With Ethernet or USB I would pay attention only to the quality of shielding, and run the cable away from other cables. The same goes for coax S/PDIF, but there matching the characteristic impedance of the cable (to avoid reflections) is also very important. It is also desirable to keep the cable either very short, less than a foot (to avoid reflections), or longer than 1.5m (to avoid the first reflection). With an optical cable the quality of the cable (clarity, etc.) plays a role, but the most important thing is to keep system electrical noise low, by using power conditioners for the source of the signal and the DAC.
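
For a feel for where the "under a foot or over 1.5m" rule of thumb comes from, here is a minimal sketch using the ~5 ns/m propagation figure from later in this thread; the 10 ns transition time is an assumed value for illustration, not a spec:

```python
# Round-trip time of the first reflection in a coax S/PDIF run,
# using the ~5 ns/m propagation figure mentioned later in the thread.
# The 10 ns transition time is an assumed value for illustration only.

ns_per_metre = 5.0
transition_ns = 10.0  # assumed S/PDIF edge duration

for length_m in (0.3, 1.0, 1.5, 2.0):
    round_trip_ns = 2 * length_m * ns_per_metre
    where = ("returns during the transition" if round_trip_ns < transition_ns
             else "returns after the transition is done")
    print(f"{length_m:3.1f} m: first reflection after {round_trip_ns:4.0f} ns, {where}")

# Under these assumptions, a very short run returns the reflection while the
# edge is still slewing, and a 1.5 m+ run returns it well after the edge;
# lengths in between put it near the end of the edge.
```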


Thanks again for everyone's input.

kijanki, yours is one of the most thorough explanations I've encountered. Thanks for that effort. I occasionally do some sea kayaking. There are always occasions where, due to any number of factors (e.g. changing wind speed, water depth, shore reflections), what are typically dependable and predictable wave movements become what are referred to as 'confused waves'. I'm guessing that on a micro-scale much the same applies with electrons in a sea of copper wire. Sorry, I live by association. Don't know that it's an accurate correlation, but it seems like maybe it ought to be?
andy2, please note this is in NO way a challenge to what you say, but simply me expressing my lack of understanding. '2. It's electrical signals' I totally get. What I question (because I have NO idea) is whether there are two different 'ways' (for lack of a better word) that the electrical signals are transferring information.

My perception (and this is what I'm SO hoping to clean up) is that the source (server/CD/whatever) transfers the data of a 'digital' file by emitting electrical signals in simple (relative to analog), concise, rigidly defined, regulated pulses (the ones and zeros), painting a complex (understatement) 'coded' picture (maybe movie is the better term, as it's constantly changing?) that the DAC must decode and translate into an analog electrical signal.

The electronic 'digital' signal carrying this coded information deviates little to none (I would think?) in amplitude or frequency. The electronic signal (pulse) is either go or no go, yes or no (1 or 0), and is sequenced to relay a defined coded message that the DAC can interpret. Given that the velocity of an electrical wave through a copper (or silver) wire is deemed to be roughly 90% the speed of light, essentially instantaneous relative to the lengths of cables in our systems, it seems that 1. deviations in the device's 'clocking' mechanism or 2. potential electromagnetic interference from surrounding electronic wave movement could really be the only sources to alter the passage of this data transfer through a wire.

Electronic signals transferring analog information (I would think) are another animal altogether, deviating in both frequency and amplitude, and in turn making for a more complex signal transfer that would be susceptible to alteration (error) in part due to its complexity.

With this point of view (wrong as it likely is), it seems that other than obtaining the best possible shielding there would be little to gain from different cables in transferring digital (coded) information, whereas the complexity of an analog electronic signal would actually benefit from a conductor that imposed as few impediments as possible to the electrons' path.

Well aware this is absurdly simplistic in (my) thought and supremely complex in reality. And I reiterate for anyone reading what I've written: this in all likelihood IS NOT how things really work; again, it's only my perception. Not all that bright here, but smart enough to know it's likely way off in reality...
Came across a 'jitter' article that seems both broad and approachable (for those of us not so technically aware), in case anyone's interested:
http://www.enjoythemusic.com/magazine/manufacture/0509/
Not asking that anyone write a dissertation to cleanse my ignorance on the matter, but if anyone can at least guide me towards some reading that might enlighten me in my quest to (at least sort'a) know how it really works, it will be much appreciated.
Todd 

Todd, let me try to explain jitter. Imagine you play a 1kHz sinewave recorded on your CD. Digital words of changing amplitude, representing the sinewave, are converted at even intervals into analog values by the D/A converter. You get an analog 1kHz sinewave.
Now imagine that these time intervals are not exactly even, but get shorter and longer 50 times a second. Now you won't get only the 1kHz sinewave but also other frequencies, mainly 950Hz and 1050Hz, called "sidebands". Their distance from the main (root) frequency depends on the frequency of the interval change (the jitter), while their amplitude is proportional to the amount of interval change. These new sidebands have very small amplitude, but they are not harmonically related to the root frequency (1kHz), and that keeps them audible.
With many frequencies (music) there will be many sidebands - practically noise added to the music. The sidebands have small amplitude that is proportional to the amplitude of the signal. This noise stops (is not detectable) when the music stops playing. You can only hear it as a lack of clarity in the music (since something was added).
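
That example is easy to simulate. Here is a minimal numpy sketch (the sample rate and the 10 ns jitter amplitude are assumed values, not anyone's hardware): a 1kHz tone whose conversion instants wobble 50 times a second shows exactly the 950Hz and 1050Hz sidebands described above:

```python
import numpy as np

fs = 48_000          # sample rate for the demo (assumed)
f0 = 1_000.0         # the recorded tone, as in the example above
f_jitter = 50.0      # conversion intervals wobble 50 times a second
jitter_s = 10e-9     # assumed 10 ns peak timing error
n = fs * 4           # 4 seconds of signal -> 0.25 Hz FFT resolution

t = np.arange(n) / fs
# Each conversion instant is shifted by a slowly wobbling timing error:
y = np.sin(2 * np.pi * f0 * (t + jitter_s * np.sin(2 * np.pi * f_jitter * t)))

spectrum = np.abs(np.fft.rfft(y * np.hanning(n)))
freqs = np.fft.rfftfreq(n, 1 / fs)

for f in (950.0, 1000.0, 1050.0):
    i = int(np.argmin(np.abs(freqs - f)))
    db = 20 * np.log10(spectrum[i] / spectrum.max())
    print(f"{f:6.0f} Hz: {db:6.1f} dB relative to the tone")
# Expect two sidebands roughly 90 dB down: tiny, but not harmonically
# related to 1 kHz, which is why they register as added noise.
```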

Many factors can produce D/A conversion clock jitter. The cable can be one of them, while injected electrical noise can be another. In the case of Ethernet or USB, the cable cannot affect timing directly, since samples are placed in a buffer ahead of time, but the cable can still inject electrical noise into the DAC.
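
A toy sketch of that buffering idea, with every number invented: the DAC consumes samples on its own clock, and only the buffer level, never the cable's timing, feeds back to the source (real async USB does this with a feedback endpoint; the code below is purely conceptual):

```python
# Toy model of async (USB/Ethernet) delivery: the DAC drains a buffer on
# its own clock; only the buffer level feeds back to the source. All
# numbers are invented; real async USB uses a feedback endpoint for this.

buffer_level = 480   # samples currently buffered
target_level = 500   # level the DAC tries to hold
send_rate = 102      # samples per tick the source currently sends
dac_rate = 100       # samples per tick the DAC consumes (its own clock)

for tick in range(6):
    buffer_level += send_rate - dac_rate   # source fills, DAC drains
    # Feedback: nudge the requested rate toward holding the target level.
    if buffer_level > target_level:
        send_rate -= 1
    elif buffer_level < target_level:
        send_rate += 1
    print(f"tick {tick}: buffer={buffer_level} samples, asking for {send_rate}/tick")

# The conversion clock never tracks the cable's timing, so cable timing
# cannot add jitter here; injected electrical noise is the remaining path.
```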

Jitter can be induced by one noise frequency (correlated jitter), like the 50Hz in our example, or by multi-frequency electrical noise (uncorrelated jitter). The effect of jitter is less audible when the frequency of the jitter is low, because the sidebands will be closer to the root frequency, hiding better (less audibly visible). Also, random (uncorrelated) jitter should be less audible, being sort of averaged out.
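
Rerunning the earlier simulation with random instead of 50Hz timing error illustrates that contrast: the same amount of jitter smears into a low floor around the tone rather than two discrete sidebands. Again a sketch with assumed numbers:

```python
import numpy as np

fs, f0, n = 48_000, 1_000.0, 48_000 * 4
rng = np.random.default_rng(0)
t = np.arange(n) / fs

# Same ~10 ns timing error as before, but random (uncorrelated) this time:
y = np.sin(2 * np.pi * f0 * (t + 10e-9 * rng.standard_normal(n)))

spectrum = np.abs(np.fft.rfft(y * np.hanning(n)))
freqs = np.fft.rfftfreq(n, 1 / fs)
near = (freqs > 900) & (freqs < 1100) & (np.abs(freqs - f0) > 5)
worst_db = 20 * np.log10(spectrum[near].max() / spectrum.max())
print(f"worst bin near the tone: {worst_db:.0f} dB down, a smeared floor "
      "instead of the two discrete sidebands of the 50 Hz case")
```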

(As for 90% of the speed of light: it is more like 60-70%, depending on the dielectric. I assume about 5 ns/m.)
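
That ~5 ns/m follows directly from the velocity factor; a two-line sanity check:

```python
# Quick check of the ~5 ns/m figure from the 60-70% velocity factor:
c = 299_792_458.0  # speed of light in vacuum, m/s

for vf in (0.6, 0.7):
    print(f"velocity factor {vf:.0%}: {1e9 / (vf * c):.1f} ns per metre")
# -> about 5.6 and 4.8 ns/m, consistent with the ~5 ns/m assumption.
```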