Digital vs Interconnect Cables - Difference?


Can someone explain the difference between digital and interconnect cables? Are they interchangeable? Is digital for connecting a CD/SACD transport and a DAC?

How about the cables between a CD player and a pre-amp - interconnect or digital cables? And between pre-amp and power-amp - are they the same type of interconnect cables?

Also, how many types of interconnect cables are available on the market? And digital cables - with various connection options?

Thanks.
r0817

Showing 2 responses by kijanki

I'd like to expand a little on Al's excellent (as usual) post. Every cable has a characteristic impedance. This impedance depends on the cable's geometry and dielectric and, for a lossless line, can be simplified as Z0 = sqrt(L/C), where L and C are the inductance and capacitance per unit length. When this impedance differs from the impedance of the gear the cable is connected to, we get a transition echo, reflected from the point of impedance change back toward the output. The severity of this echo depends on the amount of impedance mismatch and on the slew rate of the transitions in the digital signal. The echo may reflect many times inside the cable, colliding with the original transition or the next one. Such a collision changes the shape of the transition from a smooth swing to a jagged one. Jaggies in the transition affect the moment in time when the logic-level change is recognized at a certain threshold voltage, resulting in timing jitter that the D/A converter turns into noise.

Slow transitions would help to reduce this effect but make the system more susceptible to similar jaggies induced by noise that is either picked up by the cable or exists in the gear itself. Very fast transitions reduce the effect of jaggies (shorter time = shorter time variation) but require a better match between cable and gear impedance. 75 ohms and 110 ohms are agreed-upon standards, so that we know what we're matching to, but it could be any number: an 85-ohm cable is fine as long as the gear also happens to be 85 ohms. That's why it is all system dependent. A cable that is perfect in one system might work poorly in another.
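To put rough numbers on that, here is a small Python sketch. The per-meter L and C values are ballpark figures for a nominally 75-ohm coax, and the 85-ohm-cable-into-75-ohm-gear mismatch is just an assumed example, not a measurement of any particular product:

    import math

    def characteristic_impedance(L_per_m, C_per_m):
        # Lossless-line approximation: Z0 = sqrt(L/C)
        return math.sqrt(L_per_m / C_per_m)

    def reflection_coefficient(Z_load, Z0):
        # Fraction of each incident edge reflected at an impedance change
        return (Z_load - Z0) / (Z_load + Z0)

    # Assumed ballpark values: ~377 nH/m and ~67 pF/m for a 75-ohm coax
    Z0 = characteristic_impedance(377e-9, 67e-12)
    print(round(Z0, 1))                                    # ~75.0 ohms

    # An 85-ohm cable feeding a 75-ohm input reflects ~6% of each edge
    print(round(reflection_coefficient(75.0, 85.0), 3))    # ~ -0.063

Even that small mismatch means a few percent of every edge bounces back and forth, which is harmless until a returning echo lands on top of a transition.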

Let me try to explain jitter. Imagine a sine wave created by a series of many equally spaced dots (each dot corresponds to one D/A conversion), like a dotted line. Connecting the dots together results in a smooth sine wave; that's what the filter does. Now make the time distance between the dots uneven (alternating shorter and longer) and connect the dots again (keep each dot's amplitude; move it only horizontally in time). The sine wave becomes less smooth. It has jaggies, as if another frequency were riding on top of it. That's what jitter does on the analog side: it creates additional frequencies of very small amplitude. With music (a lot of frequencies) jitter creates a lot of additional unwanted frequencies - a noise that is present only when the music is present. It shows up as a lack of clarity. If you look at this sine wave again you'll agree that the size of the jaggies grows with the amplitude of the original sine wave, so jitter-induced noise is proportional to the loudness/level of the music. Digital is not only 0s and 1s but also their moment of arrival, unless the music is transferred without timing, as data (hard disk, WiFi, Ethernet, etc.). Eventually the timing has to be recreated for D/A conversion, introducing the possibility of jitter.
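That dotted-line picture is easy to simulate. A minimal Python sketch, assuming a 1 kHz tone, a 44.1 kHz sample rate, and 1 ns RMS of clock jitter (all illustrative numbers):

    import numpy as np

    fs = 44100.0        # sample rate in Hz (assumed)
    f = 1000.0          # test tone in Hz (assumed)
    jitter_rms = 1e-9   # clock jitter in seconds RMS (assumed)

    n = np.arange(44100)
    t_ideal = n / fs
    # Jittered clock: each conversion instant is displaced slightly in time
    t_jittered = t_ideal + np.random.normal(0.0, jitter_rms, n.size)

    ideal = np.sin(2 * np.pi * f * t_ideal)
    # Evaluate the sine at the displaced instants; to first order this is
    # the same error as keeping each dot's amplitude and sliding it in time
    jittered = np.sin(2 * np.pi * f * t_jittered)

    error = jittered - ideal
    print("error RMS:    ", error.std())
    # First-order theory: error ~ 2*pi*f*jitter_rms times signal level,
    # so the noise scales with frequency and loudness - gone in silence
    print("predicted RMS:", 2 * np.pi * f * jitter_rms / np.sqrt(2))

The measured and predicted values agree, and both scale with the tone's amplitude and frequency, which is exactly the "noise only when music is present" behavior described above.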
A/D conversion also suffers from jitter. Its artifacts become embedded in the digital file and cannot be removed. Many original recordings were digitized poorly with an unstable clock, and the only option is to do it again, if the analog tapes still exist.
In home systems, all three (AES/EBU, coax S/PDIF, and optical S/PDIF) are different forms of the S/PDIF protocol. One of the main differences between the protocols is that AES/EBU does not carry digital copy protection while S/PDIF does. What comes to the balanced (XLR) input of my DAC is the S/PDIF protocol, which shouldn't really be called AES/EBU. To avoid confusion I would also call them by the type of connection/connector: unbalanced (or coax), balanced (or XLR), and optical (or Toslink).
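For the curious, that consumer-versus-professional distinction is signaled in the very first channel-status bit of the stream. Here is a minimal Python sketch decoding channel-status byte 0, with the bit layout following the consumer (IEC 60958-3) format as I understand it; treat it as illustrative rather than a reference decoder:

    def decode_status_byte0(b):
        # Bit meanings per the consumer channel-status layout (from
        # memory, simplified) - verify against the spec before relying on it
        return {
            "professional": bool(b & 0x01),  # 0 = consumer (S/PDIF), 1 = professional (AES/EBU)
            "non_audio":    bool(b & 0x02),  # 1 = data (e.g. compressed), 0 = linear PCM
            "copy_permit":  bool(b & 0x04),  # SCMS copy bit (consumer format only)
            "emphasis":     (b >> 3) & 0x07,
            "mode":         (b >> 6) & 0x03,
        }

    # Example: a typical consumer frame with copying permitted
    print(decode_status_byte0(0b00000100))

The professional (AES/EBU) channel-status block simply has no SCMS field, which is why the copy-protection difference exists at all.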