Does the quality of a digital signal matter?


I recently heard a demonstration where a CD player was played with and without being supported by three Nordost Sort Kones. The difference was audible to me, but did not blow me away.

I was discussing the Sort Kones with a friend of mine who is an electrical engineer and also a musical audio guy. It was his opinion that these items could certainly make an improvement in an analogue signal, but shouldn't do anything for a digital signal. He said that as long as the component receiving the digital signal can recognize a 1 or 0 then the signal is successful. It's a pass/fail situation and doesn't rely on levels of quality.

An example that he gave me was that we think nothing of using a cheap CDRW drive to duplicate a CD with no worry about the quality being reduced. If the signal isn't read in full an error is reported so we know that the entire signal has been sent.

I believe he said that it's possible to show that a more expensive digital cable is better than another, but the end product doesn't change.

There was a test done with HDMI cables that tested cables of different prices. The only difference in picture quality was noted when a cable was defective and there was an obvious problem on the display.

I realize that most use analogue signals, but for those of us who use a receiver for our D/A, does the CD player's quality matter? Any thoughts?
mceljo
.....original CD was not readable and your rip is not "bit perfect".

"Bit perfect" always reminds me of "bits are bits". :-)

In both cases, when it comes to audio quality, this is not really true (it's irrelevant), IMHO!

Best,

Alex Peychev
www.aplhifi.com
Above when I say "not sure" - what I mean is that the software can tell the drive what to do and I don't know if MAX software is telling the drive "not to interpolate" uncorrectable errors. Some software will do this and then you get an error report during ripping, so you know for sure that the original CD was not readable and your rip is not "bit perfect".
Shadorne - I'm not sure how this interpolation works. Is it happening also when I use program that rips CD as data (like MAX) - I hope not. Do you know?

Not sure if the software or drive will conceal an error or not, but from what I understand - only damaged (scratched) CDs are likely to require interpolation, as the error correction (although less rigorous than for data CDs) should be more than enough for well-looked-after CDs. The CD error rate is very low - certainly much lower than the number of glitches coming out of the studio and onto the CD master. Only a few out of one thousand CDs should require interpolation in a few places (when in new and unscratched condition).
I am a believer in the audibility of jitter. Here is another link to some useful information about jitter. I have also found a lot of useful discussion of this topic in the A'gon archives.

My conversion to believing in the audibility of jitter occurred when I added a reclocker to my system, which discards the timing data from a S/PDIF signal and reclocks it using a high precision clock. The addition of the reclocker resulted in significant improvement in perceived resolution, among other things.
Jea48 - I currently have the SACD player hooked up in 2 channel analog mode so I'm not even listening to the multi-channel versions. I think most, if not all, SACDs have stereo versions. I don't really care about the multi-channel because my rear channel speakers are nowhere near the quality of my Focals.

2 channel standard vs. 2 channel SACD is what I'm comparing. I'm planning to swap the disk and compare the result of standard CDs on both players, but a Toslink vs. analog comparison isn't really equal.
The solution is to get a better CD player that will read the data without affecting the quality of D to A.
Shadorne

Shadorne,

I'll let the two dealers, as well as other friends that have heard the difference, know that they need to get better CDPs.

Best regards,
Jim
Shadorne - I'm not sure how this interpolation works. Is it happening also when I use program that rips CD as data (like MAX) - I hope not. Do you know?
I knew when I posted it,... I would get that answer.... All I can say to you is, have you tried it? I suggest you try it for yourself.

The science is pretty clear on this. If you suffer issues then the difference in the disc material is causing poor D to A conversion. (The machine gets the digital info off the disc but is unable to convert it reliably without audible distortion.)

I am not surprised that there were differences - there is a lot of inadequate equipment out there - much of it at the high end. Jitter only became well understood in the mid 90's and it only takes poor isolation of the servo motor driving the CD lens from the clock driving the DAC to get distortion due to jitter. Since the lens servo and motor will be acting in a cyclical pattern (highly likely since it is reading a rotating disc) then these patterns can mess up the sound quality of the CD player - if you replace the disc and it behaves a little differently when rotating then bingo you get a slightly better or worse sound.

The solution is to get a better CD player that will read the data without affecting the quality of D to A.
Kijanki,
The Reed Solomon interleave is actually quite robust; however, CD players will indeed "interpolate" as a last resort when data is missing.

It is true that you might not know when your CD player is interpolating unless the disc is quite badly damaged and you get pops or clicks. Normally you should be well aware of errors when music is copied with a PC with good software (sometimes you need to set the software to warn you about read errors).

I have had some CD's that suffered CD rot - they played on a CD player but could not be copied without error on a PC - to me this means that they are beyond repair and the data cannot be recovered - but this problem is possible with any digital format that gets really badly corrupted or damaged. Under normal use with good quality discs one should not normally run into problems.
If redbook manufacture is supposed to commence play at exactly 2.00 seconds from the true zero pressed on the CD, aren't we discussing data here? Programs like the forgotten Perfect Rip take this very seriously, correcting for sub-channel data in the process etc. Bit correction for offsets is what it is all about. I have discovered that Nero CD Speed Tool does report offsets correctly, just giving a total figure. Perfect Rip confirms Nero and this means that the AccurateRip database is incorrect. All offsets given there are +30 samples out for every drive listed. I no longer get silly anomalies with ripping; they are perfect every time. I might add that voltage brown-outs and spikes badly affect ripping also. You reap what you sow.
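For what it's worth, the "offset" business above can be illustrated in a few lines of Python (a sketch of the general idea only, not of how Perfect Rip or AccurateRip actually work): two rips made on drives with different read offsets should be sample-identical once one is shifted by that offset.

```python
import numpy as np

def rips_match_with_offset(rip_a, rip_b, offset_samples):
    """True if rip_b contains the same samples as rip_a, just shifted
    later by offset_samples (a sketch of the general idea, not of how
    Perfect Rip or AccurateRip actually work)."""
    shifted = rip_a[offset_samples:]
    n = min(len(shifted), len(rip_b))
    return np.array_equal(shifted[:n], rip_b[:n])

# hypothetical data: rip_b starts 30 samples into rip_a
rip_a = np.arange(1000, dtype=np.int16)
rip_b = rip_a[30:].copy()
print(rips_match_with_offset(rip_a, rip_b, 30))   # True
```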
Since the CD copy should normally be a bit perfect copy (you can confirm this easily using a computer), you may want to invest in a better CD player or DAC. What you are experiencing are differences in sound quality due to small differences in the media disc such as weight, color, coating, central hole alignment, balance of the disc etc. - normally a good player will be immune to such differences - it should read the bits correctly without affecting the built-in DAC and low jitter clock: it should result in identical sound.
Shadorne

I knew when I posted it,... I would get that answer.... All I can say to you is, have you tried it? I suggest you try it for yourself.

I have a couple of dealers in my area that said the same thing as you. In both cases I proved them wrong on their own systems. In fact the more revealing the system the easier it is to hear the difference.

In my experience the "Exact Copy" from a home PC lacks the body and fullness of a copy made on a decent stand-alone CDR recorder..... A couple of key things to listen for are female vocals and piano.

The manufacturer of the blank CDR will also make a difference.
Shadorne - A CD copy should be bit perfect only if the CD is copied as data and not as a music CD. For example - with iTunes I can make a copy of a CD that is not readable as data using MAX (with the "do not allow to skip" option). You can make 10 copies of a scratched CD with iTunes or a similar program and every copy will be different (because many sectors might be on the edge of readability).

I use MAX for ripping but have a few CDs that required the use of iTunes to rip them because MAX was failing after multiple attempts.

The worst part is that the Cross-Interleaved Reed-Solomon error correction code approximates/interpolates uncorrectable data. Read what I found in Wikipedia under "ripping":

"CD audio has two major design constraints that make it difficult to obtain accurate copies in the form of a standard digital file. First, the system is designed to provide audio in real time in order to ensure continuous playback without gaps. For this reason, it does not provide a reliable stream of data from the disc to the computer.

Secondly, the designers felt that it would be preferable for major scratches in the disc to be covered up rather than resulting in total failure. Normally, an error correction system such as Reed Solomon would provide either a perfect copy of the original error-free data, or no result at all. However, CD audio's Cross-interleaved Reed-Solomon coding includes an extra facility that interpolates across uncorrectable errors. This means that the data read from an audio CD may not in fact be a faithful reproduction of the original.

Another practical factor in obtaining faithful copies of the music data is that different CD drives have widely varying quality for reading audio. Some drives are thought to deliver extremely accurate copies while others may do little or no error correction and even misreport error correction information."
Not to open up another can of worms, but there are differences in sound heard from a blank CDR burned on a home computer and one burned on a stand-alone CDR recorder. And yes, even when using "Exact Copy".

Since the CD copy should normally be a bit perfect copy (you can confirm this easily using a computer), you may want to invest in a better CD player or DAC. What you are experiencing are differences in sound quality due to small differences in the media disc such as weight, color, coating, central hole alignment, balance of the disc etc. - normally a good player will be immune to such differences - it should read the bits correctly without affecting the built-in DAC and low jitter clock: it should result in identical sound.
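Shadorne's "you can confirm this easily using a computer" can be done, for example, by hashing the decoded PCM of the two rips. A minimal sketch using only the Python standard library; the file names are hypothetical and both rips are assumed to have been decoded to uncompressed WAV with the same read offset.

```python
import hashlib
import wave

def pcm_md5(path):
    """MD5 of the raw PCM frames in an (uncompressed) WAV file, so the
    comparison ignores header/tag differences and looks only at audio."""
    with wave.open(path, "rb") as w:
        return hashlib.md5(w.readframes(w.getnframes())).hexdigest()

# file names are hypothetical
rip_a = pcm_md5("rip_from_pc.wav")
rip_b = pcm_md5("rip_from_recorder.wav")
print("bit-perfect match" if rip_a == rip_b else "the rips differ")
```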
Mceljo,

Before you run out and buy an SACD player you might want to start another thread. Like which do you prefer to listen to more, multi-channel SACD or Redbook 2-channel CDs? Which gets played more often? Which do you have more of, CDs or SACDs?

>>>>>>>>>>>>>>>>>>>>

An example that he gave me was that we think nothing of using a cheap CDRW drive to duplicate a CD with no worry about the quality being reduced. If the signal isn't read in full an error is reported so we know that the entire signal has been sent.
Mceljo

Mceljo,

Not to open up another can of worms, but there are differences in sound heard from a blank CDR burned on a home computer and one burned on a stand-alone CDR recorder. And yes, even when using "Exact Copy".
I have not as I've only had it for about a day, but I'll give it a shot. I wouldn't purchase another CD player for the single purpose of making regular CDs sound slightly better, but if I determine that SACDs are a significant improvement it would be worth the upgrade that may do both.

Are you suggesting that a large portion of the SACD improvement is the player itself and not the additional data?
Why would the SACD player make standard CDs sound different?
Did you listen and compare the sound from the two players using "standard CDs"?
"Apples and oranges......

I suggest you compare the two players just using the CDs you have now. You should be able to hear a difference between the two players."

Why would the SACD player make standard CDs sound different? The SACD has a much higher sampling rate that should be responsible for the vast majority of any difference. I have burned some standard CDs from the hybrid SACDs and I'll probably be getting an SACD player fairly soon. Everything is more crisp and detailed.
Paulsax: Once the digital signal is converted back to an analog signal there can be any amount of additional amplification, filtering, etc. within the CDP prior to feeding the analog signal to the preamplifier. The effect of this additional signal processing is what is important and would certainly affect the sound, and indeed it would be likely to have been designed for just that purpose. It is this post-conversion processing that colors the sound, not the completely insignificant effect of jitter.
Paul, I believe the thread you are referring to is dealing with a one-box cd player (a Music Hall CD25.2).

Although it has a digital output and can be used as a transport in conjunction with a separate dac, presumably the discussion pertains to its analog outputs, which are generated by its own internal dac, and processed through its analog circuitry. Given that, as Mapman indicated, tonality and color can certainly be affected by the design and quality of the dac and the analog circuitry in the player.

Jitter is the predominant consideration just in the digital parts of the signal path, up to and including the dac chip. And it becomes a MUCH more critical consideration when the transport and dac are in separate components, because of the impedance matching, reflection, noise, clock recovery, and other interface-related issues that have been discussed above.

Best regards,
-- Al
Paulsax - IMHO everything plays a role. In addition to the sound of jitter that was described in the mentioned Stereophile article there is digital signal processing (oversampling, non-oversampling, upsampling) and the filtering algorithm, type of DAC (traditional or sigma-delta, voltage or current output, single or dual differential etc.), the particular DAC selected (they sound different) and the chosen update rate, type of current-to-voltage conversion (transformer, tube, op-amp), analog electronics (tubes, op-amps, discrete), type of op-amp or tubes, quality of components and PCB, quality of power supply. It is an endless list. Even something trivial like a "mute" circuit can be responsible for loss of sound quality.

At the end sound in your system is the only thing that matters.
Let me ask a related but slightly different question. I read often of a player being colored. There is a thread right now about a Music Hall player being slightly "forward" and "bright". All things equal, and in a world where jitter is the dominant error (or am I misunderstanding the tone of this thread?), how does a "color" creep in? Jitter seems like a strictly time-domain issue and the coloration of music would seem like a level vs. frequency issue. If so, that may imply that the DAC or perhaps some electronics issue outside of bit reading is at play. Is that a reasonable assessment?

Sorry about the dumb questions. I'm your boy for mechanical engineering / physics / material science but I'm just about at the "bang the rocks together" stage with digital electronics!
I picked up an SACD player from a friend today to borrow for a few days. We'll see how much difference there is once I get copies of a single album in both formats. It'll probably be Nora Jones since I already have the CD and know it's a quality recording.
Apples and oranges......

I suggest you compare the two players just using the CDs you have now. You should be able to hear a difference between the two players.

Post back your findings.
.
One thing to note is that the information in one of the long and detailed articles linked in this discussion is 17 years old. My EE friend pointed out that a 2x CD player was a big deal 17 years ago. I would hope that many of the problems described have been reduced or solved by now. When it comes to electronics, 17 years is a very long time for technology to develop.

I picked up an SACD player from a friend today to borrow for a few days. We'll see how much difference there is once I get copies of a single album in both formats. It'll probably be Nora Jones since I already have the CD and know it's a quality recording.
It seems to me that the bit stream speed is independent of the bit content. If this is correct then should not the jitter be either constant or possibly a function of the disc itself (like radial position or burn/pressing quality)?

That was assumed when CD players were first invented. However, many things can affect the accuracy of the clock signal in the DAC. And even the bitstream is variable - error bursts and misreads may be cyclical and perhaps only the digital "preamble" is fairly consistent - so the data may vary in certain repeating patterns.

Provided jitter is random, it is in general a negligible problem. However, when patterns occur - such as power supply oscillations due to cyclical laser servo movements to track the pits on the rotating disc - then we can get non-random jitter. Another major cause of non-random jitter may be the Phase Locked Loop between the master and slave clock - in this case, the very act of trying to keep the slave clock in time with the master causes oscillatory patterns as the slave hunts back and forth trying to keep in time. These repetitive patterns in clock timing errors cause new oscillatory audio signals to appear in the analog music coming out of the DAC - sometimes called sidebands - non-harmonically related signals. It is these very small (-40 dB) but 'correlated' sounds that become audible - usually as hash or lack of clarity in the upper midrange and HF (although this may significantly affect the perceived sound of percussive instruments with low frequencies - like piano or drums - due to the way we "hear").

Anyway - jitter is an analog problem - it only appears upon conversion to analog or, up front, when converting analog to digital.

If you have a perfect clock then you will not have jitter.
DACs have evolved to have better clocks. Early designs like Meitner used patterns in the digital data called the "preamble" to try and achieve a more accurate clock. Others like Lavry used algorithms to maintain a very slow correction pattern on the slave clock that could be filtered out. Since about 2002 the problem has been substantially addressed by "asynchronous DACs" - basically these types of DACs ignore the master clock altogether - and in these designs the jitter is totally determined by the clock quality in the DAC alone and nothing upstream of the DAC.
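A rough numerical illustration of the correlated sidebands described above (my own sketch, with assumed jitter numbers, not anything from the thread): sample a pure tone at instants disturbed by a small sinusoidal timing error and look at the spectrum; the error energy shows up at f0 ± f_jitter rather than as harmonics.

```python
import numpy as np

fs = 44100        # sample rate, Hz
f0 = 10_000       # test tone, Hz
fj = 1_000        # jitter modulation frequency, Hz (assumed)
tj = 2e-9         # peak timing error of 2 ns (assumed)
n = 1 << 16

t_ideal = np.arange(n) / fs
t_jittered = t_ideal + tj * np.sin(2 * np.pi * fj * t_ideal)

# A converter clocked with periodic jitter effectively evaluates the
# tone at slightly wrong instants:
x = np.sin(2 * np.pi * f0 * t_jittered)

spectrum = np.abs(np.fft.rfft(x * np.hanning(n)))
spectrum_db = 20 * np.log10(spectrum / spectrum.max())
freqs = np.fft.rfftfreq(n, 1 / fs)

# Expect sidebands at f0 +/- fj, roughly -84 dB below the tone for
# these assumed numbers (level ~ pi * f0 * tj) -- correlated with the
# signal, not harmonically related to it.
for f in (f0 - fj, f0, f0 + fj):
    k = np.argmin(np.abs(freqs - f))
    print(f"{freqs[k]:8.1f} Hz   {spectrum_db[k]:6.1f} dB")
```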
In Jea48's linked article the implication was that the level of jitter was related to or at least different for different frequency levels of sound (presumably after the DAC). Someone straighten me out on this. It seems to me that the bit stream speed is independent of the bit content. If this is correct then should not the jitter be either constant or possibly a function of the disc itself (like radial position or burn/pressing quality)?
Paul, you raise a good question, and I believe that the key to the answer is that jitter should be thought of as noise in the time domain.

As you will realize, an analog signal will always have some amount of noise riding on it, which causes its amplitude to fluctuate to some degree, in a manner which is to some extent random. That noise will typically consist of a great many frequency components, all mixed together. Essentially a mix of ALL frequencies within some finite bandwidth, with different frequencies having different magnitudes.

Similarly, the random or pseudo-random timing fluctuations that characterize jitter in a digital signal will have a spectrum of a great many jitter frequencies all mixed together. In other words, there may be slow fluctuations in the timing, that are of some magnitude, accompanied by rapid fluctuations in the timing, that are of other magnitudes.

Some frequency components of the jitter spectra can be data dependent, because a major contributor to the electrical noise that is a fundamental cause of jitter is the rapid transitions of transistors and integrated circuits between the 0 and 1 states, and vice versa.

BTW, re the references in your two posts to disk speed, radial position, etc., keep in mind that fluctuations and inaccuracies in the rotational speed of the disk (which figure to be far larger in magnitude than the electronic jitter we have been discussing) are, or at least should be, taken out by subsequent buffering in the transport's electronics.

Best regards,
-- Al
Paulsax - Jitter is a function of CD pressing quality, transport quality, digital cable quality, jitter suppression scheme, electrical noise etc. It is a function of the whole system. Even if we assume that the amount of jitter is constant at a given moment, the effects of jitter after D/A conversion are proportional to the magnitude of the analog signal. The second page of the Stereophile article (thank you Jea48) describes audible effects of jitter. They describe loss of detail and a change in the sound of instruments (harsh sounding violins) that might be the effect of burying lower level harmonics in noise. Effects that they describe are often called "digititis".

Some people believe that as long as exact digital data gets to the DAC, timing doesn't matter. Try drawing a sinewave on moving paper by marking predefined points (horizontal lines on the paper to make it easier) at exact time intervals and then joining them. If the intervals are not exact the resulting sinewave won't be smooth - it will be jagged. A horizontal/time error gets converted to a vertical/value error.

Bob - Yes, the error correction scheme will take care of most of the problems, but the scheme used (Cross-Interleaved Reed-Solomon) can only correct about 4000 bits of data (about 0.1" of track). If you have a tiny scratch along the disc longer than 0.1", correction fails (only for this error). The CDP won't try the same sector again, resulting in loss of sound quality. On top of it, the transport might have poor tracking (skip tracks) because of the CD vibrating, poor light reflection etc.

I like the fact that a CD's surface can be re-polished (that's what our library did to all CDs). I tried to re-polish an LP once but for some reason it didn't work.
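Kijanki's moving-paper analogy can be put into rough numbers: for a sine wave, the worst-case amplitude error produced by a timing error Δt is about the slew rate times Δt, i.e. 2·pi·f·A·Δt. A back-of-the-envelope sketch with assumed jitter values, just to show the scale:

```python
import math

def worst_case_error_db(freq_hz, jitter_s, amplitude=1.0):
    """Worst-case amplitude error (relative to a full-scale sine)
    caused by sampling a sine of freq_hz with a timing error of
    jitter_s seconds: error ~ slew rate * dt = 2*pi*f*A*dt."""
    err = 2 * math.pi * freq_hz * amplitude * jitter_s
    return 20 * math.log10(err)

# assumed jitter magnitudes, just to show the scale of the effect
for jitter in (10e-9, 1e-9, 100e-12):
    print(f"{jitter * 1e12:7.0f} ps of jitter at 10 kHz -> "
          f"{worst_case_error_db(10_000, jitter):6.1f} dBFS worst-case error")
```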
p.s. I had another thought : )

It seems to me that, when it comes to audio, we clearly don't know everything yet !
This is another one of those issues/questions that comes up now and then (like double-blind testing, differences in cables, etc), and gets talked about a lot for a while. The things that always seem true with the threads include: 1) very few people agree; and 2) people make fairly bold statements one way or the other (often without actual personal experience, e.g., having compared cables under *controlled* conditions)

If the question is "have you heard differences in the same system and same room, using transport A vs transport B?", my answer is "yes... definitely". (If one wants to disagree or argue with what I experienced, that's a "dead-end" I see no point in going down.) If you are asking "why?" or "how big a difference?", or "is it worth it?", etc... well, those are different questions.

p.s. While the question speaks of digital, the OP seems to forget (or not know?) that analog is involved in a CD player, at least one that is not using an external DAC.
Curious. In Jea48's linked article the implication was that the level of jitter was related to or at least different for different frequency levels of sound (presumably after the DAC). Someone straighten me out on this. It seems to me that the bit stream speed is independent of the bit content. If this is correct then should not the jitter be either constant or possibly a function of the disc itself (like radial position or burn/pressing quality)?
I believe he said that it's possible to show that a more expensive digital cable is better than another, but the end product doesn't change.

There was a test done with HDMI cables that tested cables of different prices. The only difference in picture quality was noted when a cable was defective and there was an obvious problem on the display.
Mceljo 7-19-10

Disagree.... A digital cable can and will make a difference.
http://www.tnt-audio.com/accessories/digitalcables_e.html

Not all transports sound alike. An example.....
http://www.stereophile.com/features/368/index8.html#
Entire text:
http://www.stereophile.com/features/368/index.html
Mceljo, with all due respect your friend seems to have missed my point.

My point was NOT that bit errors would occur in the link between transport and dac, due to logic threshold problems or due to any other reason. I would expect that any such interface that is not defective, and that is Walmart quality or better, will provide 100% accuracy in conveying the 1's and 0's from one component to the other.

My point in mentioning the logic threshold of the receiver chip was that variations in its exact value, within normally expectable tolerances, may affect whether or not the receiver chip responds to reflection-induced distortion that may be present on the edges of the incoming signal waveform. (By "edges" I mean the transitions from the 0 state to the 1 state, and from the 1 state to the 0 state). And thereby affect the TIMING of the conversion of each sample to analog.

Signal reflections caused by impedance mismatches, as I explained and as the article describes, will propagate from the dac input circuit back to the transport output, and then partially re-reflect back to the dac input, where whatever fraction of the re-reflection that is not reflected once again will sum together with the original waveform.

If the cable length is such that the time required for that round trip results in the re-reflection returning to the dac input when the original waveform is at or near the mid-point of a transition between 0 and 1 or 1 and 0, since the receiver's logic threshold is likely to be somewhere around that mid-point the result will be increased jitter.

Again, no one is claiming that bits are not received by the dac with 100% accuracy. The claim is that the TIMING of the conversion of each sample to analog will randomly fluctuate. The degree of that fluctuation will be small, and will be a function of the many factors I mentioned (and no doubt others as well), but there seems to be wide acceptance across both the objectivist and the subjectivist constituents of the audiophile spectrum that jitter effects can be audibly significant.

If your friend disagrees with that, he should keep in mind two key facts, which he may not realize:

1) The S/PDIF and AES/EBU interfaces we are discussing convey both clock and data together, multiplexed (i.e., combined) into a single signal.

2) The timing of each of the 44,100 conversions that are performed each second by the dac is determined by the clock that is extracted from that interface signal.

Best regards,
-- Al
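To put the round-trip timing Al describes into rough numbers (my own figures; the velocity factor and risetime are assumptions that vary with the actual cable and transport): signals travel along coax at roughly two-thirds the speed of light, so each metre of cable adds about 10 ns to the reflection's round trip.

```python
# Assumed numbers for illustration only: propagation velocity and
# risetime vary with the actual cable and transport.
C = 3.0e8                 # speed of light in vacuum, m/s
VELOCITY_FACTOR = 0.66    # typical coax velocity factor (assumed)

def round_trip_ns(cable_length_m):
    """Time for a reflection to travel dac input -> transport output
    -> back to the dac input, in nanoseconds."""
    return 2 * cable_length_m / (C * VELOCITY_FACTOR) * 1e9

for length_m in (0.5, 1.0, 1.5, 2.0):
    print(f"{length_m:3.1f} m cable: re-reflection arrives ~"
          f"{round_trip_ns(length_m):4.1f} ns after the original edge")

# Whether that arrival lands near the midpoint of a 0->1 or 1->0
# transition depends on the transport's risetime (often on the order
# of tens of nanoseconds, though this varies) and on the S/PDIF bit
# timing, which is the point Al makes about cable length above.
```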
Almarg - Here is a response from my EE friend that I've been discussing this topic with at work.

"One of the most important factors discussed is "the value of the logic threshold for the digital receiver chip at the input of the dac" which, and this is important, supersedes ALL OTHERS in properly designed electronic equipment. If it didn't, the computer you are typing on would not work, the key-strokes would get lost, data you receive over the internet would be incomplete, pixels would be missing from the image in your video screen--ALL of which operate at WAY higher frequencies than any CD audio signal. Compared to modern computers, digital audio is simply rudimentary. If the audio equipment cannot transmit or identify logic signals that are above the background noise (all other elements discussed fall into this category) than the equipment in question is simply junk. I could, in the digital electronics lab at school, design and build a digital data transmission device and associated data receiver that would operate at 1MHz (far above any audio signal, but low frequency for digital electronics) and not lose a single bit of data.

Again, everything mentioned is real and true, but IS NOT A FACTOR in properly designed and built equipment. It is FAR more applicable to things like cell phone and computer design, and if the electronics industry were unable to overcome all the factors discussed in mere audio equipment, then a working cell phone and 3GHz processor would simply be pipe dreams.

As far as the SPDIF issue addressed in the linked article is concerned, it too is correct, but not a factor in your system. If you think it might be, switch to an optical cable or HDMI and see if you can hear a difference. I bet not. The information getting to the DAC in your amplifier will be bit for bit identical. If not, you have broken equipment."
Jitter is not a problem with "digital" part of digital (the robust part). Jitter is part of the analog problem with digital and can be regarded as a D to A problem (or, in the studio an A to D problem). It is an analog timing problem whereby distortion can be introduced at the DAC/ADC stage because of drift in the clock. To accurately convert digital to analog or analog to digital requires an extremely accurate clock.

I stand by my statement that you can copy a copy of a digital signal, and repeat the copy with each subsequent copy thousands of times, with no degradation.

You cannot do this with any analog media - within ten to twenty copies of a copy the degradation becomes extremely audible (or visible in the case of a VHS cassette).

The evidence is that digital signals are extremely robust compared to analog.
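Shadorne's copy-of-a-copy point is easy to simulate (a toy sketch; the per-generation noise added in the "analog" case is an assumed figure, not a measurement): the digital copy stays bit-identical while the analog dub accumulates noise every generation.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 44100
master = np.sin(2 * np.pi * 1000 * np.arange(fs) / fs)   # 1 s test tone

digital = master.copy()
analog = master.copy()
for _ in range(20):                                      # 20 generations
    digital = digital.copy()                             # byte-exact copy
    analog = analog + rng.normal(0, 0.003, analog.size)  # assumed per-copy noise

def snr_db(reference, copy):
    noise = copy - reference
    return 10 * np.log10(np.sum(reference ** 2) / np.sum(noise ** 2))

print("digital copy still identical:", np.array_equal(digital, master))
print(f"analog copy SNR after 20 generations: {snr_db(master, analog):.1f} dB")
```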
I may be over simplifying this a bit, but it sounds like the proximity of the components that "read" the CD can have an effect on the analog signal created in the DAC. Would this be justification for a completely separate DAC?

How does this relate to a Toslink cable that is optical?
The points Kijanki made about timing, jitter, and reflections on impedance boundaries merit added emphasis and explanation, imo.

The S/PDIF and AES/EBU interfaces which are most commonly used to transmit data from transport to dac are inherently prone to jitter, meaning short-term random fluctuations in the amount of time between each of the 44,100 samples which are converted by the dac for each channel in each second (for redbook cd data).

As Kijanki stated, "Jitter creates sidebands at very low level (in order of <-60dB) but audible since not harmonically related to root frequency. With music (many frequencies) it means noise. This noise is difficult to detect because it is present only when signal is present thus manifest itself as a lack of clarity."

One major contributor to jitter is electrical noise that will be riding on the digital signal. Another is what are called vswr (voltage standing wave ratio) effects, that come into play at high frequencies (such as the frequency components of digital audio signals), which result in reflection back toward the source of some of the signal energy whenever an impedance match (between connectors, cables, output circuits, and input circuits) is less than perfect.

Some fraction of the signal energy that is reflected back from the dac input toward the transport output will be re-reflected from the transport output or other impedance discontinuity, and arrive at the dac input at a later time than the originally incident waveform, causing distortion of the waveform. Whether or not that distortion will result in audibly significant jitter, besides being dependent on the amplitude of the re-reflections, is very much dependent on what point on the original waveform their arrival coincides with.

Therefore the LENGTH of the connecting cable can assume major importance, conceivably much more so than the quality of the cable. And in this case, shorter is not necessarily better. See this paper, which as an EE strikes me as technically plausible, and which is also supported by experimental evidence from at least one member here whose opinions I respect:

http://www.positive-feedback.com/Issue14/spdif.htm

Factors which determine the significance of these effects, besides cable length and quality, include the risetime and falltime of the output signal of the particular transport, the jitter rejection capabilities of the dac, the amount of electrical noise that may be generated by and picked up from other components in the system, ground offsets between the two components; the value of the logic threshold for the digital receiver chip at the input of the dac; the clock rate of the data (redbook or high rez), the degree of the impedance mismatches that are present, and many other factors.

Also, keep in mind that what we are dealing with is an audio SYSTEM, the implication being that components can interact in ways that are non-obvious and that do not directly relate to the signal path that is being considered.

For instance, physical placement of a digital component relative to analog components and cables, as well as the ac power distribution arrangement, can affect coupling of digital noise into analog circuit points, with unpredictable effects. Digital signals have substantial radio frequency content, which can couple to other parts of the system through cables, power wiring, and the air.

All of which adds up to the fact that differences can be expected, but does NOT necessarily mean that more expensive = better.

Regards,
-- Al

P.S: I am also an EE, in my case having considerable experience designing high speed a/d and d/a converter circuits for non-audio applications.

"I've never owned a player with one, so I assume that error handling is not an issue."

I would not assume that all devices or software programs are designed to deliver optimal sound and hence all source bits in real time.

Some may take short cuts and have less robust error correction if assuring optimal sound quality is not a primary goal.

Digital devices that are designed to enable optimal sound quality should be able to accomplish that goal by assuring that all source bits available are in fact transmitted and utilized, but there is nothing that guarantees all devices or software programs in play do this.
It's interesting that the majority of arguments for the quality of a digital signal making a difference are directed at the D/A conversion process which is exactly where everyone agrees that the signal can be influenced.

It is also interesting that another Electrical Engineer (EE) agrees with what my friend explained to me.
"It's an analogue process using digital factors."

Or more precisely, it's an analogue process (all real world physical processes, including electronics and sound, are analog in nature) but one which uses digital signal encoding and processing techniques.
It can matter long before the CD skips. If the error correction is struggling for whatever reason the sound quality will be affected.
Yes, it matters.

All the bits have to be retrieved and transmitted accurately.

Then the bits comprising each sample have to be converted to the proper analog voltage level by the DAC at precisely the right time.

Variations in these two fundamental operations will affect sound quality to some extent.

The good news is that the technology needed to do this reliably, and within the tolerances needed to produce good results, at least with Redbook CD digital, is becoming quite mature and is not radically expensive. Different devices will produce different results, however, and the differences are often audible.