Of course not; these expensive CD players are just a rip-off. Since the cheapest ones at Walmart are already perfect, what use is there in paying more than $59.95 for one?
That's exactly the question. I would suspect that the vast majority of audiophiles use analogue connections to avoid ever having to make the transition into a digital format.
If a CD player is doing nothing more than reading the digital data (i.e. ones and zeros) on the disk and sending the information to another component for the D/A conversion, what could affect the signal?
Do you insist on having a $1,000 CD drive in your computer to ensure an accurately copied disk? How about burning a disk on an external drive using a USB cable: is there any risk of failing to get a perfect duplicate of the original without getting an error?
From How Stuff Works:
"In analog technology, a wave is recorded or used in its original form. So, for example, in an analog tape recorder, a signal is taken straight from the microphone and laid onto tape. The wave from the microphone is an analog wave, and therefore the wave on the tape is analog as well. That wave on the tape can be read, amplified and sent to a speaker to produce the sound.
In digital technology, the analog wave is sampled at some interval, and then turned into numbers that are stored in the digital device. On a CD, the sampling rate is 44,100 samples per second. So on a CD, there are 44,100 numbers stored per second of music. To hear the music, the numbers are turned into a voltage wave that approximates the original wave."
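The sampling described in that quote can be sketched in a few lines of Python. This is a toy illustration of sample-and-quantize, not a model of any real A/D converter; the 44,100 Hz rate and 16-bit depth are the standard CD (redbook) figures.

```python
import math

SAMPLE_RATE = 44_100                   # CD sampling rate, samples per second
BIT_DEPTH = 16                         # CD stores each sample as a 16-bit integer
MAX_CODE = 2 ** (BIT_DEPTH - 1) - 1    # largest positive code: 32767

def sample_and_quantize(freq_hz, n_samples):
    """Sample a sine wave and round each sample to a 16-bit integer code."""
    codes = []
    for n in range(n_samples):
        t = n / SAMPLE_RATE
        analog = math.sin(2 * math.pi * freq_hz * t)   # "analog" value in [-1, 1]
        codes.append(round(analog * MAX_CODE))          # stored as a number
    return codes

# First few stored numbers for a 1 kHz tone:
codes = sample_and_quantize(1000, 5)
```

Playback is the reverse: the DAC turns each stored number back into a voltage, producing the approximating wave the quote mentions.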
What this means is that until the numbers of the digital signal are converted back to an analog voltage wave via a D/A converter, the only thing that matters is that the signal be transferred accurately.
Maybe this is why my CD player recommends using a digital connection to my receiver rather than analog. At my equipment level, the preservation of the analog signal can't match bypassing the component altogether and "shortening" the path of the analog signal.
This is something laypeople never seem to grasp. The entire advantage of digital is just that: signal quality matters much less than with analog.
This is a FACT.
The idea of representing information as 1's and 0's means that information can be stored and transmitted with no loss - something that is IMPOSSIBLE to achieve with analog.
So I would say YES - signal quality is much less of a factor in digital than in analog.
In fact the biggest source of quality differences with digital audio is the conversion to analog - this is where differences are audible - in the quality of the D to A converter.
You can copy a CD with a cheap drive 1,000 times and it will be the same (a copy of a copy); however, it will sound better with a dedicated high-quality DAC or a good-quality CD player.
If Shadorne's argument is correct, then why should the quality of the DAC or CD player matter? Even cheap ones seem to measure very well. "Signal quality much less of a factor than in analog"? No wonder my LPs sound so much better. Why do transports make such a difference? A friend of mine didn't believe they would until he heard different ones on his system. Why do the best CD playback systems cost so much unless they are susceptible to degradations just as analog is?
CDs are not 0s and 1s. They are pits burnt into the metal layer, which are measured for length by the laser and then converted into a digital format.
These pits (or the transition) represent bits: a 1 or a 0. All digital information must be stored in analog form including what is on your computer hard drive. However, the digital approach allows the use of a threshold level or clear demarcation between a 1 and a 0 that does not exist in analog approaches.
Example of a digital scheme (not from a CD)
Signal level between -0.5 and +0.5 volts = 0. Signal level between +0.51 and +1.5 volts = 1.
This means you can have a lot of analog error or noise in the media and still get a perfect translation of the data as exactly what it should be - a 1 or 0.
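That noise margin can be demonstrated with a short sketch. This is a toy model of the hypothetical scheme above, simplified to 0 V / 1 V levels with the threshold at 0.5 V; it is not an actual CD modulation scheme.

```python
import random

THRESHOLD = 0.5  # volts: below = 0, above = 1 (hypothetical scheme)

def transmit(bits, noise_amplitude):
    """Send bits as 0 V / 1 V levels with random analog noise added."""
    return [b + random.uniform(-noise_amplitude, noise_amplitude) for b in bits]

def receive(levels):
    """Recover bits by comparing each noisy level against the threshold."""
    return [1 if v > THRESHOLD else 0 for v in levels]

random.seed(0)
original = [1, 0, 1, 1, 0, 0, 1, 0]
noisy = transmit(original, 0.4)       # up to 0.4 V of noise on every bit
assert receive(noisy) == original      # data still recovered perfectly
```

As long as the noise stays inside the margin, every generation of this process yields exactly the original bits, which is the whole point of the threshold.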
If you add parity bits or polynomial redundancy check bits to the data, you can improve the robustness further (allowing detection of data errors, or even recovery of completely missing data).
Using the same example, compare this to an entirely analog approach, where the difference between 0.0 and 0.4 volts may be significant.
If Shadorne's argument is correct then why should the quality of the DAC or CD player matter?
I think I covered this:
In fact the biggest source of quality differences with digital audio is the conversion to analog - this is where differences are audible - in the quality of the D to A converter.
Clock & converter accuracy as well as the quality of the analog circuitry in the output stage can still make a difference.
However, digital eliminates the problems of media storage degradation and analog read errors from media (dust, feedback, surface noise, pressing imperfections, pre-echo, poor channel separation, the lack of dynamic range of analog storage methods, etc.).
Keep in mind that while digital info is just 0s and 1s, there also has to be a timing element that ensures the bits are in the correct place. It's easy to get them in order, but if your disk is spinning too fast then your read is too fast, and so on. Mind you, I'm not advocating anything, as I am suspicious of so many things here, but I could see timing perhaps being affected by cheap equipment. One problem with isolation is that there is a delay between setups, and the listener often knows which is which. Are there any blind tests out there with multiple listeners, under statistical or sampling control?
I'm glad to see some objective insight on this subject. Keep in mind that I'm making a clear distinction between the two cases of having analog vs. digital being output from the CD player.
In my mind, a clean analog system would be the following:
(1) turntable - pre-amp - amp - speakers
In the digital world it would look like one of the following:
(1) CD player (DAC) - pre-amp - amp - speakers
(2) CD player - separate DAC - pre-amp - amp - speakers
(3) CD player - integrated DAC/amp / receiver
I suspect that having an analog signal go through my home theater receiver would probably cause enough degradation of the signal quality to nullify any advantage of an audiophile-grade CD player.
I do not think that timing would be an issue for CDs, as most players can read a disk much faster than is required. This is what is known as buffering, and we all know what happens on YouTube when the buffer isn't adequate.
My thought is that someone who will be using a home theater receiver (possibly other options depending on budget) would do better to put their money toward a better receiver and speakers than to invest in an expensive CD player and increase the number of things in the analog signal stream.
The issues associated with degradation begin wherever the D/A conversion happens.
Ever hear a CD "skip"? That's when the quality matters.
The distortion from a bad reading of 0's and 1's will most certainly not be euphonic.
So if your tone-deaf buddy/brother/significant other/etc. who thinks you're crazy to spend all that time and money on your stereo listens to your system and immediately tells you, "dude, your CD is broken," then you've got a problem with the quality of your digital signal. If such a person does not immediately point the finger at your CD player, then you don't have a problem with your digital signal.
Don't worry, you will find other problems with your stereo--I know I have. :)
The problem with the "just 1s and 0s" argument is that it simply doesn't hold up in practice. To repeat a story I have alluded to before: years ago, a large Japanese CD pressing firm sent [I think] HiFi News some different pressings of the same CD, some with standard material and some with a mix of materials that would cost slightly more, which they were attempting unsuccessfully to get the record companies to adopt. There was such a huge difference in sound between them that they had to load them into a computer to see if the data was the same. It was exactly the same. If digital is so foolproof, what made the difference? The laser system is a mechanical one and constitutes a change from analog to digital; the pits are not ones and zeros but REPRESENT ones and zeros, in the same way the grooves in an LP represent sound waves. Many mechanical factors can interfere with the ability to correctly read the pits and translate them into a digital signal.
Stanwal, the problem that you are describing is properly ascribed to the DAC or the output stage, just like Shadorne said above (twice).
Trust me, if the 0's and 1's were getting messed up, it would not be subtle, but horrible--like the sound of a skipping CD. Your computer would tell you that it can't read that CD, etc.
Horrible, catastrophic errors--those probably come from a bad reading of the 0s and 1s. Subtler, "audiophilic" errors--those probably come from an inferior DAC or output stage.
To the OP: Take it from another EE; your friend is correct. Everything that matters as to signal quality occurs before and after the digitization. You cannot do anything about what happens before the signal is digitized, so if you want to improve quality, concentrate on what happens afterwards. Moving the digital signal around is not a place to spend any effort.
Error checking and correction is very loose in CD players, since the disc has to be read in real time (dust, scratches). There are programs like MAX for the Mac that read a music CD as data, going back to the same sector multiple times until they find the right checksum. My CDP plays, and iTunes rips, CDs that MAX refuses to read or takes an extremely long time to read.
Digital data from a CDP is jittery (it contains jitter: noise in the time domain). Jitter creates sidebands at very low levels (on the order of <-60 dB), but they are audible because they are not harmonically related to the root frequency. With music (many frequencies), this amounts to noise. The noise is difficult to detect because it is present only when signal is present, and thus manifests itself as a lack of clarity. Jitter can be suppressed by asynchronous upsampling DACs (like the Benchmark DAC1) or reclocking devices. Jitter depends on the quality of the CDP's transport and power supply. A typical digital transition in a CDP is on the order of 25 ns, making it susceptible to noise (slow crossing of the threshold). High-quality transports can transition many times faster, reducing noise coupling but creating problems with reflections at cable characteristic-impedance boundaries (and therefore requiring a better digital cable).
Jitter in D/A playback can be suppressed, but jitter recorded in the A/D process stays forever. For some early A/D conversions, the only option is to convert again, if the analog tapes still exist.
Yes, it matters.
All the bits have to be retrieved and transmitted accurately.
Then the bits comprising each sample have to be converted to the proper analog voltage level by the DAC at precisely the right time.
Variations in these two fundamental operations will affect sound quality to some extent.
The good news is that the technology needed to do this reliably, and within the tolerances needed to produce good results, at least with redbook CD digital, is becoming quite mature and is not radically expensive. Different devices will produce different results, however, and the differences are often audible.
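The sensitivity of conversion timing can be estimated with a standard back-of-the-envelope figure: the steepest slope of a full-scale sine A·sin(2πft) is 2πfA, so a clock error of Δt seconds produces an amplitude error of up to 2πfA·Δt. A small, purely illustrative sketch:

```python
import math

def worst_case_jitter_error_db(freq_hz, jitter_s):
    """Worst-case amplitude error (relative to full scale) when a full-scale
    sine at freq_hz is converted with a clock that is off by jitter_s seconds.
    The maximum slope of sin(2*pi*f*t) is 2*pi*f, so error ~= 2*pi*f*jitter."""
    error = 2 * math.pi * freq_hz * jitter_s
    return 20 * math.log10(error)

# 1 ns of timing error on a full-scale 10 kHz tone gives an error
# roughly 84 dB below full scale:
err_db = worst_case_jitter_error_db(10_000, 1e-9)
```

This is why nanosecond-scale timing, which is irrelevant to getting the bits right, still matters at the moment of conversion: the error lands within a few dB of the level region where sidebands are claimed to become audible.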
LOL! More arguments, just like how many angels can dance on the head of a pin. So I'll toss in my crap...
The signal is NOT just zeros and ones. The signal IS made UP of zeros and ones, but each 16-bit sample actually represents one of 65,536 possible levels of info made up of 'zeros and ones', with all sorts of error correction and various algorithm stuff in those 'byte'-size clumps of zeros and ones.
The pits are NOT zeros and ones; the pits have various lengths, and the transition from land to pit is the switch from zero to one, WITH A TIMING FACTOR. How long it takes to go from zero to one, etc., MATTERS. Guess what: it's an analogue process using digital factors. Chew on that.
It's interesting that the majority of arguments for the quality of a digital signal making a difference are directed at the D/A conversion process which is exactly where everyone agrees that the signal can be influenced.
It's also interesting that another electrical engineer (EE) agrees with what my friend explained to me.
"I've never owned a player with one, so I assume that error handling is not an issue."
I would not assume that all devices or software programs are designed to deliver optimal sound, and hence all source bits, in real time.
Some may take short cuts and have less robust error correction if assuring optimal sound quality is not a primary goal.
Digital devices that are designed to enable optimal sound quality should be able to accomplish that goal by assuring that all source bits available are in fact transmitted and utilized, but there is nothing that guarantees all devices or software programs in play do this.
The points Kijanki made about timing, jitter, and reflections on impedance boundaries merit added emphasis and explanation, imo.
The S/PDIF and AES/EBU interfaces which are most commonly used to transmit data from transport to dac are inherently prone to jitter, meaning short-term random fluctuations in the amount of time between each of the 44,100 samples which are converted by the dac for each channel in each second (for redbook cd data).
As Kijanki stated, "Jitter creates sidebands at very low level (in order of <-60dB) but audible since not harmonically related to root frequency. With music (many frequencies) it means noise. This noise is difficult to detect because it is present only when signal is present thus manifest itself as a lack of clarity."
One major contributor to jitter is electrical noise that will be riding on the digital signal. Another is what are called VSWR (voltage standing wave ratio) effects, which come into play at high frequencies (such as the frequency components of digital audio signals), and which result in reflection back toward the source of some of the signal energy whenever an impedance match (between connectors, cables, output circuits, and input circuits) is less than perfect.
Some fraction of the signal energy that is reflected back from the dac input toward the transport output will be re-reflected from the transport output or other impedance discontinuity, and arrive at the dac input at a later time than the originally incident waveform, causing distortion of the waveform. Whether or not that distortion will result in audibly significant jitter, besides being dependent on the amplitude of the re-reflections, is very much dependent on what point on the original waveform their arrival coincides with.
Therefore the LENGTH of the connecting cable can assume major importance, conceivably much more so than the quality of the cable. And in this case, shorter is not necessarily better. See this paper, which as an EE strikes me as technically plausible, and which is also supported by experimental evidence from at least one member here whose opinions I respect:
Factors which determine the significance of these effects, besides cable length and quality, include the risetime and falltime of the output signal of the particular transport, the jitter rejection capabilities of the dac, the amount of electrical noise that may be generated by and picked up from other components in the system, ground offsets between the two components, the value of the logic threshold for the digital receiver chip at the input of the dac, the clock rate of the data (redbook or high rez), the degree of the impedance mismatches that are present, and many other factors.
Also, keep in mind that what we are dealing with is an audio SYSTEM, the implication being that components can interact in ways that are non-obvious and that do not directly relate to the signal path that is being considered.
For instance, physical placement of a digital component relative to analog components and cables, as well as the ac power distribution arrangement, can affect coupling of digital noise into analog circuit points, with unpredictable effects. Digital signals have substantial radio frequency content, which can couple to other parts of the system through cables, power wiring, and the air.
All of which adds up to the fact that differences can be expected, but does NOT necessarily mean that more expensive = better.
P.S: I am also an EE, in my case having considerable experience designing high speed a/d and d/a converter circuits for non-audio applications.
Jitter is not a problem with "digital" part of digital (the robust part). Jitter is part of the analog problem with digital and can be regarded as a D to A problem (or, in the studio an A to D problem). It is an analog timing problem whereby distortion can be introduced at the DAC/ADC stage because of drift in the clock. To accurately convert digital to analog or analog to digital requires an extremely accurate clock.
I stand by my statement that you can copy a copy of a digital signal and repeat the copy of each subsequent copy 1000's of times with no degradation.
You cannot do this with any analog media: within ten to twenty copies of a copy, the degradation becomes extremely audible (or visible, in the case of a VHS cassette).
The evidence is that digital signals are extremely robust compared to analog.
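That generational robustness can be simulated in a few lines. This is a toy model, not a model of any real format: "analog" copies simply accumulate noise, while "digital" copies re-decide 0/1 at a threshold each generation and re-transmit clean levels; the 0.5 V threshold and noise figures are arbitrary illustrative choices.

```python
import random

def analog_copy(signal, noise=0.01):
    """Each analog generation adds a little noise that is never removed."""
    return [v + random.gauss(0, noise) for v in signal]

def digital_copy(levels, noise=0.1, threshold=0.5):
    """Each digital generation re-decides 0/1 at the threshold, then
    re-transmits clean 0 V / 1 V levels, discarding accumulated noise."""
    bits = [1 if v > threshold else 0 for v in levels]
    return [b + random.uniform(-noise, noise) for b in bits]

random.seed(1)
master_bits = [0, 1, 1, 0, 1]
analog = [float(b) for b in master_bits]
digital = [float(b) for b in master_bits]
for _ in range(1000):                  # 1000 generations of copying
    analog = analog_copy(analog)
    digital = digital_copy(digital)

# The analog chain has drifted; the digital chain still decodes perfectly.
analog_drift = max(abs(v - b) for v, b in zip(analog, master_bits))
digital_bits = [1 if v > 0.5 else 0 for v in digital]
assert digital_bits == master_bits
```

The re-thresholding step is the whole trick: as long as per-generation noise stays inside the margin, errors never accumulate, whereas the analog chain has no way to shed them.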
Almarg - Here is a response from the EE friend that I've been discussing this topic with at work.
"One of the most important factors discussed is "the value of the logic threshold for the digital receiver chip at the input of the dac" which, and this is important, supersedes ALL OTHERS in properly designed electronic equipment. If it didn't, the computer you are typing on would not work, the keystrokes would get lost, data you receive over the internet would be incomplete, pixels would be missing from the image on your video screen--ALL of which operate at WAY higher frequencies than any CD audio signal. Compared to modern computers, digital audio is simply rudimentary. If the audio equipment cannot transmit or identify logic signals that are above the background noise (all other elements discussed fall into this category), then the equipment in question is simply junk. I could, in the digital electronics lab at school, design and build a digital data transmission device and associated data receiver that would operate at 1 MHz (far above any audio signal, but low frequency for digital electronics) and not lose a single bit of data.
Again, everything mentioned is real and true, but IS NOT A FACTOR in properly designed and built equipment. It is FAR more applicable to things like cell phone and computer design, and if the electronics industry were unable to overcome all the factors discussed in mere audio equipment, then a working cell phone and 3GHz processor would simply be pipe dreams.
As far as the SPDIF issue addressed in the linked article is concerned, it too is correct, but not a factor in your system. If you think it might be, switch to an optical cable or HDMI and see if you can hear a difference. I bet not. The information getting to the DAC in your amplifier will be bit for bit identical. If not, you have broken equipment."
Mceljo, with all due respect your friend seems to have missed my point.
My point was NOT that bit errors would occur in the link between transport and dac, due to logic threshold problems or due to any other reason. I would expect that any such interface that is not defective, and that is Walmart quality or better, will provide 100% accuracy in conveying the 1's and 0's from one component to the other.
My point in mentioning the logic threshold of the receiver chip was that variations in its exact value, within normally expectable tolerances, may affect whether or not the receiver chip responds to reflection-induced distortion that may be present on the edges of the incoming signal waveform. (By "edges" I mean the transitions from the 0 state to the 1 state, and from the 1 state to the 0 state). And thereby affect the TIMING of the conversion of each sample to analog.
Signal reflections caused by impedance mismatches, as I explained and as the article describes, will propagate from the dac input circuit back to the transport output, and then partially re-reflect back to the dac input, where whatever fraction of the re-reflection that is not reflected once again will sum together with the original waveform.
If the cable length is such that the time required for that round trip results in the re-reflection returning to the dac input when the original waveform is at or near the mid-point of a transition between 0 and 1 or 1 and 0, since the receiver's logic threshold is likely to be somewhere around that mid-point the result will be increased jitter.
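To put rough numbers on that round trip, one can compute the reflection's return time from the cable length, assuming a typical coax velocity factor of about 0.66 (actual cables vary; the 1.5 m length and 25 ns transition figure are taken from the discussion above):

```python
# Round-trip time of a reflection in a digital interconnect.
C = 3.0e8                  # speed of light in vacuum, m/s
VELOCITY_FACTOR = 0.66     # assumed propagation velocity factor for coax

def round_trip_ns(cable_length_m):
    """Time for a reflection to travel dac -> transport -> dac, in ns."""
    return 2 * cable_length_m / (C * VELOCITY_FACTOR) * 1e9

# For a 1.5 m S/PDIF cable the re-reflection arrives ~15 ns after the edge.
# Against the ~25 ns transition time mentioned earlier, that lands mid-edge,
# while a longer cable would let the transition complete first.
t = round_trip_ns(1.5)
```

This is the arithmetic behind the counterintuitive claim that shorter is not necessarily better: the cable length sets where on the waveform the re-reflection lands.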
Again, no one is claiming that bits are not received by the dac with 100% accuracy. The claim is that the TIMING of the conversion of each sample to analog will randomly fluctuate. The degree of that fluctuation will be small, and will be a function of the many factors I mentioned (and no doubt others as well), but there seems to be wide acceptance across both the objectivist and the subjectivist constituents of the audiophile spectrum that jitter effects can be audibly significant.
If your friend disagrees with that, he should keep in mind two key facts, which he may not realize:
1) The S/PDIF and AES/EBU interfaces we are discussing convey both clock and data together, multiplexed (i.e., combined) into a single signal.
2) The timing of each of the 44,100 conversions that are performed each second by the dac is determined by the clock that is extracted from that interface signal.
I believe he said that it's possible to show that a more expensive digital cable is better than another, but the end product doesn't change.
Disagree.... A digital cable can and will make a difference.
Not all transports sound alike. An example.....
Curious. In Jea48's linked article, the implication was that the level of jitter was related to, or at least different for, different frequency levels of sound (presumably after the DAC). Someone straighten me out on this. It seems to me that the bit stream speed is independent of the bit content. If this is correct, then shouldn't the jitter be either constant or possibly a function of the disc itself (like radial position or burn/pressing quality)?
This is another one of those issues/questions that comes up now and then (like double-blind testing, differences in cables, etc), and gets talked about a lot for a while. The things that always seem true with the threads include: 1) very few people agree; and 2) people make fairly bold statements one way or the other (often without actual personal experience, e.g., having compared cables under *controlled* conditions)
If the question is "have you heard differences in the same system and same room, using transport A vs. transport B?", my answer is "yes... definitely". (If one wants to disagree or argue with what I experienced, that's a dead end I see no point in going down.) If you are asking "why?" or "how big a difference?" or "is it worth it?", well, those are different questions.
p.s. While the question speaks of digital, the OP seems to forget (or not know?) that analog is involved in a CD player, at least one that is not using an external DAC.
Paulsax - Jitter is a function of CD pressing quality, transport quality, digital cable quality, the jitter-suppression scheme, electrical noise, etc. It is a function of the whole system. Even if we assume that the amount of jitter is constant at a given moment, the effects of jitter after D/A conversion are proportional to the magnitude of the analog signal. The second page of the Stereophile article (thank you, Jea48) describes the audible effects of jitter. They describe a loss of detail and a change in the sound of instruments (harsh-sounding violins) that might be the effect of burying lower-level harmonics in noise. The effects they describe are often called "digititis".
Some people believe that as long as the exact digital data gets to the DAC, timing doesn't matter. Try drawing a sinewave on moving paper by marking predefined points (horizontal lines on the paper to make it easier) at exact time intervals and then joining them. If the intervals are not exact, the resulting sinewave won't be smooth; it will be jagged. A horizontal/time error gets converted into a vertical/value error.
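That moving-paper analogy can be turned into a small simulation: each sample value is correct, but placing it at a slightly wrong instant produces a vertical (amplitude) error proportional to the slope of the wave at that point. A toy sketch with Gaussian timing error, not a model of any particular transport:

```python
import math
import random

SAMPLE_RATE = 44_100
FREQ = 1_000   # Hz test tone

def reconstructed_error(jitter_rms_s, n=1000):
    """Each sample carries the correct value for its *intended* instant,
    but it is 'placed on the paper' at a jittered instant; return the worst
    vertical deviation between the drawn point and the true sine there."""
    worst = 0.0
    for i in range(n):
        t_ideal = i / SAMPLE_RATE
        t_actual = t_ideal + random.gauss(0, jitter_rms_s)
        err = abs(math.sin(2 * math.pi * FREQ * t_ideal)
                  - math.sin(2 * math.pi * FREQ * t_actual))
        worst = max(worst, err)
    return worst

random.seed(0)
# More timing error means a more jagged reconstruction:
assert reconstructed_error(1e-9) < reconstructed_error(1e-6)
```

The bits are identical in both cases; only the placement times differ, which is exactly the time-to-value conversion Kijanki describes.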
Bob - Yes, the error correction scheme will take care of most of the problems, but the scheme used (Cross-Interleaved Reed-Solomon) can only correct about 4,000 bits of data (about 0.1"). If you have a tiny scratch along the disk longer than 0.1", correction fails (only for this error). The CDP won't try the same sector again, resulting in a loss of sound quality. On top of that, the transport might track poorly (skip tracks) because of the CD vibrating, poor light reflection, etc.
I like the fact that a CD's surface can be re-polished (that's what our library did to all its CDs). I tried to re-polish an LP once, but for some reason it didn't work.
"In Jea48's linked article the implication was that the level of jitter was related to or at least different for different frequency levels of sound (presumably after the DAC). Someone straighten me out on this. It seems to me that the bit stream speed is independent of the bit content. If this is correct, then shouldn't the jitter be either constant or possibly a function of the disc itself (like radial position or burn/pressing quality)?"
Paul, you raise a good question, and I believe that the key to the answer is that jitter should be thought of as noise in the time domain.
As you will realize, an analog signal will always have some amount of noise riding on it, which causes its amplitude to fluctuate to some degree, in a manner which is to some extent random. That noise will typically consist of a great many frequency components, all mixed together. Essentially a mix of ALL frequencies within some finite bandwidth, with different frequencies having different magnitudes.
Similarly, the random or pseudo-random timing fluctuations that characterize jitter in a digital signal will have a spectrum of a great many jitter frequencies all mixed together. In other words, there may be slow fluctuations in the timing, that are of some magnitude, accompanied by rapid fluctuations in the timing, that are of other magnitudes.
Some frequency components of the jitter spectra can be data dependent, because a major contributor to the electrical noise that is a fundamental cause of jitter is the rapid transitions of transistors and integrated circuits between the 0 and 1 states, and vice versa.
BTW, re the references in your two posts to disk speed, radial position, etc., keep in mind that fluctuations and inaccuracies in the rotational speed of the disk (which figure to be far larger in magnitude than the electronic jitter we have been discussing) are, or at least should be, taken out by subsequent buffering in the transport's electronics.
"It seems to me that the bit stream speed is independent of the bit content. If this is correct, then shouldn't the jitter be either constant or possibly a function of the disc itself (like radial position or burn/pressing quality)?"
That was assumed when CD players were first invented. However, many things can affect the accuracy of the clock signal in the DAC. And even the bitstream is variable: error bursts and misreads may be cyclical, and perhaps only the digital "preamble" is fairly consistent, so the data may vary in certain repeating patterns.
Provided jitter is random, it is in general a negligible problem. However, when patterns occur - such as power supply oscillations due to cyclical laser servo movements tracking the pits on the rotating disc - we can get non-random jitter. Another major cause of non-random jitter may be the phase-locked loop between the master and slave clocks: in this case, the very act of trying to keep the slave clock in time with the master causes oscillatory patterns as the slave hunts back and forth trying to keep in time. These repetitive patterns in clock timing errors cause new oscillatory audio signals to appear in the analog music coming out of the DAC - sometimes called sidebands: non-harmonically related signals. It is these very small (-40 dB) but 'correlated' sounds that become audible - usually as hash or a lack of clarity in the upper midrange and HF (although this may significantly affect the perceived sound of percussive instruments with low frequencies, like piano or drums, due to the way we "hear").
Anyway - jitter is an analog problem - it only appears upon conversion to analog or, up front, when converting analog to digital.
If you have a perfect clock then you will not have jitter.
DACs have evolved to have better clocks. Early designs like Meitner's used patterns in the digital data called the "preamble" to try to achieve a more accurate clock. Others, like Lavry's, used algorithms to maintain a very slow correction pattern on the slave clock that could be filtered out. Since about 2002 the problem has been substantially addressed by "asynchronous DACs": these types of DACs ignore the master clock altogether, so in these designs the jitter is determined entirely by the quality of the clock in the DAC alone, and nothing upstream of it.
One thing to note is that the information in one of the long and detailed articles linked in this discussion is 17 years old. My EE friend pointed out that a 2x CD player was a big deal 17 years ago. I would hope that many of the problems described have been reduced or solved by now. When it comes to electronics, 17 years is a very long time for technology to develop.
I picked up an SACD player from a friend today to borrow for a few days. We'll see how much difference there is once I get copies of a single album in both formats. It'll probably be Norah Jones, since I already have the CD and know it's a quality recording.
"I picked up an SACD player from a friend today to borrow for a few days. We'll see how much difference there is once I get copies of a single album in both formats. It'll probably be Norah Jones, since I already have the CD and know it's a quality recording."
Apples and oranges......
I suggest you compare the two players just using the CDs you have now. You should be able to hear a difference between the two players.
Post back your findings.
Let me ask a related but slightly different question. I read often of a player being colored. There is a thread right now about a Music Hall player being slightly "forward" and "bright". All things being equal, and in a world where jitter is the dominant error (or am I misunderstanding the tone of this thread?), how does a "color" creep in? Jitter seems like a strictly time-domain issue, while the coloration of music would seem like a level-vs-frequency issue. If so, that may imply that the DAC, or perhaps some electronics issue outside of bit reading, is at play. Is that a reasonable assessment?
Sorry about the dumb questions. I'm your boy for mechanical engineering / physics / material science but I'm just about the "bang the rocks together" stage with digital electronics!
Paulsax - IMHO everything plays a role. In addition to the sound of jitter described in the mentioned Stereophile article, there is the digital signal processing (oversampling, non-oversampling, upsampling) and filtering algorithm, the type of DAC (traditional or sigma-delta, voltage or current output, single or dual differential, etc.), the particular DAC chip selected (they sound different) and the chosen update rate, the type of current-to-voltage conversion (transformer, tube, op-amp), the analog electronics (tubes, op-amps, discrete), the type of op-amp or tubes, the quality of components and PCB, and the quality of the power supply. It is an endless list. Even something trivial like the "mute" circuit can be responsible for a loss of sound quality.
In the end, the sound in your system is the only thing that matters.
Paul, I believe the thread you are referring to is dealing with a one-box cd player (a Music Hall CD25.2).
Although it has a digital output and can be used as a transport in conjunction with a separate dac, presumably the discussion pertains to its analog outputs, which are generated by its own internal dac, and processed through its analog circuitry. Given that, as Mapman indicated, tonality and color can certainly be affected by the design and quality of the dac and the analog circuitry in the player.
Jitter is the predominant consideration just in the digital parts of the signal path, up to and including the dac chip. And it becomes a MUCH more critical consideration when the transport and dac are in separate components, because of the impedance matching, reflection, noise, clock recovery, and other interface-related issues that have been discussed above.
Paulsax: Once the digital signal is converted back to an analog signal, there can be any amount of additional amplification, filtering, etc. within the CDP prior to feeding the analog signal to the preamplifier. The effect of this additional signal processing is what is important and would certainly affect the sound; indeed, it is likely to have been designed for just that purpose. It is this post-conversion processing that colors the sound, not the completely insignificant effect of jitter.
"Apples and oranges......
I suggest you compare the two players just using the CDs you have now. You should be able to hear a difference between the two players."
Why would the SACD player make standard CDs sound different? The SACD has a much higher sampling rate, which should be responsible for the vast majority of any difference. I have burned some standard CDs from the hybrid SACDs, and I'll probably be getting an SACD player fairly soon. Everything is more crisp and detailed.