Does a CD ripped to a digital format sound better than the CD played on the CDP?


The title says it all. If I rip my collection onto a server, will it increase SQ? Dumb question, I am sure, but here I am. If the digital system is above average, will it make the sound better?
veroman

Showing 6 responses by kijanki

Veroman, it is hard to compare, since the playback gear is different. On a brand new CD it might not make any difference, but a scratched CD is a different story. Most CDPs, playing in real time, cannot go back and read again when an error occurs. They can correct scratches along the track shorter than about 4mm, have to interpolate for scratches of 4-8mm, and will fail for >8mm. A computer, while ripping the disc, can read the same sector hundreds of times, giving it a much better chance of recovering the data (renewing the CD). Proper checksums for whole CDs are even published. I instructed the program I use (XLD) to read each sector up to 200 times, if necessary. Once it is ripped it becomes data that can be copied, stored, backed up, etc. In the ideal case this data would be delivered to the DAC as data (without timing). That's the case with async USB, Ethernet, and Wi-Fi.
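To make the re-read idea concrete, here is a minimal Python sketch of the logic a secure ripper applies per sector. It is not XLD's actual algorithm, and `read_sector` stands in for a hypothetical low-level drive call:

```python
import hashlib

def rip_sector(read_sector, sector_id, max_attempts=200):
    """Re-read one CD sector until two consecutive reads agree.
    `read_sector` is a hypothetical driver call returning raw bytes."""
    previous = None
    for attempt in range(max_attempts):
        data = read_sector(sector_id)            # one physical read
        if previous is not None and data == previous:
            return data                          # two matching reads: trust it
        previous = data
    raise IOError(f"sector {sector_id}: no stable read in {max_attempts} tries")

def track_checksum(track_bytes):
    """Checksum of the whole track, comparable against a published
    database entry (the idea behind AccurateRip-style verification)."""
    return hashlib.md5(track_bytes).hexdigest()
```

The published whole-CD checksums mentioned above then let you confirm your rip against other people's reads of the same pressing.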
Then how is it that some "digital" cables sound better than others? Or do you deny that?
Let me answer that. There might be three reasons.
1. The digital cable injects electrical noise from the source (computer), or noise is induced by ambient electrical fields (some cables pick up more, some less).
2. The cable doesn't transfer digital data, but music. The difference is timing. Computer data has no timing attached (it will always be the same), but with some interfaces the timing of the digital music stream affects the D/A converter clock. When this clock is uneven in time (jitter), it produces artifacts (frequencies) not present in the original signal.
3. Transfer of any high-frequency signal (digital or analog) requires matching the characteristic impedance of the cable (which, in simplification, is Z0 = sqrt(L/C)). When this impedance is not matched (between source, cable, and DAC), reflections from the point of impedance change will appear, changing the timing of the signal or even completely flipping zeroes and ones.
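As a quick sanity check of that formula, here is a two-line computation using assumed, textbook-style per-metre values for 75-ohm coax (the exact L and C figures are illustrative):

```python
import math

# Assumed per-metre values typical of 75-ohm coax (illustrative figures)
L = 0.30e-6   # inductance, henries per metre (~0.30 uH/m)
C = 53e-12    # capacitance, farads per metre (~53 pF/m)

Z0 = math.sqrt(L / C)        # simplified lossless characteristic impedance
print(f"Z0 = {Z0:.0f} ohm")  # prints: Z0 = 75 ohm
```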
Jitter can be added in two ways: either during transmission of the digital signal, where it can influence the D/A clock, or by induced electrical noise, which produces time jitter (the point in time at which the signal crosses the threshold changes when noise is added to it). In the case of Ethernet, USB, etc., your DAC gets a bit-perfect signal and your DAC clock is independent, but the cable injects induced electrical noise, which in effect produces jitter. The Ethernet specification, for instance, calls for isolation of the data lines, but since this is typically done with transformers, the highest-frequency electrical noise will still pass through the transformer's capacitance. I'm sure there are separating devices for that.
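The noise-to-jitter conversion in that first mechanism can be estimated with one division: the timing error is roughly the noise voltage divided by the slew rate at the threshold crossing. A sketch with assumed, illustrative numbers:

```python
# Noise shifts the moment a transition crosses the logic threshold:
# delta_t ~ v_noise / slew_rate at the crossing point.
v_noise_rms = 0.010       # assumed 10 mV RMS of induced noise
slew_rate = 0.5 / 25e-9   # assumed 0.5 V swing over a 25 ns transition (V/s)

jitter_rms = v_noise_rms / slew_rate
print(f"~{jitter_rms * 1e12:.0f} ps RMS of noise-induced jitter")  # ~500 ps
```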
Thank you Sbank. Let me give a small example of why bits are not just bits.
When we attempt to do something very simple, like sending a digital signal representing 1kHz, we might actually receive three different signals. That is because, in order to produce only 1kHz, the D/A converter would have to receive digital words at precisely exact intervals. Any variation creates additional signals: sidebands. Imagine that the digital transmission of this 1kHz signal jitters in time because of 60Hz noise. That would create additional 940Hz and 1060Hz signals, most likely very small but audible, being not harmonically related to 1kHz. The amplitude of these sidebands is proportional to the time variation from the ideal moment of delivery, while their offset from 1kHz is set by how often this variation happens (in this case, 60 times a second). Since music is not just a simple 1kHz signal but a lot of them, time jitter of the digital signal creates a lot of additional frequencies: a noise, proportional in amplitude to the amplitude of the original signal (and undetectable without it).
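This sideband mechanism is easy to demonstrate numerically. A minimal NumPy sketch that samples a 1kHz tone at instants wobbled 60 times a second (the 5ns jitter amplitude is an assumed figure) and then inspects the spectrum around 1kHz:

```python
import numpy as np

fs, f0, fj = 48000, 1000.0, 60.0       # sample rate, tone, jitter rate (Hz)
t_ideal = np.arange(fs) / fs           # one second of ideal sample instants
# Each conversion instant lands slightly early or late, wobbling 60 times
# per second; the 5 ns amplitude is an assumed, illustrative figure.
t_jittered = t_ideal + 5e-9 * np.sin(2 * np.pi * fj * t_ideal)
x = np.sin(2 * np.pi * f0 * t_jittered)

mag = np.abs(np.fft.rfft(x)) / len(x)  # amplitude spectrum, 1 Hz per bin
freqs = np.fft.rfftfreq(len(x), 1 / fs)
for f in (940, 1000, 1060):            # sidebands show up at f0 +/- fj
    i = int(np.argmin(np.abs(freqs - f)))
    print(f"{f} Hz: {20 * np.log10(mag[i] + 1e-20):6.1f} dB")
```

Scaling the 5ns up or down moves the sideband level in direct proportion, which is exactly the amplitude relationship described above.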
Thanks Steve. Of course "received" is a big simplification. The moment of D/A conversion does not have to be the same as the moment of data arrival, but often the conversion clock is based on the incoming data rate, to avoid getting out of sync (losing data). The question is what to do to avoid it. One option is a system that buffers data and uses a different, independent clock for D/A conversion, with signaling back to the source to make sure the buffer always has enough data and doesn't overflow; that's how async USB works. Another option is a device that reclocks the serial S/PDIF signal. You can use a reclocking DAC (like my Benchmark DAC3) or a reclocking device before the DAC (Audioengr makes one). A separate reclocking device has the advantage of a wider choice of DACs. I like the sound of the Benchmark with my gear, but changing it to a non-reclocking DAC might require a reclocking device, or a completely different method of delivery, like USB, which brings its own problems (injecting computer noise).
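For the buffering option, a toy model may help. This is not any real USB driver's API, just the feedback idea: the DAC drains a FIFO on its own fixed clock, and the fill level tells the source to speed up or slow down:

```python
from collections import deque

class AsyncBuffer:
    """Toy model of the async idea: samples arrive on the source's
    schedule, the DAC drains them on its own fixed clock, and the
    fill level is reported back so the source can adjust its pace.
    Names and thresholds are illustrative, not a real driver's API."""

    def __init__(self, low=2000, high=6000):
        self.fifo = deque()
        self.low, self.high = low, high   # feedback thresholds (samples)

    def push(self, samples):              # called at the source's rate
        self.fifo.extend(samples)

    def pull(self):                       # called once per DAC clock tick
        return self.fifo.popleft() if self.fifo else 0  # underrun: silence

    def feedback(self):                   # tells the source how to pace itself
        n = len(self.fifo)
        if n < self.low:
            return "send more"
        if n > self.high:
            return "send fewer"
        return "ok"
```

The key point of the design is that the DAC clock never follows the incoming data rate; only the buffer fill level is communicated back.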
Spatialking, reflections in a digital cable, from an impedance boundary, are related to the highest rate of change, not the frequency of the signal. The rule of thumb says that if signal propagation takes longer than 1/8 of the fastest transition time, then the cable might have reflections. For example, transitions in a typical transport are about 25ns. Divided by 8, that makes around 3ns. Since the signal moves in a cable at about 5ns/m, 3ns corresponds to about 0.6m. Any cable longer than that (including all internal connections of the source and receiver) will behave like a transmission line (might have reflections). I would feel comfortable with non-impedance-matched cables much shorter than 12" (I would try to get it under 8"). Otherwise you have to match the characteristic impedance of everything on the way, and that is not always possible, as in the case of the RCA connector you mentioned. Since reflections can happen at any impedance boundary, it is important to match, as closely as possible, the impedance of everything on the way. That's why a good digital cable can be great in one system but poor in another.
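The rule of thumb reduces to a one-line calculation (the 5ns/m propagation speed is the figure assumed above):

```python
def critical_length_m(transition_ns, prop_ns_per_m=5.0):
    """Length beyond which the path behaves like a transmission line:
    the distance travelled in 1/8 of the fastest transition time."""
    return (transition_ns / 8.0) / prop_ns_per_m

print(critical_length_m(25))   # typical transport, 25 ns edges -> ~0.6 m
```

Faster edges shrink this length, which is why the edge rate, not the bit rate, sets the requirement.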