Can the digital "signal" be over-laundered, unlike money?


Pretty much what is implied by the title. 

Credit to @sns, who got me thinking about this. I've chosen a path of restraint; others have chosen differently.

I'm curious: what are members' thoughts and experiences on this?

Though this comes from a 'clocking' thread, I am by no means restricting the topic to clocking alone.

Please consider my question from the perspective of all "cleaning" devices used in the digital chain, active and passive.

 

From member 'sns' and the Ethernet Clocking thread [for more context]:

 

"I recently experienced an issue of what I perceive as overclocking with addition of audiophile switch with OXCO clock.  Adding switch in front of server, NAS resulted in overly precise sound staging and images."

"My take is there can be an excessive amount of clocking within particular streaming setups.

...One can go too far, based on my experience."

 

Acknowledgement and Request:

- For the 'bits are bits' camp, the answer is obvious and a given, and I accept that.

- The OP is directed to those who have utilized devices in the signal path for "cleaning" purposes.

Note: I am using 'cleaning' as a broad and general catch-all term...it goes by many different names and takes many different approaches.

 

Thank You! - David.

david_ten

Showing 4 responses by itsjustme

To better understand the analog nature of a digital audio signal, and therefore what might impact it, read my (somewhat old) blog over at sonogy research .com - I point you there so I don't need to duplicate typing and diagrams. Bottom line: it's not entirely digital. Second bottom line: the fixes are not magic, and excellent DAC interfaces ought to mitigate the need for most of them. And yet, so far, they don't - not 100%.

 

Ahhh, the vagaries of audio. But for those of us with a scientific bent, they can lead us to ask questions and learn. When it sounds different... something must be up.

 

Quick answer: if by "laundering" you basically mean clock and isolate - I would think not. Now, if you mess with it so badly that you create bit errors, all that goes out the window. But this is not about bit errors - those are very rare, and in any real-time stream they are uncorrectable (to the PC crowd thinking this is easy and already solved - nope, not in real-time protocols, including the internet's own RTP!).

Cost <$100 to "isolate" USB. Most tolerable DACs already isolate SPDIF (something that most audiophiles are clueless about -- going on about isolation, TOSLINK, etc.). So there goes your isolation argument. Isolate USB and with SPDIF on most tolerable DACs, no RF, there goes that argument. Timing? No timing in USB, not timing in Ethernet (also isolated and fairly immune to RF too). $15 DAC chips can remove nanoseconds of jitter, so that there goes that argument.

 

I don't want to misinterpret you or misquote you, so could you please write this more clearly before I reply? For example, are you saying that timing/jitter does not matter on the USB interface? If so, you are confusing a purely data signal with the quasi-analog signal that is fed to a DAC. So, please clarify the whole thing. Thanks.

 

G

Two problems with that argument. Bear in mind I design these things, in both T&M and audio. The first problem is that not all DACs do a perfect re-clocking; many, so as not to over- or under-run their buffers, begin with the input timing and then reduce timing variations. The second is that, as with many things in audio, listening tests tell me some issues remain even after competent isolation and re-clocking.
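
A minimal sketch (purely illustrative, not any vendor's actual design) of the trade-off just described: a DAC that derives its output clock from the input timing cannot simply ignore the incoming rate, or its buffer will eventually over- or under-run. So it effectively low-pass filters the measured input period - short-term jitter is attenuated while long-term drift is still tracked.

```python
# Illustrative only: a first-order low-pass on the incoming sample period.
# The loop bandwidth (alpha) is made up for this sketch.

incoming_periods = [1.00, 1.02, 0.99, 1.03, 0.98, 1.00, 1.01]  # jittered periods, arbitrary units

alpha = 0.1                      # smaller = more jitter rejection, slower drift tracking
smoothed = incoming_periods[0]   # start from the input timing, as described above
for p in incoming_periods:
    smoothed += alpha * (p - smoothed)   # follow the average rate, not every wiggle
    print(f"input period {p:.3f} -> re-clocked period {smoothed:.3f}")
```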

 

Now, I have posted several times that when I have built my own USB interface (isolation, power, FIFO, etc.) and applied it to legacy DACs, the dependence on a good input signal is far less. But it's not zero. You can deduce what you wish - I don't have all the answers, but at least I listen, and then ask questions.
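
A hypothetical sketch of the FIFO idea behind such an asynchronous interface (the names and structure here are mine, not a description of any actual hardware): samples arrive with whatever timing the source has and are queued; a local fixed clock drains the queue and feeds the DAC, so output timing depends on the local oscillator rather than the source.

```python
from collections import deque

# Illustrative only: source timing and DAC timing are decoupled by a queue.
fifo = deque()

def on_usb_packet(samples):
    """Called whenever the (jittery) source delivers a packet."""
    fifo.extend(samples)

def on_local_clock_tick():
    """Called by the interface's own oscillator; feeds the DAC one sample."""
    return fifo.popleft() if fifo else 0.0   # underrun -> silence; real designs manage buffer depth

on_usb_packet([0.1, 0.2, 0.3])
print([on_local_clock_tick() for _ in range(4)])   # the 4th tick shows an underrun fill
```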

 

Do I think Ethernet switches make a difference? No, I don't. Ethernet is isolated anyway, and the data are queued anyhow. (I suppose queue handling might matter; I assume that is handled by the router, maybe not.) But clearly adding a bridge between server and endpoint helps, a great USB interface helps, and providing a good input signal helps.

 

I do agree that most of the benefit comes from competent design of the USB interface. I also know that most designs are "data sheet engineered" and not ideal.

 

But be careful of blanket statements. Not only can they mislead given real-world equipment, but they turn off people who have heard "it's perfect" too many times before, only to find out otherwise (and then watch the industry fix the issues - you know, those silly guys at AD and Burr-Brown).

 

Thanks for clarifying. I assumed we were not 100% in agreement, though probably more yes than no.

 

G