Can the digital "signal" be over-laundered, unlike money?


Pretty much what is implied by the title. 

Credit to @sns who got me thinking about this. I've chosen a path of refrain. Others have chosen differently.

I'm curious about members' thoughts and experiences on this.

Though this comes from a 'clocking' thread, by no means am I restricting the topic to clocking alone.

Please consider my question from the perspective of all ["cleaning"] devices used in the digital chain, active and passive.

 

From member 'sns' and the Ethernet Clocking thread [for more context]:

 

"I recently experienced an issue of what I perceive as overclocking with addition of audiophile switch with OCXO clock. Adding switch in front of server, NAS resulted in overly precise sound staging and images."

"My take is there can be an excessive amount of clocking within particular streaming setups.

...One can go too far, based on my experience."

 

Acknowledgement and Request:

- For the bits-are-bits camp, the answer is obvious and a given, and I accept that.

- The OP is directed at those who have utilized devices in the signal path for "cleaning" purposes.

Note: I am using 'cleaning' as a broad and general catch-all term...it goes by many different names and approaches.

 

Thank You! - David.

david_ten

I have a friend who also had a negative experience with an OCXO clocked network switch in front of the music server.  The switch is just a data packet transfer mechanism, but I actually think that it's injecting a certain type of "character" with the digital pulses it sends down the line to the server.

Having an OCXO clock in the DAC or server generally does not have this type of result. 

This is all an assumption, though.

If additional clocking results in a sterile sound, two possible reasons apply:

1. a mismatch, when the device requires a sine wave and the clock provides a square wave, or vice versa

2. a cable mismatch or quality issue: e.g. a 50 Ohm cable on a 75 Ohm port, or a long cable suffering from reflections or RFI/EMI interference due to ineffectual shielding.
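The size of the reflection from that kind of impedance mismatch is easy to estimate: the voltage reflection coefficient at a discontinuity is Γ = (Z_load − Z0)/(Z_load + Z0). A quick sketch with illustrative numbers only (the 50/75 Ohm case mentioned above):

```python
import math

def reflection_coefficient(z_load, z_source):
    """Voltage reflection coefficient at an impedance discontinuity."""
    return (z_load - z_source) / (z_load + z_source)

def return_loss_db(gamma):
    """Return loss in dB (larger means a smaller reflection)."""
    return -20 * math.log10(abs(gamma))

# A 50 Ohm cable driven into a 75 Ohm clock input:
gamma = reflection_coefficient(75, 50)
print(f"reflection coefficient: {gamma:.2f}")          # 0.20
print(f"return loss: {return_loss_db(gamma):.1f} dB")  # ~14.0 dB
```

So roughly 20% of the clock edge's voltage bounces back down the cable, which is why matched 75 Ohm (or 50 Ohm) cable and connectors matter for clock links.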

Superior clocking, in my experience, always results in superior rendition of the soundstage as well as instruments' attack and reverb.

With anything digital, messing with the original signal is bound to result in a change.

Whether it is 'good or bad' will be a matter of personal opinion.

FWIW, I do believe digital will get very close to analog in the near future.

B

Thanks @david_ten for posting this question. I presume network clocks solely affect sound stage, imaging, and perhaps resolution. Some presume my issues with the added clock in the audiophile switch are due to the inferior quality of said switch. So, if the clocking in this switch is doing its job, I should have more precise sound staging, imaging, and more resolution. My listening experience with the switch confirmed my presumptions of what added clocking would do: more precise sound stage, imaging, and a bit more resolution; in my case, sound stage and imaging overly precise. It seems intuitive to me that a better/more expensive switch and/or clock would only increase that precision, and this I don't want.

 

If this is not the case, please explain how a higher priced, supposedly higher quality clock/switch would improve over my switch/clock. Are there flavors of switches/clocks? Do network appliances affect things like timbre, tonality, micro and macro dynamics? I've not heard any of these kinds of changes with any of my network improvements, solely sound staging, imaging and resolution changes.

 

I've heard of the sine wave vs. square wave issue; I don't know if it applies in this case. And the attack and decay issue is an interesting concept, allied to micro/macro dynamics. The defects in sound staging I'm hearing could be interpreted as a micro dynamic issue; the overly precise imaging/sharp outlines mimic micro dynamic decay, but in my case it is solely sound staging related, with no perceptible change in dynamics.

 

I'm certainly not alone in hearing defects with audiophile switches. I'm just not sure whether the issue is the added clocking or something else.

 

It's also possible router mods have diminished my need for a switch/added clock. I'm powering with an over-spec'd LPS (amperage supply greater than need) and added RFI shielding. The entire network and USB chain after the server is already optimized.

 

And my digital surpasses my tt setup by quite a large margin, and only sounds increasingly analog as resolution increases.

 

“Can the digital "signal" be over-laundered, unlike money?”

@david_ten,

Yes!!! And this is not limited to audio :-)

As with anything else in life, striking a 'balance' is the key. IME, careful selection of fewer high quality components and 'sensible' tweaking will always yield superior sound vs. a plethora of sub-par components and 'band-aid' tweaks.

I subscribe to the 'everything matters' philosophy, so the efficacy of any single component or tweak squarely depends on the rest of your system. A high quality Ethernet switch or external re-clocker device is not going to magically transform your digital streaming if, as an example, there is a laptop, Node 2 or Mac mini type of component ahead in the chain.

@lalitk I don't know if you're referring to me when you keep on mentioning a mac mini server. Yes, I use a server in mac mini clothing, but it's bespoke in execution: internal power supply out, Uptone MMK DC internal pc, Uptone JS-2 LPS, upgraded RAM and SSD, nearly all services disabled, including going into the command line to disable even more services, no wifi, wifi antenna removed, extra RFI shielding, modified for two ethernet cables. This is not a mac mini; it has no capability as a general service computer. This is a Frankenstein or DIY server, not some low resolution junk shop solution. I'm not saving any money with this build; add labor and research, and this route is not for most. I'm not lacking resolution with my digital setup; any and all defects will be illuminated, and this not at the expense of timbre, tonality, or micro/macro dynamics. Very close to my vinyl setup in sense of ease, with far higher resolving power from the digital.

To better understand the analog nature of a digital audio signal, and therefore what might impact it, read my (somewhat old) blog over at sonogy research .com. I point you there so I don't need to duplicate typing and diagrams. Bottom line: it's not entirely digital. Second bottom line: the fixes are not magic, and excellent DAC interfaces ought to mitigate the need for most of them. And yet, so far, they don't, not 100%.

 

Ahhh, the vagaries of audio. But for those of us with a scientific bent, it can lead us to ask questions and learn. When it sounds different... something must be up.

 

Quick answer: if by "laundering" you basically mean clock and isolate, I would think not. Now, if you mess with it so badly that you create bit errors, all that goes out the window. But it's not about bit errors; those are very rare, but in any real-time stream, uncorrectable (to the PC crowd thinking this is easy and already solved: nope, not in real-time protocols, including the internet's own RTP!).

@sns,

Nope, I wasn't referring to you or your mac-mini setup. I've been approached by a few with similar setups seeking advice on how to improve the sound. So my comment in the previous post, 'if as an example', was in that context.

I had this experience with a Silent Angel network OCXO switch with LPS: more detail and resolution, but to the point where it was sometimes too much (recording dependent).

I found that the issue was down to weaknesses in the rest of the audio chain which were being revealed by the better signal.

my solutions were:

1. bought new tubes for the pre-amp

2. reduced the gain into the pre-amp (via Roon) by about 6 dB.

3. sold WWP 6's and upgraded to Sasha 2's

The detail and high resolution remain but now it's all good man! 

 

One could argue that small boxes to "filter" Ethernet signals, and especially reclocking, may be less important if the server has a high quality input such as JCAT's OCXO clocked NET Card XE network card.  I could still imagine a benefit from isolation to remove EMI/RFI, which is why I use fiber from my router to my server (with converters and short Ethernet cables at each end).  The GigaFOILv4-INLINE Ethernet Filter can also be used to provide optic isolation.

@sns there are a lot of people who hate the concept of using practical technology (a computer) as a source to do the same thing as the dedicated tech ("streamer") they spent thousands on. Audiophile has become synonymous with "old people overpaying for dressed up old tech" in a lot of other forums. 

My experience is that a really good Ethernet switch with attention paid to the PSU and the clock improves the natural timbre of voices and instruments, along with a general increase in clarity. Adding a top class Ethernet Filter to remove RFI and stop it getting into the ground plane of the downstream electronics lowers the noise floor of the music, giving more fine detail and dynamics and a larger soundstage with more air and space around instruments. So the two working  together is the golden ticket.

I doubt the devices mentioned here would over-launder. Consider how many times this signal has been through switches and routers, and changed from copper to optical and back, before it reaches your house. You can do anything you want with these contraptions; as long as it measures below a certain dB at the analog output of the DAC, you're good.

+2, @richtruss 

In addition to high quality Ethernet switch with LPS and Filter (passive or active), a streamer or server designed specifically for audio is just as important. 

Jump to the end. Does it sound good? What sound are you looking for? I am in the camp of not liking to change the sound the artist originally wanted. This is like remastered albums: they are changing what the original is.

@sgreg1 

Does it sound good? - Yes

What sound are you looking for? - The one the artist and mastering engineer originally created in the recording studio. 

Since you jump to the end, here is what you’ve missed….

“ A really good Ethernet switch with attention paid to the PSU and the clock improves the natural timbre of voices and instruments, along with a general increase in clarity. Adding a top class Ethernet Filter to remove RFI and stop it getting into the ground plane of the downstream electronics lowers the noise floor of the music, giving more fine detail and dynamics and a larger soundstage with more air and space around instruments.”

It's not about changing the sound of what was originally recorded, but rather a pursuit to hear the music as close to the original in our homes. 

 

Thanks for the thoughtful responses. I've been reflecting on aspects of each post and will follow up with questions, etc.

I believe I mentioned this in a prior post on another thread. I think we can all agree optimal network performance requires galvanic isolation, proper timing, maximum jitter reduction, and shielding from EMI/RFI. With so many choices of equipment to address these issues, it is highly likely every streaming solution is unique. What works for one situation may not work for another; this is especially true at the margins, when one has an optimal or near optimal setup already. One may upset the delicate balance one has achieved by adding another network appliance.

 

One can speculate or presume my issues with the switch were due to inferior clocking, poor implementation, or inferior parts. Perhaps a higher quality switch would further optimize my network, perhaps not; only insertion of such a switch would provide empirical evidence.

 

At this point I question: how does one know when a network is optimized? If one's system is providing high resolution, natural timbre, balanced tonality, frequency extension at both ends, wonderful micro and macro dynamics, and precise and natural sound stage and imaging, is that not proof of an optimized network? Is there a point where we can say enough is enough? The conundrum is that this is one of those known unknowns, the reason so many are never satisfied. We can't know if our present networks are optimized until we've tried any number of other network configurations.

 

While I try never to say never, I'm at the point where I'm satisfied with my present network; there are other, bigger fish to fry. My take is that until we have an all-fiber solution, I'm done.


It costs <$100 to "isolate" USB. Most tolerable DACs already isolate SPDIF (something most audiophiles are clueless about, going on about isolation, TOSLINK, etc.). So there goes your isolation argument. Isolate USB, and with SPDIF on most tolerable DACs there is no RF; there goes that argument. Timing? There is no timing in USB, and no timing in Ethernet (also isolated and fairly immune to RF). $15 DAC chips can remove nanoseconds of jitter, so there goes that argument.

 

I don't want to misinterpret you or misquote you, so could you please write this more clearly before I reply? For example, are you saying that timing/jitter does not matter on the USB interface? If so, you are confusing a purely data signal with the quasi-analog signal that is fed to a DAC. So, please clarify the whole thing. Thanks.

 

G

"I don't want to misinterpret you or misquote you, so could you please write this more clearly before I reply? For example, are you saying that timing/jitter does not matter on the USB interface? If so, you are confusing a purely data signal with the quasi-analog signal that is fed to a DAC. So, please clarify the whole thing. Thanks."

Timing/jitter on the USB interface does not matter. It does not have any impact on the DAC analog output, which uses a completely separate clock. That clock does not need to be very expensive for very good audio performance. Sure, lots of expensive equipment makers say it does, but they can't ever support that claim. Chip-based DACs, new ones at least, not 30 year old ones, are more immune to clock jitter as well.

Saying "quasi-analog" is marketing speak. It has no meaning. Clock jitter has meaning, namely clock jitter at the input to the DAC.
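For a sense of the magnitudes involved here, a back-of-envelope sketch using the standard textbook approximation for jitter-limited SNR of a full-scale sine wave, SNR ≈ −20·log10(2π·f·t_j) (illustrative numbers only):

```python
import math

def jitter_limited_snr_db(signal_hz, jitter_rms_s):
    """Best-case SNR for a full-scale sine sampled by a clock with the
    given rms jitter (textbook aperture-jitter approximation)."""
    return -20 * math.log10(2 * math.pi * signal_hz * jitter_rms_s)

# A 20 kHz tone with 1 ns rms clock jitter -> roughly 78 dB SNR
print(round(jitter_limited_snr_db(20_000, 1e-9), 1))
# The same tone with 100 ps rms jitter -> roughly 98 dB SNR
print(round(jitter_limited_snr_db(20_000, 100e-12), 1))
```

The takeaway is that what matters is the jitter of the clock actually driving the conversion; jitter is far more punishing at high signal frequencies, and sub-nanosecond conversion clocks are what keep it below audibility thresholds.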

Electrical noise on the USB I/F due to ground loops is an issue, hence I addressed isolation.

Please don't come back with pseudo-technical marketing fluff. That is not going to cut it except with people who have also drunk the Koolaid.

Competently designed DACs with asynchronous USB will buffer the input and use their own clock, so no matter how many of these devices precede the DAC, they won't affect the analog out.
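The buffering idea described above can be sketched in a few lines. This is a toy model, not any manufacturer's implementation: the host writes samples in irregular bursts, the DAC side reads them out one per tick of its own local clock, and the sample values come out identical regardless of the input timing.

```python
from collections import deque

class AsyncUsbFifo:
    """Toy model of an asynchronous-USB style FIFO buffer: host-side
    timing is decoupled from the DAC-side clock; the data is untouched."""

    def __init__(self, depth=4096):
        self.buf = deque(maxlen=depth)

    def host_write(self, burst):
        # Host side: the timing of these calls may be jittery/bursty.
        self.buf.extend(burst)

    def dac_read(self):
        # DAC side: called once per tick of the DAC's own local clock.
        return self.buf.popleft() if self.buf else 0  # underrun -> silence

# Irregular bursts in...
fifo = AsyncUsbFifo()
fifo.host_write([1, 2, 3])
fifo.host_write([4])
fifo.host_write([5, 6])

# ...identical samples out, paced entirely by the DAC's clock.
out = [fifo.dac_read() for _ in range(6)]
print(out)  # [1, 2, 3, 4, 5, 6]
```

The one real-world constraint this sketch glosses over is rate matching: in asynchronous USB the DAC periodically tells the host to speed up or slow down its bursts so the buffer never over- or under-runs over long playback.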

@djones51,

Clueless but convinced to be right! Your definition of competently designed unfortunately seems to exist nowhere in the real world.

Really? No company makes DACs that can properly deal with USB input? I think the one clueless about digital is you.

Two problems with that argument. Bear in mind I design these things, both in T&M and audio. The first problem is that not all DACs do a perfect re-clocking; many, so as not to over- or under-run buffers, begin with the input timing and then reduce timing variations. The second is that, like many things in audio, listening tests tell me some issues remain after competent isolation and re-clocking.

 

Now, I have posted several times that when I have built my own USB interfaces (isolation, power, FIFO, etc.) and applied them to legacy DACs, the dependence on a good input signal is far less. But it's not zero. You can deduce what you wish; I don't have all the answers, but at least I listen, and then ask questions.

 

Do I think the Ethernet switches make a difference? No, I don't. Ethernet is isolated anyway, and packets are queued anyhow. (I suppose queue handling might matter; I assume that is managed by the router, maybe not.) But clearly adding a bridge between server and endpoint helps, a great USB interface helps, and providing a good input signal helps.

 

I do agree that most of the benefit comes from competent design of the USB interface. I also know that most designs are "data sheet engineered" and not ideal.

 

But be careful of blanket statements. Not only can they mislead given real world equipment, but they turn off people who have heard "it's perfect" too many times before, only to find out otherwise (and then watched the industry fix issues; you know, those silly guys at AD and Burr-Brown).

 

Thanks for clarifying. I assumed we were not 100% in agreement, though probably more yes than no.

 

G

 

@itsjustme, virtually all USB DACs made now are async, and they are effectively as queued as Ethernet. Bit errors on USB are close enough to zero to be zero.

There is no "reclocking" in a USB DAC; there is only clocking. Not all are perfect, but if they were not close to perfect, then THD would escalate, and even cheap DACs have excellent THD, so that argument has little merit unless done poorly, which seems to afflict boutique brands more than others (based on Stereophile tests).

What denotes a "properly engineered" USB I/F? The one issue that regularly comes up, and few deny, is system level noise, mainly from the source. Easily solved: isolate the USB.

You ignored everything i said. I'm not being baited.

 

If baiting is asking you to justify your position, to which I pointed out the inconsistencies and errors, then I guess I baited you. If I were baiting you, though, I would just say I don't think you can justify many of your statements, based on how these products work and your inability to define "properly engineered". Though, as has been often said and does not seem controversial, non-isolated USB can be a source of power/ground-loop noise.