Does Digital Try Too Hard?


Digital glare. A plague of digital sound playback systems. It seems the best compliment a CD player or digital source can get is to sound “analog-like.” I’ve gone to great lengths to battle this in my CD-based 2-channel system, but it’s never-ending. My father, upon hearing my system for the first time (and at loud volumes), said this: “The treble isn’t offensive to my ears.” What a great compliment.

So what does digital do wrong? The tech specs tell us it’s far superior to vinyl or reel-to-reel. Does it try too hard? Where digital is trying to capture the micro details of complex passages, analog just “rounds it off” and says “good enough,” and it sounds good enough. Or does digital have some other issue in the chain: noise in the DAC chip, high-frequency harmonics, or issues with the anti-aliasing filter? Does it have to do with the power supply?

There are studies that show people prefer the sound of vinyl, even if only by a small margin. That doesn’t quite add up when we consider digital’s dominant technical specifications. On paper, digital should win.

So what’s really going on here? Why doesn’t digital knock the socks off vinyl, and why does there appear to be some issue with “digital glare” in digital systems?
@cleeds 

Thank you for your friendly and helpful response. I'll take a look and we can discuss.
“I’ve heard things you people wouldn’t believe. Attack notes on fire off the shoulder of Mozart. I’ve heard C-notes glitter in the dark with the PPT Gate. All that music will be lost in time, like tears in rain. Time to die.”
I don't always appreciate MC's posts or the tenor thereof (and we'll leave the question of a PPT-related quid pro quo unasked), but for that bit of genius quoted above, all can be forgiven. Well done.

Sadly, as you probably know, Rutger has passed.
@cleeds 

First, I learned something useful from the presentation you referenced: that step functions are no longer used in DACs. Thanks for that. However, I still have concerns, which may arise from misunderstanding; I submit them for comment and correction.

It follows that some form of interpolation is being used to convert the discrete sample values, taken at discrete time intervals, into an analogue signal. The alternative, a smooth and perfect fit to the data, appears from the presentation to require the SW to know which frequency it is dealing with (although it might be able to guess, for example by running an FFT on previous segments and using that to inform the interpolation). This is important because the talk continues to use this result (a perfect, smooth waveform) as proven, which I do not grant; it also appears to assume mathematically perfect observation (else how could there be a unique waveform which fits the data?).

There seem to me to be only two alternatives: (1) stick with a safe linear interpolation, or (2) guess. But with a guess, sometimes the SW is going to guess wrong (perhaps on transients?), and then the output is going to be far more distorted than a simple linear interpolation would suggest.
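
For concreteness, here is a toy sketch (Python with NumPy; nothing here is taken from the presentation, and the tone and window lengths are just illustrative) comparing alternative (1), connect-the-dots, with the Whittaker-Shannon (sinc) sum, which is the textbook band-limited reconstruction that oversampling filters approximate:

```python
import numpy as np

fs = 44100                      # CD sample rate, Hz
f = 10000                       # test tone, well below Nyquist (22050 Hz)
n = np.arange(512)
samples = np.sin(2 * np.pi * f * n / fs)    # the stored sample values

# Evaluate both reconstructions on a fine grid, staying away from the window
# edges so truncation of the (ideally infinite) sinc sum doesn't dominate.
t = np.linspace(128 / fs, 384 / fs, 4096)
linear = np.interp(t, n / fs, samples)                                # (1) connect the dots
sinc = np.array([np.dot(samples, np.sinc(ti * fs - n)) for ti in t])  # band-limited fit
truth = np.sin(2 * np.pi * f * t)

print("max error, linear interpolation:", np.max(np.abs(linear - truth)))
print("max error, sinc reconstruction :", np.max(np.abs(sinc - truth)))
```

On a 10 kHz tone the dot-to-dot error is a substantial fraction of full scale, while the band-limited fit tracks the original closely, limited mainly by truncating the infinite sum to a finite window.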

Therein lies information loss. What is known comprises the samples and the intervals; the rest is processing. I hypothesize that the success of one processing algorithm over another represents digital's progress. Is this correct, Cleeds?

Oddly enough, I was just reviewing uniqueness theorems concerning representations of ordered semi-groups, which, assuming perfect information, is pretty much what we are dealing with here. A few points occur to me: (1) samples are taken in finite time, and are therefore averages of some kind; (2) samples are taken at intervals of finite precision, therefore there is temporal smearing; (3) samples are taken with finite precision, hence further uncertainty is built into each (averaged) sample.
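
To put rough numbers on points (2) and (3), here is an illustrative sketch (Python with NumPy; the 1 ns RMS jitter and the 16-bit rounding are assumed figures, not measurements of any particular converter, and point (1)'s averaging aperture is left out):

```python
import numpy as np

rng = np.random.default_rng(0)
fs, f, bits = 44100, 1000, 16
n = np.arange(44100)                          # one second of samples
ideal_t = n / fs

# (2) temporal smearing: sample instants jittered by ~1 ns RMS (assumed figure)
jitter = rng.normal(0.0, 1e-9, size=n.size)
x = np.sin(2 * np.pi * f * (ideal_t + jitter))

# (3) finite amplitude precision: round each sample to one of 2**16 levels
lsb = 2 / 2**bits                             # step size on a +/-1 full-scale range
x_q = np.round(x / lsb) * lsb

# (1) the finite averaging aperture is omitted here for simplicity
err = x_q - np.sin(2 * np.pi * f * ideal_t)
print("one LSB:                   ", lsb)
print("RMS error vs ideal samples:", np.sqrt(np.mean(err**2)))
```

In this toy case the combined error comes out as a fraction of one LSB RMS, which says nothing about audibility, only about the scale of the uncertainty.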

In physics, data are always presented with error bars in one or more dimensions. It leads one to ask, why does this engineer think he has points? Is he confusing this problem with talk of S/N ratio?

These considerations lead us, contrary to the presentation, to the conclusion that we do not have lollipop graphs of points; we have regularly spaced blobs of uncertainty, which are being idealized. However, this also shows that, regardless of the time allowed for sampling and reconstruction, there is an infinity of curves which fit the actual, imperfect data. Not a unique curve by any means.
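
As a concrete toy example of that last point (Python with NumPy, purely illustrative): two different waveforms whose stored 16-bit words come out identical, because they differ by less than the quantizer can resolve at the sample instants.

```python
import numpy as np

fs, bits = 44100, 16
lsb = 2 / 2**bits                       # one quantization step on a +/-1 scale
t = np.arange(4410) / fs                # a tenth of a second of samples

a = 0.9 * np.sin(2 * np.pi * 1000 * t)              # curve #1
qa = np.round(a / lsb).astype(int)                   # its 16-bit words

# Curve #2: the quantized levels plus a 3 kHz tone of amplitude 0.3 LSB.
# A genuinely different waveform, yet one that never strays more than half
# an LSB from the stored levels at the sample instants.
b = qa * lsb + 0.3 * lsb * np.sin(2 * np.pi * 3000 * t)
qb = np.round(b / lsb).astype(int)

print("different waveforms, identical 16-bit data:", np.array_equal(qa, qb))
```

The stored words only bound the waveform to within half an LSB at each sample instant; by themselves they do not single out one curve.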

Again, I have Cleeds to thank for refining my understanding of digital. I agree that we can't discuss digital intelligently unless we understand how it works and how it doesn't. Please correct that which you find to be in error.



So, terry9, are you accusing Monty of somehow messing with the scope or hacking the SW, since it showed a perfect smooth waveform? I am not sure anyone cares whether you grant it; simply get the equipment and replicate it, or can you prove he was wrong? Just announcing "I do not grant" isn't going to cut it when we can all see his work. Where's yours? Post a YouTube video refuting it.


"This is important because the talk continues to use this result (a perfect, smooth waveform) as proven, which I do not grant" 
I gave my conclusions and my reasons for all to see and criticize. That's the way of math and science. I suggest that you re-read my post and then decide who it is that is "Just announcing".