Upsampling: Truth vs. Marketing


Has anyone done a blind A/B test of the upsampling capabilities of a player? If so, what was the result?

The reason I ask is that all the players and converters that support upsampling go from 44.1 to 192. And that is just plain wrong.

This adds a huge amount of interpolation error to the conversion, and should sound like crap by comparison.
I understand why manufacturers don't go to the logical 176.4kHz: once again, they would have to write more software.

All in all, I would like to hear from users who think their player sounds better playing Redbook (44.1) upsampled to 192. I have never come across a sample rate converter chip that does this well sonically, and if one exists, then it is truly a silver bullet. Then again....44.1 should only be upsampled to 88.2 or 176.4, unless you can first go to many GHz and then downsample to 192, and even then you will have interpolation errors.
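The arithmetic behind the 88.2/176.4 preference is easy to check: those rates are integer multiples of 44.1kHz, while 96 and 192 require awkward fractional ratios. A minimal sketch (Python used purely for illustration):

```python
from fractions import Fraction

# Resampling ratios from Red Book's 44.1 kHz to common target rates.
# Integer ratios (2, 4) need only simple interpolation; the fractional
# ones (320/147, 640/147) force a far more complex rate conversion.
for target_hz in (88200, 176400, 96000, 192000):
    ratio = Fraction(target_hz, 44100)
    print(f"{target_hz:>6} Hz : ratio {ratio}")
# 88200 -> 2, 176400 -> 4, 96000 -> 320/147, 192000 -> 640/147
```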
izsakmixer
Germanboxers,
basically, you are correct in pointing out that 96K & 192K were selected owing to other hi-res audio formats (namely DVD-A). Here the electronics is a multi-rate system wherein it interpolates (upsamples) by 160 and then decimates by 147 to change the sampling rate from 44.1K to 48K.
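The 160/147 figure can be verified with a little arithmetic; this sketch (Python, for illustration only) confirms that interpolating 44.1K by 160 and then decimating by 147 lands exactly on 48K:

```python
from fractions import Fraction

# 48000 / 44100 reduces exactly to 160 / 147, so a converter that
# interpolates by 160 and decimates by 147 hits 48 kHz with no
# residual rate error.
assert Fraction(48000, 44100) == Fraction(160, 147)
print(44100 * 160 // 147)  # -> 48000, exactly
```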

However, if you buy any of SimAudio's products, then you'll find that they oversample at exactly 8X, which is 352.8KHz!!! So, here is one commercial co. that doesn't use 96K, 192K or 384K. There must be others too but I cannot think of them right now.
FWIW.
Pabelson...I guess you mean that if the sine wave frequency is EXACTLY one half the sampling frequency, a sync situation exists. OK. Change the sampling frequency enough so that the phasing of the sine wave drifts across the sampling interval. Picky, Picky :-)
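The sync situation with a sine at exactly half the sampling frequency is easy to demonstrate numerically: depending on where the samples fall in the cycle, they can land on the peaks or miss the signal entirely. A small sketch (the 1kHz rate is an arbitrary assumption):

```python
import math

fs = 1000.0   # assumed sample rate in Hz (the value is arbitrary)
f = fs / 2    # sine at exactly half the sample rate (the Nyquist limit)

# Samples of sin(2*pi*f*n/fs + phase) reduce to sin(pi*n + phase).
# Phase 0: every sample lands on a zero crossing -> all (near) zeros.
# Phase pi/2: every sample lands on a peak -> alternating +1, -1.
zero_phase = [math.sin(2 * math.pi * f * n / fs) for n in range(8)]
peak_phase = [math.sin(2 * math.pi * f * n / fs + math.pi / 2) for n in range(8)]

print(zero_phase)  # all ~0: the sine is invisible at this phase
print(peak_phase)  # ~ +1, -1, +1, -1, ...: full amplitude captured
```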

I personally don't have much of a gripe about CDs, but then my ears are 67 years old, and don't have the HF sensitivity of some of our golden eared friends. Based on my experience, which led me to believe that Nyquist was an optimist, I can believe that HF is a lot better with 96KHz sampling.

Sean...I disagree about the effect on quality of "off the shelf" parts. In the military electronics business, we used to design all our own chips, even microprocessors. However, even at great expense we could never match the research and development effort, proprietary skill, and quantity production typical of commercial products that were functionally equivalent to our designs. A mature "off the shelf" product has had all its bugs weeded out.
Since Sean has confessed his error, I will do the same. My explanation actually showed a ramping signal of 3 units in four samples. While this was not incorrect, it is not consistent with the analog signal that I assumed at the beginning. The following is an updated version of my explanation, for posterity.

Philips used 4 times oversampling in their first CD players so that they could achieve 16 bit accuracy from a 14 bit D/A. At that time, the 16 bit D/As that Sony used were lousy, but the 14 bit units that Philips used were good. The really cool part of the story is that Philips didn't tell Sony what they were up to until it was too late for Sony to respond, and the Philips players ran circles around the Sony ones.

In Sean's explanation the second set of 20 dots in set B should not be random. Those dots should lie somewhere between the two dots adjacent to them.

Here is my explanation.

Assume there is a smoothly varying analog waveform with values at uniform time spacing, as follows. (Actually there are an infinite number of in-between points).

..0.. 1.. 2.. 3.. 4.. 5.. 6.. 7.. 8.. 9.. 10. 11. 12 etc.

If the waveform is sampled at a frequency 1/4 that of the uniform time spacing of the example (44.1 KHz perhaps), the data will look like the following:

..0............... 4.............. 8...............12..
THIS IS ALL THERE IS ON THE DISC.

A D/A reading this data, at however high a frequency, will output an analog "staircase" voltage as follows:

..000000000000000044444444444444448888888888888888 12..

But suppose we read the digital data just four times faster than it is really changing, add the four values up,
and divide by 4.

First point.....(0+0+0+4)/4 = 1
Second point....(0+0+4+4)/4 = 2
Third point.....(0+4+4+4)/4 = 3
Fourth point....(4+4+4+4)/4 = 4
Fifth point.....(4+4+4+8)/4 = 5
Sixth point.....(4+4+8+8)/4 = 6
Seventh point...(4+8+8+8)/4 = 7
Eighth point....(8+8+8+8)/4 = 8
....And so on

Again we have a staircase that only approximates the instantaneous analog voltage generated by the microphone when the music was recorded and digitized. But the steps of this staircase are much smaller than those of the staircase obtained when the digital data stream from the disc is processed only at the same rate it was digitized at. The smaller steps mean that the staircase stays closer to the original, continuously ramping analog signal.

Note also that we are now quantized at 1, instead of 4, which is the quantization of the raw data stream obtained from the disc. A factor of 4: that's like 2 bits of additional resolution. That's how Philips got 16 bit performance from a 14 bit D/A.
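The whole 4x-oversampling walk-through above can be reproduced in a few lines; this sketch (Python, purely illustrative) builds the held staircase and applies the same 4-sample moving average as the table:

```python
# Disc samples -- the only data actually stored (the ramp from the example).
disc = [0, 4, 8, 12]

# A zero-order-hold D/A repeats each value: the coarse staircase.
held = [v for v in disc for _ in range(4)]

# Read 4x faster, averaging each run of four values (a 4-tap moving average).
smoothed = [sum(held[i:i + 4]) // 4 for i in range(1, len(held) - 3)]

print(held)      # coarse staircase, steps of 4
print(smoothed)  # [1, 2, 3, ..., 12] -- steps of 1, as in the table above
```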
The Esoteric DV-50 is another player that does not use 96K, 192K, or 384K. The first upsampling point on the DV-50 is 352.8K, and the higher selections continue to follow that pattern.
Hmmm... I'm surprised that nobody jumped all over me for stating the obvious. That is, digital is a poor replication of what is originally an analogue source.

I'm also glad to see that nobody contradicts the fact that having more sampling points can only improve the linearity of a system which is less than linear to begin with. After all, if digital was linear, we could linearly reproduce standardized test tones. The fact that we can't do that, at least not as of yet with current standards, would only lead one to believe that analogue is still a more accurate means of reproducing even more complex waveforms.

Converting analogue to digital and back to analogue again only lends itself to potential signal degradation and a loss of information. One would think that by sampling as much of the data as possible ( via upsampling above the normal sampling rate ), one would have the greatest chances for better performance, with a reduction in the amount of non-linearities that already exist in the format. Evidently, there are those that see things differently. Sean