Hi-rez digital, is the analog amp the bottleneck?


Interesting interview with John Siau of Benchmark Media. About halfway down is this quote:
The 32-bit systems will offer no advantage until converters reach a SNR of about 138 dB, but many other things would have to change as well. The best line-level analog circuits barely exceed 130 dB, many power amplifiers barely exceed 16-bit (96 dB) performance, and today’s best 24-bit recordings barely exceed the SNR of a 16-bit system. Most audiophile systems are limited by the SNR of the volume control circuits.
JA (Stereophile) has made similar comments in the measurements sections of his recent amp reviews. So what amps are people using with their hi-rez digital source systems?
bob_reynolds

Showing 2 responses by almarg

Hi Bob,

I agree with the preceding responses, and I think that the increased sample rates of hi-rez are much more significant than the increased number of bits per sample.

It is tempting to think of a 96kHz sample rate as little more than a doubling of the 44.1kHz Redbook standard, while an 8-bit increase in the resolution of each sample represents a 256-fold improvement.

But I think that the sample rate increase is best viewed in relation to the Nyquist rate (the 40kHz minimum sampling frequency that is theoretically required to digitally represent a 20kHz bandwidth). Redbook's 44.1kHz exceeds the Nyquist rate by only about 10% (a margin so small that it has always seemed wondrous to me that it works as well as it does). But 96kHz exceeds the Nyquist rate by 140%, and 192kHz exceeds it by 380%. Assuming good implementation, those higher margins should make possible vastly reduced side effects from anti-aliasing and reconstruction filters, effects that have been generally recognized as limiting CD sound quality right from the start.
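To put numbers on that filter argument, here is a quick back-of-the-envelope sketch (Python is just my choice for illustration) computing each format's margin over the Nyquist rate and the transition band its anti-aliasing/reconstruction filter has to work with, from the 20kHz passband edge up to half the sample rate:

```python
# How much room each sample rate leaves above the 40kHz Nyquist rate,
# and how wide a transition band the anti-alias/reconstruction filter gets.
PASSBAND_EDGE_HZ = 20_000                 # top of the audible band
NYQUIST_RATE_HZ = 2 * PASSBAND_EDGE_HZ    # 40kHz minimum sampling frequency

for fs in (44_100, 96_000, 192_000):
    margin_pct = (fs / NYQUIST_RATE_HZ - 1) * 100
    transition_hz = fs / 2 - PASSBAND_EDGE_HZ    # from 20kHz up to fs/2
    print(f"{fs/1000:>5.1f} kHz: {margin_pct:4.0f}% over Nyquist, "
          f"{transition_hz/1000:5.2f} kHz transition band")
```

The Redbook filter has to go from passing 20kHz to fully attenuating by 22.05kHz, a window of only about 2kHz, while at 96kHz the filter gets roughly 28kHz to roll off in, which is why much gentler filters become practical.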

So my answer to your original question is no, I don't think the analog amp is a bottleneck.

Best regards,
-- Al
Ben & Lev, I appreciate the nice words!

Putting aside sample rate considerations, my feeling is that IN ITSELF the difference between 24-bit resolution and 32-bit resolution will be utterly inaudible. The music doesn't require it, even when it is minimally compressed and has extremely wide dynamic range, such as some well-recorded classical symphonic performances; the ambient noise levels in our listening environments won't support it; and our ears could at best only marginally perceive it anyway, even under the most ideal of circumstances.
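Just to put rough numbers behind that, the theoretical full-scale SNR of an ideal N-bit converter is about 6.02 x N + 1.76 dB. Here is a minimal sketch comparing that against an assumed ~120 dB span from the loudest orchestral peaks down to hall/room ambience (that 120 dB figure is my illustrative assumption, not a measurement):

```python
# Theoretical full-scale SNR of an ideal N-bit quantizer.
def ideal_snr_db(bits: int) -> float:
    return 6.02 * bits + 1.76

# Assumed span from loudest program peaks down to ambient noise;
# ~120 dB is an illustrative upper bound for real recordings.
ASSUMED_PROGRAM_RANGE_DB = 120.0

for bits in (16, 24, 32):
    snr = ideal_snr_db(bits)
    surplus = snr - ASSUMED_PROGRAM_RANGE_DB
    print(f"{bits}-bit: {snr:6.1f} dB theoretical SNR "
          f"({surplus:+6.1f} dB vs. assumed program range)")
```

Even 24 bits already sits some 26 dB beyond that assumed program range; the extra 48 dB that 32 bits would theoretically add buys nothing our rooms or ears can use.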

However, any practical a/d or d/a converter chip, and its surrounding circuitry, will have a host of error mechanisms that degrade performance from the theoretical ideal corresponding to the number of bits being converted: things like differential non-linearity, harmonic distortion, internally generated timing jitter, passband ripple, inter-channel crosstalk, internally generated noise, etc. As I see it, the main advantage of having 32 bits of resolution instead of 24 will presumably (and hopefully) be that those other sources of error are correspondingly reduced.
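The usual way to express how far a real converter falls short of its nominal resolution is the effective number of bits (ENOB), computed by inverting that same ideal-quantizer relation from a measured SINAD figure. A minimal sketch (the 110 dB SINAD is an assumed example spec, not any particular chip):

```python
# Effective number of bits (ENOB) from a measured SINAD figure,
# inverting the ideal-quantizer relation SNR = 6.02*N + 1.76 dB.
def enob(sinad_db: float) -> float:
    return (sinad_db - 1.76) / 6.02

# Assumed example: a nominally 32-bit converter measuring 110 dB SINAD.
measured_sinad_db = 110.0
print(f"Nominal 32 bits at {measured_sinad_db} dB SINAD "
      f"-> about {enob(measured_sinad_db):.1f} effective bits")
```

In other words, a "32-bit" part whose real-world noise and distortion limit it to 110 dB behaves like an 18-bit converter, which is why improving those error mechanisms matters far more than the nominal bit count.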

And concerning amplifiers, my feeling is that I definitely would not place ultra-good signal-to-noise performance among my leading criteria in amplifier selection. The difference in noise between a hypothetical amplifier that could approach 32-bit performance and one that supports 16- to 24-bit noise levels is simply not going to be audible, and would most likely be far outweighed by many other sonic differences.
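A back-of-the-envelope calculation shows why (the 105 dB SPL peak level and 35 dB SPL room noise floor are assumed illustrative figures):

```python
# Where does an amplifier's noise floor land acoustically?
# If peaks reach PEAK_SPL_DB at the seat, amp noise sits its SNR below that.
PEAK_SPL_DB = 105.0         # assumed loud listening peaks
ROOM_AMBIENT_SPL_DB = 35.0  # assumed quiet domestic room noise floor

for amp_snr_db in (96.0, 122.0, 194.0):   # ~16-, ~20-, ~32-bit equivalent
    noise_spl = PEAK_SPL_DB - amp_snr_db
    margin = ROOM_AMBIENT_SPL_DB - noise_spl
    print(f"Amp SNR {amp_snr_db:5.1f} dB: noise at {noise_spl:6.1f} dB SPL, "
          f"{margin:5.1f} dB below the room's own ambience")
```

Even the "16-bit" amplifier's noise already sits more than 25 dB under the room's ambient noise, so chasing 32-bit-equivalent amplifier SNR is academic; other sonic attributes will dominate.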

Best regards,
-- Al