Hi folks, there is one paradigm that bothers me a bit: many experts and audiophiles state that Red Book technology is outdated because of its limited bandwidth. I've read that the human ear is capable of perceiving frequencies beyond the normal hearing range, up to 40kHz - but only with live music! When listening to recorded music the bandwidth is restricted, because many microphones can only pick up frequencies up to 20kHz. So why the need for more and more bandwidth in digital sound reproduction technology? What is not present in the recording can't be heard either, even with very wide-bandwidth music reproduction gear.
Chris, you are quite confused about the need for bandwidth in digital playback.
Basically, what mankind is trying to do is make digital sound more & more like analog. I believe that this is an implicit acknowledgement that analog reproduction is more suitable for the human ear. However, people want the positives of analog playback (naturalness) without its hassles (tape hiss, crackles & pops, flipping a side every 20-25 minutes, groove wear from repeated play, the many minutes it takes to clean an LP, etc).
In the earlier days of CDs, people were not careful when mastering digital music - they started with 16-b of music data, then mixed tracks & processed the whole album. As a result, several bits were lost to additive noise & what one effectively got was 12-13 bits of music. This manifested itself as drastically reduced dynamic range. To the listener it felt like compressed music, as the span from the quietest to the loudest music signal was not large enough to portray the essence of the music.
As time went by & the recording industry understood this, they started using 20-b of raw music data as the starting point of their mixing & processing. In this case, even if bits were lost to additive noise, they were still left with 16-17 music bits. When that was pressed onto CDs, the sound got far more dynamic. It sounded much more like real music.
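To put rough numbers on why losing 3-4 bits hurts so much: by the standard rule of thumb for an ideal converter (my own illustration, not a figure from any mastering engineer), each bit is worth about 6dB of dynamic range. A quick sketch in Python:

```python
# Theoretical dynamic range of an ideal N-bit PCM quantizer,
# per the standard 6.02*N + 1.76 dB rule of thumb.
def dynamic_range_db(bits):
    return 6.02 * bits + 1.76

for bits in (12, 16, 20, 24):
    print(f"{bits}-bit: ~{dynamic_range_db(bits):.1f} dB")
# 12-bit: ~74.0 dB   <- roughly what a carelessly mastered "16-bit" CD delivers
# 16-bit: ~98.1 dB
# 20-bit: ~122.2 dB
# 24-bit: ~146.2 dB
```

So an album that effectively retains only 12-13 bits has given up 18-24dB of range compared to true 16-b - easily audible as that "compressed" feeling.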
Several corollary advancements were made such as HDCD, XRCD, XRCD2, XRCD24. These technologies used 20-b & even 24-b as the starting point for the processing.
Then came 24-b, 96KHz audio in the form of DVD-A.
These days we have hi-rez downloadable music in 24-b, 192KHz.
Then, there is the DSD signal - the encoding used on SACD - which is a 1-b signal oversampled at 2.8224MHz (64 x the CD rate of 44.1KHz).
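That DSD64 rate is easy to verify - it is simply the Red Book sampling rate scaled up 64 times:

```python
# DSD64 (the base SACD rate) clocks its 1-bit stream at 64x the Red Book rate.
cd_rate_hz = 44_100
dsd64_rate_hz = 64 * cd_rate_hz
print(dsd64_rate_hz)  # 2822400 Hz, i.e. 2.8224 MHz
```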
So, what's happening here? The sampling rate of the digital signal is steadily increasing - from 44.1KHz --> 48KHz --> 96KHz --> 192KHz. This basically means that more samples of the analog signal are being taken during analog-to-digital conversion (if old master tapes are being converted) OR, if music is being recorded fresh today, the analog signal coming from the mic is being sampled at 96KHz or 192KHz right off the bat & stored on HDD.
If one has more samples of a signal, there is much less variation in the music signal amplitude from one sample to the next - like analog! :-) If you draw an analog waveform (say, a sine wave) you'll notice that the values change smoothly from one value to another - no abrupt jumps. So when you digitize an analog waveform by taking a lot of closely spaced samples, you are emulating that analog waveform in the discrete-time domain.
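You can see this shrinking sample-to-sample jump with a quick sketch (my own illustration, not code from any audio tool) that measures the largest step between adjacent samples of a 1KHz sine wave at different sampling rates:

```python
import math

def max_step(freq_hz, fs_hz, n=10_000):
    # Largest amplitude change between adjacent samples of a unit-amplitude sine.
    s = [math.sin(2 * math.pi * freq_hz * k / fs_hz) for k in range(n)]
    return max(abs(b - a) for a, b in zip(s, s[1:]))

for fs in (44_100, 96_000, 192_000):
    print(f"fs = {fs:>6} Hz: biggest jump between samples ~ {max_step(1_000, fs):.4f}")
```

Each doubling of the sampling rate roughly halves the biggest jump - exactly the "smoother staircase" intuition above.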
If one is increasing the sampling rate of the digital signal in an effort to make it sound more analog-ish, then the bandwidth of the electronics processing this signal also has to increase (otherwise, the electronics will not be able to settle to their final voltages before the next clock cycle & this will lead to signal-dependent distortion - a very bad thing).
Also, if you are dealing with higher-sample-rate music data over a USB connection, the data flows in a *serial* fashion between computer & DAC (or jitter box). USB - Universal SERIAL Bus.
So, if you want to transport a 16-b word of music @ a 44.1KHz rate - this is what a CD laser mechanism does: every 1/44.1KHz seconds it spits out a 16-b word read off the physical spinning CD - you have to transport each bit in 1/(16 x 44.1KHz) = 1/705.6KHz seconds, so that you are ready to transport the next 16-b word that arrives 1/44.1KHz seconds later.
So, your USB link needs at least 705.6Kbit/s of throughput per channel (double that, 1.4112Mbit/s, for stereo), & the electronics on each end need analog bandwidth comfortably above that bit rate so each bit transition can settle cleanly.
When you want to transport a 24-b, 96KHz word using the USB port, your link needs 2.304Mbit/s per channel.
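The arithmetic in the last two paragraphs, made explicit (figures are bits per second; the per-channel numbers double for stereo):

```python
# Raw PCM bit rate = bits per sample x sampling rate x channel count.
def bit_rate_bps(bits, fs_hz, channels=1):
    return bits * fs_hz * channels

print(bit_rate_bps(16, 44_100))              # 705600  -> 705.6 Kbit/s per channel
print(bit_rate_bps(24, 96_000))              # 2304000 -> 2.304 Mbit/s per channel
print(bit_rate_bps(16, 44_100, channels=2))  # 1411200 -> the familiar 1.4112 Mbit/s stereo CD rate
```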
So, now you can see how the bandwidth of the hardware is increasing to support higher & higher sampling rates.
The music info on a redbook CD still remains 16-b, but we are trying to make the sound more & more analog-ish (by increasing the sampling rate) & we need hardware that keeps up with this higher-rate digital signal; hence, the bandwidth of the hardware is also increasing in tandem.
There are DACs today, like the Weiss Minerva, that will natively accept a 24-b, 192KHz signal over FireWire (another high-speed serial interface, developed at Apple), but I believe that the DAC inside is still a 20-b DAC (someone correct me if I'm wrong. Thanx). I do not think that there are many (or any) true 24-b *audio* DACs - I believe it's very hard to achieve that fidelity with so many bits. (I could be wrong.) Even if you have a 20-b or 24-b DAC, your cables, preamp, power amp & speakers, noisy AC power, inadequate chassis damping, inadequate rack damping, room acoustics, etc. will ultimately limit the overall dynamic range as heard by you at your listening chair.
Ok, I've rambled way too much! I hope, tho', that I was of some help. Thanks.