I do not know what you read or from where but it sounds like nonsense to me.
9 responses
Here's something related from Telarc:
"Telarc's first digital recordings utilized the Soundstream recording system which is based on a sampling rate of 50kHz, compared to a standard compact disc, which has a sampling rate of 44.1kHz. The higher rate of the Soundstream system offers an extended frequency response (up to 25kHz) and increased detail. To produce the original compact disc, the Soundstream signal had to be converted from 50kHz to 44.1kHz, a process that inherently causes a loss of quality not only by lowering the frequency response, but also by the complex mathematical process needed to derive 44.1kHz from 50kHz. Until recently, no digital system has had the capability to capture the full quality the Soundstream process had to offer."
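The "complex mathematical process" Telarc describes is rational-ratio sample rate conversion: upsample by the numerator of the reduced ratio, low-pass filter, downsample by the denominator. A minimal sketch of the factors involved (illustrative Python only, filter omitted):

```python
from fractions import Fraction

# Illustrative sketch: converting between rates means upsampling by the
# numerator and downsampling by the denominator of the reduced ratio,
# with a low-pass filter in between (omitted here).
for src, dst in [(50_000, 44_100), (96_000, 44_100)]:
    ratio = Fraction(dst, src)
    print(f"{src} Hz -> {dst} Hz: up {ratio.numerator}, down {ratio.denominator}")
```

The awkward 441/500 and 147/320 ratios are why these conversions need long polyphase filters rather than simple decimation.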
I understand that digital recordings have gotten much better since the old Soundstream days. So a good modern recording at 24-bit/96kHz is even more drastically compressed to fit the Redbook format? Seems to me that the only thing truly "dead" is much of the information from the master tape. Unless....
I agree with the above that it is nonsense to claim a 1:4 loss. I make my own recordings of choral/orchestral music, so I know the master tapes before they leave the studio for CD fabrication. I often compare master and CD, and with a really high-end CD rig the results are very satisfying. The better the original recording/master, the better the CD: so that 1:4 claim sounds to me like another attempt to push SACD or whatever format into a market which doesn't need it.
Redbook is 44,100 samples per second.
First of all, understand that the performance being recorded has literally no sampling. Rather, there is a continuous stream of music; let's call it an effectively infinite number of samples per second.
If the master tape is digital, recorded at a higher resolution than the early generation digital recorders, then the sampling rate might be 96K (redbook is ~1/2 of that), 192K (~1/4) or 2.8M (DSD).
No matter what the master tape is, redbook has been and will continue to be a 44.1K medium. So I ask, what does it matter what the original master resolution is, as long as it is high enough to get at least 44.1K onto the CD?
So the real question is not what the master was; it is what formats other than redbook CD will sound like if we can get that higher sampling rate onto a different medium such as SACD or DVD-A etc.
It's not just an issue of sample rate; you also have to consider word length. Redbook CD uses a 16 bit word, whereas better quality digital recording and processing is done at 24 bit or higher. The sample rate conversion process usually entails altering both factors. Opinions differ, but in my experience word length reduction is more noticeable than a reduction in sample rate alone. There are different algorithms for accomplishing SRC and some are clearly more transparent than others. The amount and type of dithering also has a major influence. It's misleading to ask for a ratio of loss, since the data reduction does not correspond linearly to what the ear hears. Depending on the type of music, the skill of the engineer and the choice of SRC processing, the conversion from 96/24 to 44/16 can range from very noticeable to barely perceptible. Roughly speaking it's of the magnitude of going from analog tape to vinyl.
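The word-length-reduction and dithering point above can be sketched in Python. This is a toy model with hypothetical integer samples, not any real mastering tool: detail smaller than one 16-bit step simply vanishes when truncated, but with TPDF dither it survives, spread out as noise.

```python
import random

def reduce_24_to_16(s):
    """Undithered word-length reduction: round to the nearest 16-bit value."""
    return (s + 128) >> 8

def dither_24_to_16(s):
    """TPDF dither: add two uniform noise sources, together spanning about
    +/- one 16-bit LSB (256 units at 24-bit scale), before rounding."""
    noise = random.randint(0, 255) + random.randint(0, 255) - 255
    return (s + noise + 128) >> 8

# A 24-bit detail smaller than one 16-bit step (256) is lost outright when
# reduced without dither, but its value survives as the *average* of the
# dithered output.
quiet = 100
random.seed(1)
avg = sum(dither_24_to_16(quiet) for _ in range(10_000)) / 10_000
print(reduce_24_to_16(quiet))   # 0 -- sub-LSB detail gone
print(avg)                      # close to 100/256 -- detail preserved as noise
```

This is why the choice and amount of dither matters so much: it trades a hard loss of low-level detail for a gentle noise floor.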
You certainly don't need an infinite sampling rate to capture live music, let alone what's contained on a first generation tape. In theory a 192kHz/32 bit recording should have about 192dB of dynamic range and be completely linear out to 48kHz, well below its 96kHz Nyquist limit.
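The arithmetic behind those figures, assuming the standard rule of thumb that an ideal N-bit quantizer gives about 6.02N + 1.76 dB of dynamic range and that bandwidth tops out at half the sampling rate:

```python
# Rule-of-thumb sketch, not a measurement.
def dynamic_range_db(bits):
    """Ideal N-bit quantizer dynamic range, ~6 dB per bit."""
    return 6.02 * bits + 1.76

def nyquist_hz(sample_rate):
    """Highest frequency a given sample rate can represent."""
    return sample_rate / 2

print(round(dynamic_range_db(16)))   # 98  (redbook)
print(round(dynamic_range_db(32)))   # 194 (close to the ~192dB quoted above)
print(nyquist_hz(192_000))           # 96000.0
```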
Here's a really good paper from Meridian
The summary is that redbook CD can produce very good quality sound when done properly (with correct dithering in particular). The problem is not redbook CD, but rather, poor implementation of redbook CD. This stacks up with my personal experience that really well recorded CDs sound phenomenal ... much better than average LPs. It's the engineering that counts more than the medium.
In addition, it is not necessary to have more than 20 bits per sample (120dB dynamic range), since there is presently no audio amplifier capable of greater than 120dB dynamic range ... more bits would simply encode noise.
The reason 24 bits are chosen is because most commercial DSPs (signal processors) accept a 24 bit wordlength, and if the source is limited to 20 bits the 4 extra bits make the sample manipulation very much simpler (no overflow).
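That headroom argument can be checked with a toy calculation. Assumed scenario (my illustration, not from the post): mixing sixteen full-scale 20-bit samples needs log2(16) = 4 guard bits, which is exactly what a 24-bit word provides.

```python
# Assumed worst case: summing 16 full-scale 20-bit samples.
MAX_20BIT = 2**19 - 1              # largest positive signed 20-bit value
acc = sum([MAX_20BIT] * 16)        # worst-case mix-bus accumulation
print(acc.bit_length())            # 23 -> still fits in a signed 24-bit word
print(acc <= 2**23 - 1)            # True: no overflow
```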
As for sampling rate, 44.1kHz is only problematic inasmuch as the anti-aliasing filter causes either phase or amplitude distortion. Notice that some DACs are filterless (Audio-Note) to try to remedy this. Sampling above 96kHz has no benefits for 20 bit PCM.
So if the absolute optimum is 20 bit/96kHz, you could argue that redbook carries only about 37% of the raw bits. However, in practice it probably delivers more than 95% of the useful information.
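For the record, the back-of-envelope arithmetic behind that percentage (raw PCM bits per second per channel, ignoring error correction and framing):

```python
# Raw PCM data rate per channel = sample_rate * bits_per_sample.
redbook = 44_100 * 16      # 705,600 bits/s
optimum = 96_000 * 20      # 1,920,000 bits/s
print(round(100 * redbook / optimum))   # 37 (percent)
```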