I have to nit-pick with Alex a bit, although I think he basically has it down. The term "upsampling" applies to any interpolation method that injects additional samples into the 44.1 kHz data stream BEFORE DIGITAL FILTERING. The same process is called "oversampling" when it is performed AFTER DIGITAL FILTERING, in the DAC itself. Oversampling is always done at integer multiples of 44.1 kHz because the DAC chip itself does not have the computing horsepower to interpolate at non-integer multiples. Upsampling can be, and is, done at non-integer multiples of 44.1 kHz (e.g. 96 kHz and 192 kHz) because it is usually handled by a dedicated IC that does have the required computing resources.

The actual CD data is not truncated at all in non-integer (or any other) interpolation scheme. The original samples are all still there in the upsampled data stream; you've just injected new ones to make what one hopes is a more accurate reproduction of the analog waveform.

The truncation error Alex refers to for non-integer interpolation is inherent in the binary floating-point representation of the conversion ratio. In upsampling 44.1 to 96, each original sample maps to 2.1768707... output samples (i.e. about 1.1768707 additional samples per original sample). The ratio 96000/44100 reduces to 320/147, which is rational, but because 147 is not a power of two it has a non-terminating binary expansion, so there is no way to represent it exactly in binary. That means, depending on the number of bits available for the computations, some floating-point results will be truncated, producing spacing between the samples that is not EXACTLY 1/96000 second. The truncation is generally happening in such a low-order bit that the error should be negligible.

Still, since there does exist SOME amount of spacing error in non-integer interpolation, many people make the hard-to-refute argument that all interpolation should be done at integer multiples of 44.1 kHz, whether it is during upsampling or oversampling. The audibility of integer vs. non-integer upsampling schemes (e.g. 88.2 kHz vs. 96 kHz, or 176.4 kHz vs. 192 kHz) is certainly an open debate.
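As a quick illustration (a minimal Python sketch of the floating-point point, not anything a DAC actually runs), you can check that 96000/44100 reduces to the rational 320/147, that a double-precision float cannot hold that ratio exactly, and that the integer-multiple ratio 88200/44100 = 2 is represented exactly:

```python
from fractions import Fraction

# Exact rational resampling ratios
non_integer = Fraction(96000, 44100)   # reduces to 320/147
integer = Fraction(88200, 44100)       # reduces to 2

print(non_integer)  # 320/147

# 147 = 3 * 7^2 contains odd factors, so 320/147 has a non-terminating
# binary expansion; the float division below is necessarily rounded.
as_float = 96000 / 44100
print(Fraction(as_float) == non_integer)  # False: the float was rounded

# The rounding error is tiny (around 1e-16) but nonzero,
# which is the "spacing error" being discussed.
print(float(abs(Fraction(as_float) - non_integer)))

# The 2x (integer-multiple) ratio is exactly representable in binary.
print(Fraction(88200 / 44100) == integer)  # True
```

The error shown sits far below any plausible audibility threshold, which matches the point above that the truncation lands in a very low-order bit.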