Although in principle any halfway decent modern computer should have no trouble decompressing and processing any audio format, my suspicion is that a major reason for reported sonic differences between lossless formats is that in modern computers the clock rate, voltage, and power draw of the CPU chip are usually varied dynamically as a function of the processing requirements at any given instant. That is done to minimize power consumption and heat generation, and in the case of laptops to prolong battery run-time.
That switching involves changes in current that are both large and abrupt, which can be expected to cause significant noise transients to propagate through a lot of the circuitry on the computer's motherboard. That in turn can be expected to contribute to jitter, or even outright mis-clocking and breakups, on the signals that are used to output the audio data.
See my post here for a description of how to disable that switching in Windows 7. That change did in fact resolve the audio breakup problem the OP in that thread was having. Some computer BIOSes also allow "EIST" (aka "SpeedStep") to be disabled, which may accomplish the same thing. I'm not particularly familiar with Macs, but I believe that third-party software might be needed to do this.
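As a rough sketch of the kind of change involved (this assumes a Windows machine with the standard powercfg utility; the exact steps described in my linked post may differ), pinning the processor's minimum state to 100% in the active power plan keeps the clock from being throttled down when the machine is lightly loaded:

```shell
:: Set the minimum processor state to 100% for both AC (plugged in)
:: and DC (battery) power, so the CPU clock is not dynamically
:: reduced during light processing loads.
powercfg /setacvalueindex SCHEME_CURRENT SUB_PROCESSOR PROCTHROTTLEMIN 100
powercfg /setdcvalueindex SCHEME_CURRENT SUB_PROCESSOR PROCTHROTTLEMIN 100

:: Re-apply the current scheme so the new values take effect immediately.
powercfg /setactive SCHEME_CURRENT
```

Note that running the CPU at full speed continuously will increase power consumption and heat, so on a laptop this is best treated as something to experiment with rather than leave enabled permanently.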
The sonic significance of all of this will obviously be dependent on the particular computer that is being used, on what kind of output is being used (USB, S/PDIF, Ethernet, etc.), and on the jitter sensitivity of the component to which the signal is being sent. Ethernet and wireless presumably have little if any sensitivity to these issues, because of the packetized and buffered nature of the data transmission.
None of this necessarily correlates with the resolution or quality of the audio system.
It would be interesting to know if those who report sonic differences between these formats perceive the same differences when these power conservation features are disabled, and the CPU is running at the same speed and voltage all the time.
Regards,
-- Al