Okay, here's my hypothesis:
When using networked servers, such as Sonos and Logitech, the decoding processes for FLAC and ALAC have plenty of time to execute because the other processes are not real-time. This is because the networked stream is packetized and transmitted very quickly, and does not get involved with the S/W audio stack in the computer. The data processing in the computer is minimal and happens very quickly, making the latency very low for these transfers. This allows the CODEC to run as slowly or as quickly as it needs to in order to achieve accurate results. As a result, the sound quality differences between these lossless formats are usually minimal, if even detectable, when using a network protocol.
On the other hand, when you use Firewire or USB for data streaming, some of the audio stack is involved and the CODEC must run in real-time and keep up with the stream rate. Because the audio stack creates a lot of latency, even when playing uncompressed files, there is evidently not much time left for FLAC decoding to keep up with the bit stream. The timings are therefore very tight, and sound quality suffers. This is why I believe that on a resolving system using USB or Firewire, FLAC files sound like listening through a tunnel, while the same FLAC uncompressed to .wav sounds normal. I don't know if this is a result of poor programming in the FLAC and ALAC CODECs, or just the way they execute when competing for resources, repeatedly queuing and stalling in the execution sequence. With multi-threading in computer OSes now, these applications never run continuously.
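To put rough numbers on the buffering difference I'm describing, here is a back-of-the-envelope sketch. The frame duration and buffer depths are assumptions for illustration, not measurements: with an N-frame FIFO between the decoder and the output, the decoder can finish each frame up to N frame-periods early, so a deeply buffered network path gives the CODEC far more scheduling slack than a shallow real-time USB/Firewire path.

```python
# Back-of-the-envelope decode-slack comparison (all figures assumed, not measured).
# A typical FLAC frame of 4096 samples at 44.1 kHz lasts about 93 ms.
FRAME_MS = 93

def decode_slack_ms(buffer_frames, frame_ms=FRAME_MS):
    """Per-frame scheduling slack for the decoder.

    With an N-frame FIFO between decoder and audio output, frame k only
    has to be ready by time (k + N) * frame_ms, so the decoder's slack
    is N * frame_ms for every frame, regardless of k.
    """
    return buffer_frames * frame_ms

# Hypothetical network streamer buffering ~50 frames (several seconds):
print(decode_slack_ms(50))  # 4650 ms of headroom per frame

# Hypothetical tight USB/Firewire path with only a 2-frame buffer:
print(decode_slack_ms(2))   # 186 ms of headroom per frame
```

Under these assumed numbers, the buffered network case tolerates multi-second scheduling stalls, while the shallow-buffer case must recover within a fraction of a second or the stream underruns.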
There is a lot of anecdotal evidence to support the above hypothesis, but I have no technical proof.
Steve N.
Empirical Audio