WAV vs. FLAC vs. AIFF


Hi, has anyone experienced any sound quality difference between the three formats? Unfortunately I have been using only the WAV lossless format and have no experience with the other two. If you have experience with all three, which one do you prefer, and why? Thanks and happy listening.
highend64
Generally WAV is the best sounding, with AIFF a close second and FLAC a bit behind AIFF. The problem with WAV is that music tags and metadata do not follow the file. It can be very challenging with Amarra and Pure Music when you cannot find the name of a song and have to search by track number.
There is no difference other than metadata. If anyone thinks there is, they need to get a life or better gear!
To add depth to this discussion without hijacking it: which settings are best if you use WAV? Which for FLAC, and so on?
Chad:
If one can hear the difference, doesn't that mean their gear is too good? Typically one cannot hear differences on inferior gear, because it is not transparent and resolving enough (higher noise floor, cheaper power supplies, etc.).
There is a difference between AIFF, FLAC, and WAV files. AIFF and FLAC files have one more process that the music player must get through before it can play music, as opposed to WAV: both have a wrapper around the audio, and the AIFF wrapper is less obtrusive than FLAC's. Removing processes makes the player sound cleaner and more transparent. The reason I know this is that I can hear the difference, and friends of mine in the mastering business have basically confirmed the differences in sound quality I was hearing. It only makes sense that one less process would improve the sound. Chad, I have a great life and a system that I enjoy once in a while.
My assumption is that with any of these formats the bits being delivered are the same, and timing is the real issue. I agree that in the past a lot of activity on the PC could affect the sound, mostly because of timing (jitter) issues. But with async USB, activity on the PC seems to be a secondary issue. As long as the buffer is full and the activity on the PC is relatively low, the async USB seems to be the determining factor.

I use a PC with a very low-end processor, and running FLAC through J River, the CPU hardly goes above 5%. If FLAC decompression is a problem on a system, it is probably because too many other things are running, although, as I said, using an async USB converter/DAC should minimize that effect. There can still be noise issues (like small ground issues), but those should not be affected by the minor differences in processing time needed to decompress files.

I am of the mind that "everything matters," but with properly done async converters/DACs the other effects can be extremely small. That said, there are always people who say they hear differences in everything. I would suggest converting a few of your WAV files to FLAC and listening. dBpoweramp is a good converter, and it has a free version.
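If you would rather script the conversion than use a GUI, here is a minimal Python sketch of the same idea. It assumes the soundfile library is installed and uses a hypothetical track.wav, so adjust the names as needed:

    # Minimal WAV -> FLAC conversion for a listening test.
    # Assumes "pip install soundfile" and a hypothetical track.wav on disk.
    import soundfile as sf

    data, rate = sf.read("track.wav", dtype="int16")  # decode the WAV to raw PCM samples
    sf.write("track.flac", data, rate)                # re-encode the same samples losslessly as FLAC
    print(f"wrote track.flac: {len(data)} frames at {rate} Hz")

Play the two files back to back and see if you can tell them apart.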

What hardware/software are you using?
Although in principle any half-way decent modern computer should have no trouble de-compressing and processing any audio format, my suspicion is that a major reason for reported sonic differences between lossless formats relates to the fact that in modern computers the clock rate, voltage, and power draw of the cpu chip are usually dynamically varied as a function of the processing requirements at any instant of time. That is done for purposes of minimizing power consumption and heat generation, and in the case of laptops to prolong battery run-time.

That switching involves changes in current that are both large and abrupt, which can be expected to cause significant noise transients to propagate through a lot of the circuitry on the computer's motherboard. That in turn can be expected to contribute to jitter, or even outright mis-clocking and breakups, on the signals that are used to output the audio data.

See my post here for a description of how to disable that switching in Windows 7. That change did in fact resolve the audio breakup problem the OP in that thread was having. Some computer BIOSes also allow "EIST" (aka "SpeedStep") to be disabled, which may accomplish the same thing. I'm not particularly familiar with Macs, but I believe third-party software might be needed to do this.

The sonic significance of all of this will obviously be dependent on the particular computer that is being used, on what kind of output is being used (USB, S/PDIF, Ethernet, etc.), and on the jitter sensitivity of the component to which the signal is being sent. Ethernet and wireless presumably have little if any sensitivity to these issues, because of the packetized and buffered nature of the data transmission.

None of this necessarily correlates with the resolution or quality of the audio system.

It would be interesting to know if those who report sonic differences between these formats perceive the same differences when these power conservation features are disabled, and the cpu is running at the same speed and voltage all the time.
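For those who would like to observe the behavior directly, below is a rough Python sketch (an assumption on my part that the psutil library is installed; any hardware monitoring utility would show the same thing). It simply polls the reported cpu clock rate once per second; start a track and watch whether the number jumps around:

    # Polls the CPU clock rate once per second; run it while playing a track.
    # Assumes "pip install psutil"; cpu_freq() may be unavailable on some platforms.
    import time
    import psutil

    for _ in range(30):                       # sample for 30 seconds
        freq = psutil.cpu_freq()              # reported current/min/max clock in MHz
        print(f"cpu clock: {freq.current:.0f} MHz (max {freq.max:.0f} MHz)")
        time.sleep(1)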

Regards,
-- Al
Hi Tom,
Maybe there are many reasons why one file sounds different on one system than on another.

I like to think I have a reasonable setup. The only differences I have heard between the same file are in conversion, i.e., converting from 44.1 kHz to 48 kHz, etc. Whether WAV or AIFF, they sound the same if taken from the same master file. While FLAC involves some on-the-fly unpacking, it too sounds the same unless some terrible software is being used.

If someone believes differently, send me a PM with the original music file and I will gladly convert it on professional equipment to whatever you fancy, WAV to AIFF or whatever, and send it back to you. Then see.
Al - I agree that minimizing system changes is a good practice, as is removing unnecessary software and processes. I take a different route from many: I use a netbook with Windows 7 Starter. There are very few processes running, not even antivirus software (which is OK because it is a dedicated system and I almost never have it on the Internet directly). You can even turn off networking if the files are local. I still think these tricks are less important with a good async USB converter/DAC. I use an async USB converter and a DAC that does its own reclocking, so hopefully I am immune to PC-induced jitter. If you have galvanic isolation you can eliminate other types of noise as well. Unfortunately, when people hear differences it is very difficult to find out why.
Al:

Thanks for the clarification. I've read the same elsewhere, and it seems to boil down to the competition for power that goes on inside a computer and the way the sound card is treated; as currently set up, a computer is not optimized for music playback. That's probably the reason a third-party add-on like BitPerfect (for those of us with shallow pockets) dramatically cleans up the sound.

Once you have an optimized platform, like a well-designed music server, differences between files should be minimal, at least with the better ones. It will be nice when some sort of standardization takes effect, so makers can concentrate on perfecting software/hardware and we can just rip and download to our hearts' content, sit back, and enjoy.

I have yet to hear all that's out there, so this is conjecture on my part, but it seems to make sense.

All the best,
Nonoise
Okay, here's my hypothesis:

When using networked servers, such as Sonos and Logitech, the decoding processes for FLAC and ALAC have plenty of time to execute, because the other processes are not real-time. The networked stream is packetized and transmitted very quickly, and does not get involved with the software audio stack in the computer. The data processing in the computer is minimal and happens very quickly, making the latency very low for these transfers. This allows the codec to run as slowly or as quickly as it needs to in order to achieve accurate results. As a result, the sound quality differences between these lossless formats are usually minimal, if even detectable, when using a network protocol.

On the other hand, when you use FireWire or USB for data streaming, some of the audio stack is involved and the codec must run in real time and keep up with the stream rate. Because the audio stack creates a lot of latency, even when playing uncompressed files, there is evidently not much headroom left for FLAC decoding to keep up with the bit stream. The timings are very tight, and sound quality suffers as a result. This is why I believe that on a resolving system using USB or FireWire, FLAC files sound like listening through a tunnel, while the same FLAC uncompressed to .wav sounds normal. I don't know if this is a result of poor programming in the FLAC and ALAC codecs, or just the way they execute when competing for resources, repeatedly queuing and stalling in the execution sequence. With multi-threading in today's computer OSes, these applications never run continuously.

There is a lot of anecdotal evidence to support the above hypothesis. I have no technical proof, however.

Steve N.
Empirical Audio
Steve - I am using FLAC files, running J River from memory, with less than 5% CPU usage on Windows 7 and an async USB converter. Do you still think the PC cannot keep up with an async USB converter in that case? I would guess J River can keep the memory buffer full. Are you suggesting that there can be enough latency in WASAPI event mode to cause timing problems with an async USB converter?
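As a rough sanity check on whether decoding can "keep up," I timed a full FLAC decode against the track length with the short Python sketch below (assuming the soundfile library and a hypothetical test.flac). On even a modest CPU, the decode finishes many times faster than real time:

    # Times a complete FLAC decode and compares it to the track's duration.
    # Assumes "pip install soundfile" and a hypothetical test.flac on disk.
    import time
    import soundfile as sf

    start = time.time()
    data, rate = sf.read("test.flac")         # decodes the entire file to PCM
    elapsed = time.time() - start
    duration = len(data) / rate               # track length in seconds
    print(f"decoded {duration:.1f} s of audio in {elapsed:.2f} s "
          f"({duration / elapsed:.0f}x real time)")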
DTC - I believe the problem is latency and interference, not CPU usage. When the core audio stack interferes with smooth execution of the FLAC codec, that is where the problems start.

It has nothing to do with the async USB converter or with delivering data to it.

Have you tried converting the FLAC file back to .wav with dBpoweramp and comparing the two tracks by listening?

Steve N.
Empirical Audio
First, sorry for steering this into the age-old jitter and timing discussion.

Steve - I do not claim to understand everything that is going on. I believe the FLAC codec is filling a buffer, in my case the memory buffer for J River. I am assuming that J River decompresses the FLAC before it goes into memory, but that might not be the case.

If the audio stack delivers in real time, rather than through a buffer, then the question seems to be what the async driver (like the M2Tech one you use) is actually doing. My assumption was that the async driver is drawing from a buffer, not from real-time delivery of data by the Windows audio stack. That may be incorrect. Do you have enough detail on the async drivers to know whether process swapping can actually affect the async driver significantly? If there really is a problem there, then improving the clocks in async converters should not be important. It is a complicated process. I probably just do not have enough detail on the audio stack and async driver to understand why FLAC decoding (or any other running process) should interfere with the async driver's timing.

I am not trying to be argumentative. I just do not understand the internal details of what is actually happening.

I do not hear a difference between WAV and FLAC files. But my DAC also reclocks, so that may be the determining factor.
I have not done extensive comparisons, but .wav and FLAC seem to be a wash in terms of inherent sound quality due to format alone. The difference is what is done with the format, i.e., how well the recording is made and how well it is delivered during playback. Just as with CD, vinyl, or any other format, recording quality will range from very bad to very good.

The main considerations are compatibility with your gear and how you will handle metadata/tagging.

FLAC is better for flexible metadata and tagging over time, if that is something you really want to spend time doing. Personally I do not. I mostly use .wav and make sure the metadata is correct before ripping. This approach works well when ripping with Windows Media Player (good-quality rips are possible, and it is included with most Windows computers) and using Logitech Media Server (formerly Squeezebox Server) as the music server for Squeeze or other compatible devices. The caveat is that you cannot change metadata tags (artist, album, title, etc.) once ripped to .wav. You have to redo the rip with new metadata to make a change, which is pretty easy assuming you still have access to the original CDs when needed. I recommend keeping your CDs as archived versions of your music and for reference. Do not rip and then get rid of the CDs; you might regret it later.

You need to pick another program to rip to FLAC, but once you do, that format works well for editing tags when needed, and it also sounds good with the Logitech/Squeeze system.
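For what it's worth, tag edits on FLAC are also easy to script. Here is a minimal Python sketch using the mutagen library (my choice of tool; the album.flac path is hypothetical) that corrects an artist tag in place, with no re-rip needed:

    # Rewrites a tag on an existing FLAC file in place -- no re-rip needed.
    # Assumes "pip install mutagen" and a hypothetical album.flac on disk.
    from mutagen.flac import FLAC

    audio = FLAC("album.flac")
    print(audio.pprint())                      # show the current tags
    audio["artist"] = "Corrected Artist Name"  # update the Vorbis comment
    audio.save()                               # rewrites the metadata only, not the audio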

Beware of conclusions about sound quality differences between formats drawn from a limited test sample; any results are possible. In the end, I believe the format to be essentially a wash in regards to how good it can sound.
"Do you have enough details on the async drivers to know if process swapping can actually effect the async driver significantly? If there really is a problem there, then improving the clocks in async converters should not be important."

Improving the master clock in an async USB converter is always worthwhile, even if your DAC resamples. Resampling, of course, puts the jitter of the resampling clock on the data stream, so it can certainly make things worse.

The clock in the USB converter is orthogonal to the problems with FLAC decompression I believe. They are both important effects.

Steve N.
Empirical Audio

Mapman - whether you can hear these differences or not is highly dependent on your system. For instance, if you use an active preamp of any kind, you may not hear a difference. Preamps add a significant layer of noise, distortion, and compression. Until you run without a preamp (and I don't mean with a resistive passive line stage), you will not realize how much grunge your pre actually adds.

I always believed that my highly modified Mark Levinson pre was really transparent. Then I built a DAC with a good volume control. Boy, was I wrong. The pre is now gathering dust...

Steve N.
Empirical Audio
I use network players designed to stream audio as the source feeding the DAC. That keeps any issues associated with using a general-purpose computer as the source out of the picture.

With this approach there is no audible difference.

Otherwise, there are many factors that can come into play that affect sound, with any source type for that matter. Power and jitter issues associated with decompression processing stand in line with all the rest.

But the format itself does not correlate with sound quality in general. Lots of other crap can go wrong, and chances are it goes wrong differently because of the different hardware and software processing scenarios for different formats. The devil is in the details, not in the source format itself. If processed properly, the results are the same. That can be a big if, though.

Personally, I prefer .wav. Probably lower risk in general but not inherently better or worse otherwise.

Roku, Logitech, .wav, FLAC: it all sounds essentially the same, and quite excellent, to the point where if there is a difference it is not an issue, at least for me.

FWIW, I can change most anything else in my system and hear a clear difference, including ICs, but none at all with any combo of Roku, Logitech, .wav, FLAC.

It also doesn't matter what kind of computer I use for the server. I've used various notebooks over the past few years, and they all sound the same with the network player approach. The only issues are whether they have enough memory and CPU speed to stream in real time without the network player rebuffering, and how long library scans take. Squeezeserver on my current 8 GB Gateway laptop can completely reload its music library from a USB disk drive in about 10 minutes (1,700 albums, 18,000 tracks, 99% .wav, 1% FLAC and MP3 downloads so far).

The Roku SoundBridge is an older and currently poorly supported platform, so I do not recommend it these days, but otherwise its sound quality through a good DAC is top notch, as is the Squeezebox Touch through the same DAC. (I've used several; DACs make a HUGE difference to the sound, so worry about that first.)
Mapman - that figures. With networked audio you are less likely to hear differences. Data is just data, not audio data as with USB or FireWire, so the audio stack is less involved, if at all. Jitter from the computer is also a non-issue with network playback; only the endpoint device's jitter and clock matter.

Steve N.
Empirical Audio
AE,

Exactly!

The network player essentially serves as a "proxy" and effectively isolates the DAC and the rest of the audio system from direct interaction with the computer.

It's definitely the way to go for those who fret about how well their computer might play in a high-end audio system, assuming your network has sufficient bandwidth.

Most home wireless-G networks with moderate to strong connections should work fine under normal circumstances, unless others in the house are competing heavily for bandwidth. Squeezebox Server (now Logitech Media Server) actually converts files to lossless but compressed FLAC to make better use of network bandwidth, and this solution works very well. The Roku SoundBridge does not, and it also tends to rebuffer more frequently on weaker connections since its data is not compressed before transmission. The Roku SoundBridge and Squeezebox Touch sound essentially identical in my rig.
Steve - It seems like what you are describing is a timing problem that comes from the competing FLAC decoding and audio stack processes. How does that timing issue get passed to the DAC? As I understand it, async USB is supposed to control the timing of the data received by the USB converter using its internal clock, taking the computer's timing out of the process. If a timing error or latency from the audio stack and FLAC converter competing for run time is coming through the async USB converter, then it seems the async USB converter is not effectively controlling the timing of the data. Are you suggesting that the run-time issues make the async USB process not work properly? If you think the async USB process is working properly, then why would the FLAC and audio stack processes introduce timing error in the async USB device?
I believe the run-time problems cause the FLAC codec to malfunction.

I do not believe it has anything to do with async USB. My customers told me that this happens even with my older adaptive USB interface.

Steve N.
Empirical Audio
Mapman - Even though I sell only USB interfaces, and I'm able to achieve good performance, I believe that the ultimate solution that takes the computer out of the equation is networked audio streaming.

There are several proprietary networked systems available now, but the SQ of these leaves a lot to be desired IMO. Because they are proprietary, expert developers cannot create either devices or DSP software for these. The ultimate goal should be a ubiquitous player and an open system. I am currently submitting proposals to enable this for the industry.

Steve N.
Empirical Audio
"Malfunction"? Are you saying that the 16/24 bit data for each sample is incorrect? That the data that comes out of a flac decode is different that the wav data?
AE,

I have been very impressed with the sound of both the older SoundBridge and the newer Logitech Squeezebox Touch as sources (I do not use the built-in DACs of either). Definitely better than anything I had prior, and competitive in my mind with the better reference digital rigs I have heard at various dealers. The ultimate compliment is that I have a large record collection, and vinyl is not getting a lot of play time these days.

I am interested to see what devices like these come along down the road. But I must say, as a fairly picky and particular listener, and as one who has worked in computer systems and software development for almost 30 years now, that I think the Squeezebox Touch hits a nice target: a combination of top-notch sound (as a source mainly; I have never tried the built-in DAC, though I hear it is not bad), features, and effective overall design at a very favorable price point. It is definitely a device that, used properly, changes the playing field of high-end audio considerably, and it serves as a good omen for even better things down the road.
""Malfunction"? Are you saying that the 16/24 bit data for each sample is incorrect? That the data that comes out of a flac decode is different that the wav data?"

That is exactly the point. There are many reasons why the data in each format might end up different even when originating from the same CD; anything can happen with computers and their programming at any time, and often does. But the one thing that is not different is the ability of each format to store the exact same digital representation, bit for bit.

That is why the format is not the issue; rather, the issues may occur in everything that happens during the rip and during playing/streaming, at each phase of processing before the data hits the DAC and is converted to analog.

So the bottom line is that each format may lead to different decisions about how to minimize the risk of all the gadgets involved in ripping and playing doing more harm than good. Network audio streaming, I agree, is one of the simplest, least expensive, and most practical ways to help accomplish this.

Most general purpose computers have no business being connected directly to your high end audio gear! Think of this as a form of isolation, similar to other steps you might take to isolate your rig from potential sources of noise.

Also think of network players as a specialized type of computer that is designed to stream audio effectively to your rig. Although this is still an emerging audio solution, it is one that lends itself well to solving the problems using technology that is readily available and affordable TODAY.
The idea that the FLAC decoder produces wrong numbers is just not credible. People have repeatedly shown that the compression/decompression algorithms work. And computers very, very seldom make computational mistakes; if a spreadsheet produced different results each time you opened it, people would not use spreadsheets. If there is one thing a computer can do, it is perform computations correctly. If people think the computer is regularly doing the FLAC computations incorrectly, and in a random manner, I would love to see some actual proof. I just do not think it happens.
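Anyone who wants proof one way or the other can gather it at home. Here is a short Python sketch (assuming the soundfile and numpy libraries, plus hypothetical track.wav and track.flac files made from the same rip) that decodes both and compares every sample:

    # Decodes a WAV and a FLAC made from the same rip and compares every sample.
    # Assumes "pip install soundfile numpy" and hypothetical matching files on disk.
    import numpy as np
    import soundfile as sf

    wav, _ = sf.read("track.wav", dtype="int16")
    flac, _ = sf.read("track.flac", dtype="int16")
    print("bit identical:", np.array_equal(wav, flac))

Note, too, that the FLAC format stores an MD5 checksum of the unencoded audio in its header, so running "flac -t" on a file performs essentially the same verification.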

So, the other issues for audio seem to be electrical noise and timing. Electrical noise, for example through grounds, can potentially be an issue. That is why people are building galvanic isolation into higher-end devices: to break the electrical connection between the PC and the DAC. Of course, electrical noise is also present in network players; it just is not tied to the PC.

That leaves timing. Digital audio depends on precise timing of each sample. Before async USB, the timing was problematic and jitter was a real issue. That is why I keep coming back to async USB: if it works as advertised, the jitter should be very low and independent of the source format. If someone can explain why the source format's processing influences the final timing in an async USB device, then I am all ears. I admit to not knowing the exact inner workings of the async code (very few people do). But if it works as advertised, then FLAC decoding should not affect its timing.

I agree that networked solutions can provide better isolation than direct connections. Remember, I am not talking about audio streams in general, but about the difference between FLAC and WAV files. I am not willing to say that computers routinely make computational errors when compressing and decompressing FLAC files and that WAV files are therefore better. If people think they hear a difference, that is up to them. But I have yet to hear a detailed explanation of why it happens that makes sense.

Time to get ready for Thanksgiving.
I just did an A/B comparison of FLAC and WAV versions of the same tracks in the same playlist, allowing me to switch between them. I heard no discernible difference.
When I first started integrating computer audio with my audio system, I used an analog stereo-to-RCA Y interconnect (AudioQuest) from my old laptop's headphone jack to an aux input on my preamp at the time. I recorded several CDs of music with RealPlayer. I have since ripped these back into my current music server setup and play them via the Squeezebox Touch along with the rest. The sound quality after all that is still quite good, but there are some noticeable deficiencies, mostly in dynamics, compared to other very good recordings. Still "hi-fi" I would say, and quite listenable (nothing offensive, mostly just a bit of omission). I'd say it's much better than most home cassette recordings I have heard over the years, but not current SOTA. So in many cases with computer audio I think the glass is still significantly more than half full, even in less-than-ideal circumstances, compared to past options, unless something is flat-out not working as designed.
Excellent comments by all, IMO, on an issue that by its nature is highly speculative.
11-23-11: Dtc
I agree that networked solutions can provide better isolation than direct connections. Remember, I am not talking about audio streams in general, but about the difference between FLAC and WAV files. I am not willing to say that computers routinely make computational errors when compressing and decompressing FLAC files and that WAV files are therefore better. If people think they hear a difference, that is up to them. But I have yet to hear a detailed explanation of why it happens that makes sense.
What about my hypothesis: that differences in the processing performed when playing the different formats result in differences in when and how often "SpeedStep" and related power conservation features are called into play (unless the user goes through the steps necessary to disable those features), in turn resulting in significant differences in computer-generated noise transients, and in turn in differences in jitter and/or noise coupling?

Even if an asynchronous USB DAC is being used, conceivably high frequency noise transients riding on the USB signal pair and/or the associated power and ground lines could couple past the DAC's input circuits to internal circuit points, where they could affect timing of the DAC chip itself, and/or subsequent analog circuit points. Galvanic isolation would help in that regard, as you noted, but it is not always employed, and who knows how effective it is in any given situation?

And then there is the possibility, perhaps more remote but conceivably still real, of differences in RFI resulting from those format-sensitive noise transients, the RFI perhaps bypassing all of the digital circuits involved and coupling onto sensitive analog points elsewhere in the system.
11-22-11: Mapman
But the format itself does not correlate with sound quality in general. Lots of other crap can go wrong, and chances are it goes wrong differently because of the different hardware and software processing scenarios for different formats. The devil is in the details, not in the source format itself. If processed properly, the results are the same. That can be a big if, though.
11-23-11: Mapman
Most general purpose computers have no business being connected directly to your high end audio gear! Think of this [network playback] as a form of isolation, similar to other steps you might take to isolate your rig from potential sources of noise.
Well said! Agreed 100%.

Best regards,
-- Al
Al - First, I agree that proper setup of the PC is necessary, including not letting the CPU performance fluctuate, as you point out. I should have added the caveat that the PC must be well set up, which I agree is not always the case.

I agree that there is potential for noise issues without isolation; that is why various isolation techniques are being used. Any extra noise should not affect the left/right data bits. The amount of processing used to decode a FLAC file is really minimal. I just do not see that minor extra processing having much effect. When I compare playing WAV files to playing FLAC files, I do not see any noticeable fluctuation in CPU usage. It must be there, but it is pretty minimal, at least on my system, where usage is typically under 5%. As I said, if the async USB is doing what it is advertised to do, then that noise should not affect the timing. If the noise interferes with the async circuits in the converter, then differences are certainly possible due to jitter. So fair enough, it is possible; I am just not sure there is enough noise from unpacking the FLAC to make an audible difference.

It seems that people using wireless solutions would not see the effects of this noise, unless it goes through the power cords.

I must say, it would be very difficult to actually measure any of these differences on the circuitry of the converters and DACs.

The 35-pound Hubbard squash is cooking, the French-Canadian meat pies (tourtière) are getting started, and the traditional homemade vegetable soup for Thanksgiving lunch is just getting going, although we made the stock yesterday. Starting to smell good.
DTC, thanks for the good response, with which I am in essential agreement.

I would just like to make sure it is clear to everyone that under my hypothesis cpu utilization which is low but non-zero may actually be WORSE with respect to noise generation than, for instance, 100% utilization would be. The noise transients I am envisioning are associated with the abrupt SWITCHING of cpu clock rate, and in some cases voltage as well, that unless disabled by the user will occur as processing tasks intermittently start and stop.

That switching involves LARGE changes in cpu current draw, which happen quickly, although I don't know exactly how quickly. Current changes that are both large and fast = large noise transients.

For those who may be interested, utilities such as the Windows-based program CPU-Z allow those changes in clock rate and voltage to be observed as they happen. It should be kept in mind that cpu current draw is highly dependent on clock rate.

Happy Thanksgiving to you and yours!

Best regards,
-- Al
"The amount of processing used to decode a FLAC file is really minimal. I just do not see that minor extra processing having much effect."

I'd say that is a true statement. Plus, on a general-purpose computer there are many processes and threads executing at any particular time, so it is hard for me to tie any adverse effects to that one process. Yet another reason to isolate the system from the general-purpose computer as much as possible, as an insurance policy at a minimum.
""Malfunction"? Are you saying that the 16/24 bit data for each sample is incorrect? That the data that comes out of a flac decode is different that the wav data?"

Yes, I believe it's incorrect when the decoding is done on the fly.

Steve N.
"The idea that the FLAC decoder produces wrong numbers is just not credible. People have repeatedly shown that the compression/decompression algorithms works."

Yes they have, but only statically, never in real time.

Electrical noise in the PC is not the cause of these differences, and jitter is not the cause either. Noise will not change significantly with different tracks or different formats. Jitter is a non-issue with Async USB.

Steve N.
Empirical Audio
EDorr - are you using a USB interface? What player S/W? Are you using a preamp? What ripper did you use?

All of these must be optimized in order to get stellar results and to hear these differences easily.

Steve N.
Empirical Audio
"The noise transients I am envisioning are associated with the abrupt SWITCHING of cpu clock rate, and in some cases voltage as well, that unless disabled by the user will occur as processing tasks intermittently start and stop."

Huh? There are certainly differences in power draw in the computer when highly CPU-intensive calculations are occurring, but as noted, we have not seen this in the CPU usage numbers.

Has anyone looked at the CPU usage graphs while running .wav compared to FLAC?
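If not, here is a trivial way to log it: a short Python sketch (assuming the psutil library) that samples overall CPU usage once per second. Run it once while playing a .wav track and again while playing the same track as FLAC:

    # Samples total CPU usage once per second; run during .wav and FLAC playback.
    # Assumes "pip install psutil".
    import psutil

    samples = [psutil.cpu_percent(interval=1) for _ in range(30)]  # 30 one-second samples
    print(f"avg {sum(samples) / len(samples):.1f}%  peak {max(samples):.1f}%")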

Steve N.
Empirical Audio
If FLAC, or any format, encoded and then decoded produces results different from the original, assuming the same resolution applied throughout, then there is either a bug or defect in the system somewhere, or a decision was made to sacrifice detail rather than implement a robust design. Like I said, bugs and defects can and do happen.

But if it is done correctly, encoding to and from a format at the same resolution should produce the same results. If that is not the case, please offer a more specific example of why.

Another reason to go with devices explicitly designed to retain sound quality in a high-end system. Real-time compression/decompression and encoding/decoding are easily achieved with readily available modern processors. It is not rocket science to do right. But it still has to be done right...
Steve,

I don't necessarily disagree with any of your comments, and I don't assert that the hypothesis I offered is anything more than speculative, but the "huh" in your last response tells me that my hypothesis may not have come across clearly.

The OP in the thread I linked to early in this thread was using a newly purchased Windows 7 laptop, and experiencing severe distortion, and also intermittent skipping, when outputting audio via USB into a DAC. The same setup had worked fine previously, with a different laptop running XP.

The problem was fixed when at my suggestion he changed the power management settings within the Windows 7 control panel such that the MINIMUM (as well as maximum) "processor state" was set to 100%, instead of the default 5%.

That change in effect disables SpeedStep, causing the cpu to run at its maximum speed all the time. As I say, it fixed the OP's problem with distorted USB audio. Therefore it seems to me at least a semi-plausible hypothesis that SpeedStep could, with some computers, some setups, and some DACs, cause noise and/or jitter issues that would be sensitive to processing requirements, and therefore conceivably to data format, particularly if those processing requirements load the cpu lightly and therefore intermittently.

Again, that is just a speculative hypothesis, but in the absence of evidence to the contrary one which seems to me to have at least some degree of plausibility.

Best regards,
-- Al
Al - Just another datapoint telling us that the CPU execution is having an effect on sound quality.

This is why I recommend only Mac, and only Amarra 2.3.x. This combo is simply killer. My PCs are not even close.

There are still SQ obstacles even with Mac, but they are minor IMO.

Some improvement can be had by using an SSD rather than a hard disk, for instance. A better power supply for a Mac Mini also seems to help; I don't know why... The mach2music.com machine works well if you don't want to fuss with it.

It's good enough stock, though, that I have not bothered with these tweaks yet. I use a 2009 Mini with Snow Leopard on it.

It's much more important that you use a good USB interface with good clocks.

Steve N.
Empirical Audio
Steve, I don't understand why an SSD could be better. Music data on a HD does not contain timing information. As for the required transfer rate, 16/44.1 stereo needs only 16 bits × 2 channels × 44,100 samples/s ≈ 1.4 Mb/s, a tiny fraction of even the slowest HD interface, and compressed files would transfer roughly twice as fast. The only possibility, IMHO, is the lower power draw of an SSD and therefore lower electrical noise in general.

Wireless devices like the AirPort Express (AE) not only separate the noisy computer from the audio system but also have their own codec and clock. I store music in ALAC since that is exactly the format used to transfer data to the AE (no conversion necessary). Jitter on the AE's digital (TosLink) output is a respectable 258 ps according to Stereophile's measurements, which also confirmed that it is bit-perfect. On top of that, my Benchmark DAC1 adds strong jitter suppression.