Why Do So Many Audiophiles Reject Blind Testing Of Audio Components?


Because it was scientifically proven to be useless more than 60 years ago.

A speech scientist named Irwin Pollack conducted an experiment in the early 1950s: in a blind ABX listening test, he asked people to distinguish minimal pairs of consonants (like “r” and “l”, or “t” and “p”).

He found that listeners had no problem telling these consonants apart when they were played back immediately one after the other. But as he increased the pause between playbacks, the listeners’ ability to distinguish them diminished. Once the gap between the sounds exceeded 10-15 milliseconds (roughly 1/100th of a second), people had a very hard time telling obviously different sounds apart; their answers became statistically no better than random guessing.
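For readers who haven’t sat through one, the mechanics of an ABX trial are simple, and a tiny simulation (purely illustrative; the function and parameter names below are made up, not from any of the papers mentioned here) shows why a listener who can no longer hear the difference ends up at roughly 50% correct:

```python
import random

def run_abx_trials(listener_can_hear_difference, n_trials=20):
    """Minimal sketch of how an ABX run is scored.

    Each trial plays A, then B, then X, where X is randomly either A or B.
    The listener must say which one X was. If the listener cannot actually
    hear a difference, every answer is a guess, so long-run accuracy sits
    around 50% -- i.e. "statistically no better than a random guess".
    """
    correct = 0
    for _ in range(n_trials):
        x_is_a = random.choice([True, False])          # X drawn at random from {A, B}
        if listener_can_hear_difference:
            answer = x_is_a                            # a real difference is heard
        else:
            answer = random.choice([True, False])      # pure guess
        correct += int(answer == x_is_a)
    return correct

# A listener who cannot discriminate scores roughly n_trials / 2:
print(run_abx_trials(listener_can_hear_difference=False, n_trials=1000))
```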

If you are interested in the science of these things, here’s a nice summary:

Categorical and noncategorical modes of speech perception along the voicing continuum

Since then, the experiment has been repeated many times (the last major update was in 2000: “Reliability of a dichotic consonant-vowel pairs task using an ABX procedure”).

So reliably recognizing the difference between similar sounds in an ABX environment is impossible: with a playback gap of just 15 ms, the listener’s guess becomes no better than random. This happens because humans don't have any meaningful waveform memory. We cannot recall the sound itself exactly, and instead rely on various mental models for comparison. It takes time and effort to develop these models, which makes us really bad at playing the "spot the sonic difference right here and now" game.

Also, please note that the experimenters were using the sounds of speech. Human ears have significantly better resolution and discrimination in the speech spectrum. If a comparison method does not work well with speech, it will not work at all with music.

So the “double blind testing” crowd is worshiping an ABX protocol that was scientifically proven more than 60 years ago to be completely unsuitable for telling similar sounds apart. And they insist all the other methods are “unscientific.”

The irony seems to be lost on them.

Why do so many audiophiles reject blind testing of audio components? - Quora
artemus_5
My only complaint about ABX is that, if the source material does not change, ear fatigue sets in VERY, very quickly.  

I volunteered for an ABX speaker wire test at Klipsch HQ back in '06.  The first five rounds, I was perfect.  5 for 5 identifying the more expensive wire versus the lamp cord.  

As the test continued, my accuracy began to deteriorate: my ears desensitized to the source material and it all began to blur together as I heard the same small segment of the same musical passage over and over again.  I finished the test 13/20, so I barely did better than a coin flip on the last 15 (8 of 15).  
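As a back-of-the-envelope check on those numbers (my own arithmetic, not part of the Klipsch test), a one-sided binomial test against pure guessing shows how little 13/20 proves on its own, and that the implied 8/15 on the later trials is indistinguishable from a coin flip:

```python
from scipy.stats import binomtest  # requires SciPy >= 1.7

# 13 correct out of 20 trials, against the 50% expected from guessing
print(binomtest(13, n=20, p=0.5, alternative="greater").pvalue)  # ~0.13

# 5/5 on the first rounds implies 8 correct on the remaining 15
print(binomtest(8, n=15, p=0.5, alternative="greater").pvalue)   # ~0.50
```

In other words, the perfect first five rounds carry essentially all of the signal in that result.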

Rotating the source material, and ensuring that it is familiar to the listener, can seriously mitigate ear fatigue and make the outcomes more reliable. 

One of the things not discussed in any of these ABX papers is the subjects.  The average person does not care about music nearly as much as, for example, the folks on this forum.  Hell, the average person thinks Bose systems sound great.  If those are your test subjects, of course ABX isn't going to be a useful test when the differences they're being asked to find are extremely subtle. 
I have no issues with how others choose their components...I would assume they don't care how I choose mine...
My systems have the right loudspeakers, placed perfectly in the room, run by the right amp, with the right source and material. But it takes time to get this right. Change anything and you might be back to square one. Sometimes it takes a few weeks of trying different things to get it right. Long-term evaluation is the only accepted way to evaluate audio gear. The snapshot of an ABX test is not reliable, as most ABX results show.  
The only changes in my system that I've been able to detect "instantly" are changes in volume (of at least 1/2 dB) and fairly significant changes in tone.

But my system is good enough at this point that it's pretty rare I introduce something new that has this kind of effect. Most changes are more subtle and affect the emotional connection I get with the music as much (or more) than easily identified "audiophile terms".

However, once I've listened to a new component/cable/acoustic treatment/speaker position for a while, I can start to identify aspects of the sound that are different. Once I know what to listen for, it's usually not hard to hear the differences when I switch back. 

But even in cases where the differences are not easily identifiable, if I'm enjoying the music more but don't understand why, that's really all that counts. And the change isn't always for the better - there are times when I'll make a change that I think should be an improvement, but after a while I find myself wanting to turn the music off, even if I can't identify what's wrong. 

These are the reasons I will never make a decision to change something in my system based on an ABX test (unless of course I could switch back and forth over the course of days, but this has never been practical). 
Long-term evaluation is the only accepted way to evaluate audio gear. The snapshot of an ABX test is not reliable, as most ABX results show.
 


Two falsehoods in two sentences. Care to try for 3?