- 316 posts total
My systems have the right loudspeakers, placed perfectly in the room, driven by the right amp, with the right source and material. But it takes time to get this right. Change anything and you might be back to square one. Sometimes a few weeks of trying different things is required to get it right. Long-term evaluation is the only way to evaluate audio gear that I accept. The snapshot of ABX testing is not reliable, as most ABX test results show.
The only changes in my system that I've been able to detect "instantly" are changes in volume (of at least 1/2 dB) and fairly significant changes in tone.
But my system is good enough at this point that it's pretty rare I introduce something new that has this kind of effect. Most changes are more subtle and affect the emotional connection I get with the music as much as (or more than) anything I could easily describe in "audiophile terms".
However, once I've listened to a new component/cable/acoustic treatment/speaker position for a while, I can start to identify aspects of the sound that are different. Once I know what to listen for, it's usually not hard to hear the differences when I switch back.
But even in cases where the differences are not easily identifiable, if I'm enjoying the music more but don't understand why, that's really all that counts. And the enjoyment part is not always the case - there are times when I'll make a change that I think should be an improvement, but after a while I find myself wanting to turn the music off, even if I can't identify what's wrong.
These are the reasons I will never make a decision to change something in my system based on an ABX test (unless of course I could switch back and forth over the course of days, but this has never been practical).
"I volunteered for an ABX speaker wire test at Klipsch HQ ... My accuracy, as the test continued, began to deteriorate, as my ears desensitized to the source material and it all began to blur together ..." I've had similar experiences as an ABX subject. I still think blind testing has value, even though it's not likely to be of much use to audiophiles.
Here’s another scholarly, objective evaluation that explores the frailty of blind testing in audio (referenced in the Stereophile article):
" The conventional .05 significance level used to analyze typical listening tests can produce a much larger risk of concluding that audible differences are inaudible than concluding that inaudible differences are audible ... resulting in strong systematic bias against those who believe differences are clearly audible between well designed components that are spectrally equated and not overdriven."