Why is Double Blind Testing Controversial?


I noticed that the concept of "double blind testing" of cables is a controversial topic. Why? A/B switching seems like the only definitive way of determining how one cable compares to another, or to any other component such as speakers, for example. While A/B testing (and particularly double blind testing, where you don't know which cable is A or B) does not show the long-term listenability of a cable or other component, it does show the specific and immediate differences between the two. It shows whether differences exist at all, how slight they are, and how much they matter. It also seems obvious that without knowing which cable you are listening to, you eliminate bias and preconceived notions. So, why is this a controversial notion?
moto_man
Drubin's right about double-blind: it means that nobody in the room knows which is which. And researchers use it because they've learned that there are all sorts of ways that someone can subconsciously indicate which is which to whoever is actually doing the comparing. If you want to be absolutely sure that there's no outside influence (intentional or not) and that you're making your decisions based only on the sound, double-blind is essential.

That said, the main reason DBTs are controversial is that they tend to produce results that are at odds with the received wisdom of audiophilia.
Blind testing serves no useful purpose. It presumes that by switching cables in and out of ONE system, you will uncover something fundamental about the cables. I think not.
I think that double blind testing is essential. I have actually fooled myself: upon receiving something new in the mail, I immediately hook it up and am "astounded" by how much better it sounds than what it replaced. After a prolonged listen, and especially if I have my wife switch the component in and out (which is not double blind, but single blind), I find myself scoring about 50/50, which means that I can't tell the difference.

When we purchase some expensive tweak, we so badly want not to have lost our money that we justify it with things like "less listener fatigue," or the hope that once long-term break-in has taken place it will fall into place. I've seen cables described as "a night and day difference." Well, while you're at work, have someone switch one of them without telling you which one, or even whether anything has been changed at all. If it's night and day, you'll spot it immediately.
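The "about 50/50" outcome described above has a simple statistical reading: if your blind-trial score is what pure guessing predicts, you have no evidence you can hear a difference. As a rough sketch (the trial counts here are illustrative, not from the post), an exact one-sided binomial test shows how many correct calls you would need before chance becomes an unlikely explanation:

```python
# Sketch: given n blind A/B trials, how surprising is a given number of
# correct identifications if the listener is purely guessing (p = 0.5)?
# Exact one-sided binomial tail probability, standard library only.
from math import comb

def p_value_at_least(correct: int, trials: int) -> float:
    """Probability of scoring `correct` or more by pure guessing."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# 8 of 16 is exactly the "about 50/50" result: entirely consistent with
# guessing. 14 of 16 would be hard to explain by chance alone.
print(f"8/16  -> p = {p_value_at_least(8, 16):.3f}")   # ~0.598
print(f"14/16 -> p = {p_value_at_least(14, 16):.4f}")  # ~0.0021
```

In other words, a "night and day" difference should produce a lopsided score within a handful of trials; a score hovering near half is the signature of guessing.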
It's amazing that anyone would find a totally objective and neutral testing method controversial.

Isn't it ironic that an uncolored and totally neutral audio system is the primary goal of audiophiles? Why are these qualities good for audio systems, but not for the methods used to test them?
The main reason I am not a fan of A/B testing methods is that they use only short bursts of music. I find I need to live with a new component for at least a few days to get its measure. What can sound "right" in a brief listen can still fail to convey the emotion in the music, and that judgement requires more extended listening, at least for these ears.

A straight A/B test will allow you to identify obvious differences, for sure - such as "A has more bass extension than B" - but that does not mean A is better than B when musical enjoyment is the goal.

I find that A/B testing tends to obscure many musically meaningful differences. You may decide I am deluded about these differences, and that all differences can be detected in a brief listen - there we will have to agree to disagree.