I have nothing against double blind testing. The only thing I question is the listening time period. Initial impressions may change over a longer period of time. Two weeks would probably be more appropriate than a couple of hours. My bet is that listening impressions would be dramatically different. In any event it would make an interesting test.
My take on the article was that J.A. was being deliberately obtuse because he has no intention of ever doing db tests. It also seems unreasonable to argue that any difference which can not be detected unless you know which component you are listening to is a sonic difference.
I think the obvious problem that J.A. preferred to ignore is that they just did a lousy job with the db test.
If 'toe tapping' or emotional response is the right measurement criterion, and if it takes more than a quick A-B switch to gauge it, that's fine. You just have to set up the test to measure the right criterion and allow time for the measurements to occur.
I.e., it is not that there is any problem with db tests; it is that a poorly designed test will not give accurate information. IMHO.
Is it just me, or is the word 'synergy' the magic word that enables anyone to justify ANY component, even if a blind test yields results that suggest the piece of equipment wasn't worth the money?
I know that everything has to come together to get your toes tapping, but it seems the word 'synergy' is often used as a safe word...and then all opinions and bets are off.
It reminds me of the not-so-distant past, when pornography was being defined and the conclusion was "I can't define it, but I know it when I see it." IMO that is way too vague. I guess it all boils down to each person and the sound they hear, to each his own, but as long as the word synergy is used, a lot of folks should refrain from putting down another's choices (tubes vs. solid state, digital vs. vinyl, and so on), because synergy wins every time.
What seems to be beyond audiophiles is that the only requirement of blind testing is that the participant has no information but the presented experience. Those who think blind testing is conceptually flawed have to answer a question: If what is desired is an unbiased review of sound quality, how does product information promote that?
Since "synergy" (I hate that word) is a factor in any stereo/component review, why bring it up as a factor for blind testing? The same situation exists with time. How is time a factor for blind testing but not for "sighted" testing?
I hate to ring this bell, but the drugs everyone takes...blind testing. Like a million psychology experiments...blind testing. Scientists made eliminating bias work for them - audiophiles haven't, but still some think they know better.
I am somewhat unhappy that I spoke of J.A. in my post as he brings along a lot of baggage. Many of you who have posted above seem sincerely to believe that better conceived db tests would yield recommendations of some components or cables. My reading of what I have seen posted is that many of those advocating db testing expect a conclusion that says there are no differences and thus buy the cheapest. This seems to have been J.A.'s experience in the 3 amp comparison, but in my limited experience such comparisons with db do yield a recommendation, as in the Bozak instance.
Fundamentally, I have no confidence in same/different comparisons in db with too small a sample and with too much dependence on statistical significance tests. A conclusion that all amps are the same or that all cables are the same is just too at odds with my experience to be acceptable. Perhaps when you randomly assign some to the drug and others to the placebo, double blind testing makes research design sense. But I do not concede that db testing is the fundamental essence of the scientific method. Experimentally, a control group design makes sense, but double blind testing seldom is necessary. Often it takes great originality to cope with subjects knowing they are being experimented on. The Hawthorne studies at Western Electric are the best example of this.
I also really wonder how A, B, and C comparisons of amps, etc. using double blind would be done and reported. How would the random sample be drawn, and where would they assemble? And would we need to assess the relationship between more qualified listeners and others?
There are some reviewers whose opinions I am responsive to as they have previously said things consistent with what I hear. With double blind testing there would be no reviewers I presume.
I agree "synergy" is an overused word, but for Blind Testing, what would be your reference amp, preamp, source, speakers, wire, etc.? Would the reference be what the manufacturer prefers, you prefer, or I prefer? In the world of science there are set standards, but what are the set standards in the Audio world? We can measure dB, distortion, etc., but in the Audio world there is no perfect standard for what sounds best to you or me. An HONEST reviewer would be much appreciated in this dishonest world we live in.
Tbg: The main question of your post seems to be, Do objectivists like Arny Krueger extol blind tests only because they like the results? The short answer is no. Arny K. and his ilk did not invent blind tests as a weapon to use against the high-end industry. In fact, they did not invent blind tests at all. Blind listening tests were developed much earlier by perceptual psychologists, and they are the basis for a huge proportion of what we know about human hearing perception (what frequencies we can hear, how quiet a sound we can hear, how masking works to hide some sounds when we hear others, etc.). Blind tests aren’t the only source of our knowledge about those things, but they are an essential part of the research base in the field.
Folks in the audio field, like Arny, started using blind tests because of a paradox: Measurements suggested that many components should be sonically indistinguishable, and yet audio buffs claimed to be able to distinguish them. At the time, no one really knew what the results of those first blind tests would be. They might have confirmed the differences, which would have forced us to look more closely at what we were measuring, and to find some explanation for those confirmed differences. As it turned out, the blind tests confirmed what perceptual psychologists would have predicted: When two components measured differently enough, listeners could distinguish them in blind tests; when the measurements were more similar (typically, when neither measured above known thresholds of human perception), listeners could not distinguish them.
Do all blind tests result in a “no difference” conclusion? Of course not, and you’ve cited a couple of examples yourself. Your preamp test, for one. (Even hardcore objectivists agree that many preamps can sound different.) Arny’s PCABX amp tests, for another. (Note, however, that Arny typically gets these positive results by running the signal through an amp multiple times, in order to exaggerate the sonic signature of the amp; I don’t believe he gets positive results when he compares two decently made solid state amps directly, as most of us would do.)
Your comments on statistical significance and random samples miss an important point. If you want to know what an entire population can hear, then you must use a random sample of that population in your test. But that’s not what we want to know here. What we want to know here is, can anybody at all hear these differences? For that, all we need to do is find a single test subject who can hear a difference consistently (i.e., with statistical significance). Find ANYBODY who can tell two amps apart 15 times out of 20 in a blind test (same-different, ABX, whatever), and I’ll agree that those two amps are sonically distinguishable.
Which leads to a final point. You say you are a scientist. In that case, you know that quibbling with other scientists’ evidence does not advance the field one iota. What advances the field is producing your own evidence—evidence that meets the test of reliability and repeatability, something a sighted listening comparison can never do. That’s why objectivists are always asking, Where’s your evidence? It’s not about who’s right. It’s about getting at better understanding. If you have some real evidence, then you will add to our knowledge.
Pabelson, you added greatly to my historic understanding of double blind testing. Can you please give citations for the instances where same/different tests yield differences? I think something is fundamentally wrong with the research design unless there are such instances, including just single run throughs of the signal.
I am quite uncomfortable with the idea that finding a single person who can hear differences 15 out of 20 times would be convincing. I do not know how you can set a level here. Why 15 out of 20?
All of the instances where I participated in same/different db tests were too quick, and there was too high a probability of the respondent guessing. I also felt that the testing was unrepresentative of the listening experience. By contrast, the A, B, C, etc. comparison using double blind was more analogous to the listening experience. As I said, because of this, I would be interested in such tests. Here I would again suggest the hypothesis that could be tested as to whether there were differences among those with long experience in working with music.
Advancing the field. Yes, that would be nice. I have seen quality components, IMHO, be ignored because of name-brand manufacturers' cachet. I have little question that the field has advanced greatly during the 40 years that I have been involved, especially digital. Someone has suggested that manufacturers use double blind testing all the time, but in my experience, they do not. There is also the voicing of components by such notable designers as Kondo, etc. I presently am overwhelmed by the Shindo Labs 301 turntable. All of this is without the aid of double blind testing.
I have no doubt that some proponents of dbt are sincere, just as I am sure that the overwhelming number of instances where the small sample is unable to hear a difference leads others to embrace db testing because it fits their preconceived judgments, especially if they cannot afford more expensive gear.
I also still say that reviews would be very curious with dbt. Would you start with 100 amps being compared and then each month add another? Would anyone buy such a magazine or use it for judging what they will buy? Would manufacturers concede that product D is indeed better and withdraw their amps?
Tbg: If these tests didn't yield positive results, they'd be useless for research. Just because they don't yield positive results when you want them to doesn't make them invalid. A good example of a mix of positive and negative tests is the ABX cable tests that Stereo Review did more than 20 years ago. Of the 6 comparisons they did, 5 had positive results; only 1 was negative. (The one negative, however, used similar cables and had subjects listen to music rather than noise. In most of the other 5 cases, the measured differences were much greater; in one, they listened to noise rather than music--it's easier to hear level and frequency differences with full-spectrum noise than with music.)
I presumed you knew statistics. 15 out of 20 is the 95% confidence level, which means that we can be 95% sure that the listener really heard a difference, and wasn't just guessing lucky. The 95% threshold is a reasonable one in this case.
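The arithmetic behind that 15-out-of-20 threshold is easy to check with a few lines of Python (a sketch; the function name and the one-sided binomial framing are mine, assuming a pure-guessing chance of 0.5 per trial):

```python
from math import comb

def guessing_p_value(successes, trials, p_chance=0.5):
    """Probability of getting at least `successes` right out of
    `trials` by pure guessing (one-sided binomial tail)."""
    return sum(comb(trials, k) * p_chance**k * (1 - p_chance)**(trials - k)
               for k in range(successes, trials + 1))

print(round(guessing_p_value(15, 20), 4))  # ~0.0207: below the 0.05 cutoff
print(round(guessing_p_value(14, 20), 4))  # ~0.0577: just misses the cutoff
```

So 15/20 is in fact the smallest score on 20 trials whose guessing probability falls below 5%; 14/20 falls just short of the conventional bar.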
I suspect the tests you did involved multiple listeners listening at the same time. It's better to use one subject at a time, and to let the subject control the switching. But the Stereo Review tests used multiple listeners at once, and got plenty of positive results. Subjectivists often object that ABX tests use quick switching between components, but there's solid research showing that this approach actually works better--it's easier to hear differences when you can switch immediately between the two. I know subjectivist audiophiles consider that heresy, but the research is pretty clear.
Some manufacturers use DBTs, others don't. It makes no sense for components where differences are undeniable (microphones, turntables, cartridges, and speakers are good examples). As for "voicing" of amps and cables, people who claim to do that without DBTs are either fooling themselves or trying to fool you.
Almost nobody has a preconceived notion that things sound the same. Many objectivists used to be subjectivists till they started looking into things, and perhaps did some testing of their own.
As for reviews, a high-end magazine that used DBTs couldn't survive. Advertisers would pull out, and readers would revolt. Better to give the people what they want.
Pabelson, I must admit that I had not known of the Stereo Review's db tests. Out of curiosity I will have to look them up. Are there others?
I teach statistics. Apart from making judgments about a population from a random sample, the concept of a confidence interval has no meaning. We can never conclude "...that the listener really heard a difference, and wasn't just guessing lucky." With a random sample of sufficient size, you can test at the .05 significance level whether your experimental group's mean response was right 15 out of 20 times. This is why I ask about this number in the absence of a random sample. 15 out of 20 may impress you, but it has no basis in statistics.
I also do not understand the notion that db testing is unneeded for, "components where differences are undeniable." Undeniable by whom?
I grow less convinced that db testing has any potential for shedding light in the evaluation of stereo equipment.
Tvad, were there good db testing procedures, I would think we would have to assess whether some listeners were better evaluators than others. As I said earlier, I still think review magazines would be boring and that most audiophiles would ignore the results, if any were positive.
Tvad, I find your last post preposterous. You state that you only care about closing your eyes and getting lost in the notes and about tapping your toes to the rhythm. If that's what our hobby is about then anyone with an iPod on the subway is an audiophile. I have a friend who regularly gets down like that with his Bose Wave player. I even suspect the people in the deeply tinted window SUV with the fancy wheels that was absolutely booming "urban youth music" were lost in the music and tapping their toes. I should have rolled down my window and said hello to my audiophile brothers.
Your espousal of unfettered radical subjectivism is precisely where a large part of our hobby has gone wrong. By dismissing any pretense of fidelity to the source material you have made all systems effectively equal because someone somewhere will think any system sounds great. Beyond what makes someone feel good there actually are objective standards for judging whether a piece of equipment faithfully reproduces an input signal. We can argue about exactly what these standards are, but it would be foolish to ignore them.
BTW, if getting lost/toe tapping is a high priority, a bottle of good Scotch is a more effective system upgrade than any cable change.
Onhwy61, your comments read like a racist expressing his disdain for the infusion of impurities into the master plan of audiophilia. I don't know if you intended them to be so exclusive, but they struck me that way. It would be fair enough to say that you do the hobby one way, and allow others to walk their own paths. But if I'm hearing you accurately, then I'll personally opt for a scotch with my music. Of course, not while I'm out bumpin' with the brothers in my SUV.
I mentioned this in another thread not too long ago. In blind taste testing, Pepsi usually wins. When the brands are known, Coke almost always wins. I think this means that Coke comes with a plethora of baggage (at least more than Pepsi) that affects objectivity to the extent that it can affect our perceptions. Can this be true of cable testing, or anything else for that matter? The odd thing is that most people do prefer Coke, because we don't buy it in a blind test. To me at least, there are significant implications for audio here. If I know I'm listening to a Valhalla, does it change the perception I would have had if I thought it was a Cardas, or if I didn't know the brand at all?

In court, the least reliable evidence is frequently that of eyewitnesses. Even though a group of people witness the same event, their perceptions of it usually vary. I think that objectivity can be extremely difficult to achieve because we have so many more factors wired in.

Another instance I find humorous is when an audio component tests one way with sophisticated instruments (admittedly this can be less than objective, depending on the application and methodology used and the biases of the human tester) and the human perception is directly opposite. This seems to happen more with tube equipment for some reason. Then there's the school of thought that the simple fact that something is being tested can affect the outcome of the test. Just some thoughts.
Tvad, I applaud your dedication to this hobby, and I truly hope you derive an enormous degree of personal satisfaction from being a practicing audiophile, but I still strongly disagree with you on a key issue. Tapping your toes and grooving to the music is great, but even non-audiophiles tap their toes. As I see it, audiophiles are about listening to music reproduced with a high degree of fidelity to the source material. In your 6/13 post you state that you are not interested in fidelity, only whether it makes you feel good. It's real easy to put together a system that sounds good. Pump up the bass, give it a big syrupy midrange, and roll off the high end, and even well-schooled audiophiles will be tempted. It's even easier putting together an "accurate" system with vanishingly low distortion and ruler-flat frequency response. What makes our hobby challenging is putting together an accurate system that also sounds good. Just tapping your toes won't get you there.
Boa2, how do you know that I'm not one of the brothers in the SUV?
I even suspect the people in the deeply tinted window SUV with the fancy wheels that was absolutely booming "urban youth music" were lost in the music and tapping their toes. I should have rolled down my window and said hello to my audiophile brothers.
When I read this, I was actually picturing somewhere in suburbia! :-)
"I said I don't believe I can reproduce the live event. Indeed, this is what I believe." Of course you can't. You can only reproduce what the recording process produced and stored on the medium used...
Add to that, the imperfections & losses due to the recording process, the imperfections & losses due to the storage medium and the imperfections & losses due to the reproduction system.
In all of our rantings, we are addressing the last of these (the repro system)
At its best, a reproduction system aims at coming close to the original, i.e. what's on the RECORDED medium (not the live event); this seems to me a reasonable target for us audiophiles.
For the live event, you go to the concert hall.
Tbg: For someone who "teaches statistics," you express a rather narrow perspective on the field. Think about how you would use statistics to determine whether a coin is fair. (You do agree that you can use statistics to do this, don't you?) The problem of determining whether a certain subject can hear a difference between two components is precisely the same. Do his results suggest that he was just guessing which was which (the equivalent of flipping a fair coin), or that he could indeed hear a difference (flipping an unbalanced coin)? At any rate, it really doesn't matter whether you think statistics is applicable here. People who actually study hearing and do listening tests use statistics for this purpose every day of the week.
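The coin analogy can be made concrete with a quick simulation: flip a fair coin 20 times per "test", repeat many times, and count how often pure chance produces 15 or more heads. (A sketch; the run count and seed are arbitrary choices of mine, not anything from the thread.)

```python
import random

random.seed(42)  # fixed seed so the run is reproducible

RUNS, FLIPS, CUTOFF = 100_000, 20, 15
# Count simulated "listeners" who hit 15/20 by guessing alone.
lucky = sum(
    1 for _ in range(RUNS)
    if sum(random.random() < 0.5 for _ in range(FLIPS)) >= CUTOFF
)
print(lucky / RUNS)  # roughly 0.02: a fair coin rarely looks this unfair
```

In other words, a guesser passes the 15/20 bar only about once in fifty attempts, which is exactly why that score is taken as evidence of a real audible difference.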
I would define undeniable differences as those for which measurements would lead us to predict such differences. If there are measured characteristics of two components that are above the known threshold of human detection, then there's no real need to do a DBT to determine whether they sound different. For example, if one amp has a THD of 0.1%, and the other is at 3%, we can safely assume that they are audibly different. Transducers typically measure differently enough that we can assume they sound different. Ditto many (but not all) tube amps. Solid state amps, unless they are underpowered for the speakers they are driving or have a non-flat frequency response (perhaps due to an impedance mismatch) generally do not.
Before I get tagged with the "measurements are everything" slur, let me say that these measurements can only predict WHETHER two components will sound different. If they do sound different, the measurements cannot tell us (at least not very well) which you will prefer, or even in what ways they will sound different to you.
For more info on DBTs, see the ABX home page, mirrored here:
Pabelson, perhaps we just have a language difference. I would certainly concede that for a coin to come up heads 15 out of 20 tosses is improbable. This probability is at the root of statistical inference, which, of course, seeks to assess support for a hypothesis about the population from a sample. There is always the possibility that the sample is unrepresentative and that we might wrongly reject the null hypothesis when it is actually true.
I just think the proper hypothesis should be that a sample of people can hear a difference between cables or amps. The null hypothesis is that they cannot.
It would be very difficult with a sample of one to achieve statistical significance, so you are apt to accept the null hypothesis. However, a sample of 25,000 would assure you statistical significance.
I am only concerned that the choice of the sample size may be determined by what the researcher's intended finding might be. I think it is a far more interesting hypothesis to suggest that those with "better ears" would do better. I don't think most audiophile would be convinced or should be convinced that all amps or wires sound the same.
As I recall, statistics can be very useful.
Stat 101....Intro to Statistics
Stat 102....Statistic Applications (How to fool others using statistics).
Stat 201....Advanced Statistics (How to fool yourself using statistics).
Just kidding. In my work with ballistic missile inertial guidance systems, such as the estimation of CEP (circular error probable) based on a couple of hundred modeled error sources, I have been exposed to the most arcane forms of statistics. One must always remain aware of the risk of fooling yourself, and be able to laugh about it.
I just think the proper hypothesis should be that a sample of people can hear a difference between cables or amps.
Well, that's one possible hypothesis. Another possible hypothesis is that one particular individual can hear a difference. That's the equivalent of testing the fairness of one particular coin. Note that the sample size isn't one. It's the number of listening trials/coin flips.
I am only concerned that the choice of the sample size may be determined by what the researcher's intended finding might be.
The choice of sample size isn't what's critical here. The statistical significance is. Granted, larger samples reduce the possibility of false negatives, but it's not as if there have never ever been any ABX tests with large sample sizes. The Stereo Review cables test had a sample size of 165. The possibility of a false negative is very low with a sample that big. (Since you teach statistics, I'll let you do the math.)
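Both claims here (that significance is what matters, and that a big sample protects against false negatives) can be checked directly. A sketch, assuming 165 independent trials, a one-sided 0.05 criterion, and a hypothetical listener who is right 60% of the time; the helper name is mine:

```python
from math import comb

def tail(k, n, p):
    """P(X >= k) for a binomial(n, p) random variable."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

N = 165
# Smallest score that beats pure guessing at the 0.05 level:
threshold = next(k for k in range(N + 1) if tail(k, N, 0.5) < 0.05)
# Power: chance that a 60%-correct listener reaches that score.
power = tail(threshold, N, 0.6)
print(threshold, round(power, 2))
```

Even a listener with only a modest 60% hit rate would clear the significance bar most of the time with 165 trials, so a negative result at that scale is hard to blame on sample size alone.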
And if you think the reason these tests come up negative so often is sample size, you as a "scientist" ought to know how to respond: Do your own experiment. Complaining about other people's data isn't science.
I think it is a far more interesting hypothesis to suggest that those with "better ears" would do better.
Then test it. The SR panel was a pretty audio-savvy bunch, as I recall.
I don't think most audiophile would be convinced or should be convinced that all amps or wires sound the same.
Are you saying they're all close-minded?
Pabelson, frankly I don't care enough about this question to expend the time necessary to do such work. I am more concerned with finding a great loudspeaker.
I just do not understand the expectation that all individuals are the same in these tests. And it is not statistical significance but improbability that you are talking about.
How do you know when you wrongfully reject the null hypothesis?
Tbg: If all you care about is finding a great speaker, why'd you start this thread???
All individuals are not the same. I never said they were. I think you're hung up on the idea of a hypothesis about what a majority of people can hear (in which case it would be necessary to test a random sample of all people). But the more common question in audio is, can anybody hear it? To answer that question in the affirmative, all you have to do is find *one* person who can hear a difference between two components. That's why testing a single individual can be appropriate. (Just remember that, in a single-person test, the null hypothesis relates to that single person; if he flunks, you can't conclude anything about anyone else.)
Here's a good example of the kind of testing that researchers do:
Note that one of their 36 subjects got a statistically significant result. In a panel that large, this can easily happen by chance. To check this, they tested that individual again, and she got a random result, suggesting that her initial success was merely a statistical fluke.
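That one-in-36 outcome is exactly what multiple comparisons predict. A sketch, assuming each subject's individual test has a 5% false-positive rate and the 36 tests are independent (both assumptions are mine):

```python
subjects = 36
alpha = 0.05  # assumed per-subject false-positive rate

# Chance that at least one purely guessing subject "passes" their test:
p_at_least_one = 1 - (1 - alpha) ** subjects
print(round(p_at_least_one, 3))  # ~0.842
```

With odds like that, one apparent "golden ear" in a 36-person panel is the expected outcome of pure chance, which is why retesting the apparent winner, as the researchers did, is the right move.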
I started the thread because I am curious about those who doubt others' abilities to hear the benefits of some components and wires. As many proponents can point to few examples of DBT and nevertheless seem confident of the results, I assumed that they saw DBT as endorsing their personal beliefs. Furthermore, my personal experiences with DBT same/different setups have been that I too could not be confident that my responses were anything other than random. But my experiences with single blind tests comparing several components have been more favorable, with a substantial consensus on the surprising best component.
Speakers have always been a problem for me. Some are better in some regards and others in other areas. I suspect that within the limits of what we can afford, we all pick our poison.
I did read your referenced article and found it very interesting and troubling, as I use a Murata super tweeter, which only comes in at 15kHz and extends to 100kHz. I am 66 and have only limited hearing above 15kHz, yet in a demonstration I heard the benefits of the super tweeter, even though there was little sound and no music coming from the super tweeter when the main speakers were turned off. Everyone else in the demonstration heard the difference also. I know that the common response by advocates of DBT is that we were influenced by knowing when they were on.
I must admit that I am confident of what I heard and troubled by my not hearing a difference in a DBT. Were this my area of research rather than my hobby, I would no doubt focus on the task at hand for subjects in DBTs as well as the testing apparatus. My confidence is still in human ears, and I suspect that this is where we differ. I guess it is a question of the validity of the test.
For a sincere DBTer, such as yourself, I am not being truculent. For those embracing DBT as simple self-endorsement, I am dismissive.
For those embracing DBT as simple self-endorsement, I am dismissive.
No objectivists of my acquaintance (and I am acquainted with some fairly prominent ones), "embrace DBT as simple self-endorsement." A number of them, myself included, were subjectivists until we heard something that just didn't make sense to us. I know of one guy (whose name you would recognize) who was switching between two components and had zeroed in on what he was sure were the audible differences between them. Then he discovered that the switch wasn't working! He'd been listening to the same component the whole time, and the differences, while quite "obvious," turned out to be imaginary. He compared them again, blind this time, and couldn't hear a difference. He stopped doing sighted comparisons that day.
Research psychologists did not adopt blind testing because it gave them the results they wanted. They adopted it because it was the only way to get reliable results at all. Audio experts who rely on blind testing do so for the same reason.
Final thought: No one has to use blind comparisons if they don't want to. (Truth be told, while I've done a few, I certainly don't use them when I'm shopping for audio equipment.) Maybe that supertweeter really doesn't make a difference, but if you think it does, and you're happy with it, that's just fine. Just don't get into a technical argument with those guys from NHK!
Tbg...I also can "hear" the effect of tweeters/supertweeters operating well above the measured bandwidth of my 67-year-old ears. (I first noticed this general effect, at higher frequencies, when I was much younger.) My explanation is that the ear senses RATE-of-change of pressure, as well as change of pressure. The high rate of change of a 20KHz signal can be sensed, even if the smoothly changing pressure of a 14KHz signal is inaudible. The experience we share is common. Have you heard any other explanation?
"the ear senses RATE-of-change of pressure (...) Have you heard any other explanation?" Well, 1) about 20 yrs ago a French prof (forgot the name) claimed findings that the bones contribute to our perception of very high frequencies. 2) There seems to be a case for the interaural mechanism working together -- not ONE ear alone, but both being excited.
OTOH, it's also been established that the audibility of PURE tones diminishes with age in the higher frequencies. So here, we're talking about "sound in context": i.e. say, harmonics of an instrument -- where the fundamental & certain harmonics are well within our pure tone hearing range and some of the related info is outside an individual's "official" (pure tone) audible range.
The strange thing is that our ears work as a low pass; so, some people speculate that it's the COMMON interaural excitation that does the trick...
For this to happen (let's ignore the possible contribution of the bone structure for now), wouldn't it mean that our interaural "mechanism" must be situated in the DIRECT path (sweet spot) of those frequencies (remember, our acuity falls dramatically, ~20-30dB, up there)? If so, then moving our head slightly would eliminate this perception.
So, let's assume a super high frequency transducer with excellent dispersion characteristics and thereby eliminate the need for that narrow sweet spot (a Murata is quite good, btw).
It is my contention (but I have no concrete evidence) that three things are happening in conjunction:
a) the high frequency sound is loud enough to overcome our reduced acuity up high (at -60db perception our ear would basically reject it)
b) the sounds in our "official" audible frequency range are rendered more palpable (for want of a better word) because the super transducer's distortion points (upper resonance) have moved very far away (it's ~100kHz for a Murata) -- hence the "perception" of positive effects. This still relates to our "official" range of hearing.
c) there is a combined excitation of aural and other, structural, mechanisms that indicates the presence of high frequencies -- which we cannot, however, qualify or explain (our hearing is a defense and guidance mechanism geared towards perceiving and locating).
Even at c) there is a dilemma: in a small experiment in France, some subjects were asked to put one ear close to a super tweet and declare whether they perceived anything. Inconclusive (some did, some didn't, no pattern). BTW, I did a similar thing and did perceive energy or the lack of it -- with some DELAY, however, when the tweet STOPPED producing sound -- joining Eldartford's idea.
Subjects were then asked to move away from the transducer & listen normally (stereo), just by casually sitting on a couch in front of the speakers as one would do at home. Everyone "heard" the supertweet playing. Amazingly, only the s-tweet was connected (at 16kHz -- very high up for sound out of other context).
I find this fascinating.
Gregm, I do not know how many out there experienced the Murata demonstration at CES 2004, but it was a great deal like what you describe. Initially, the speakers played a passage. Then the super tweeters were added and the passage replayed. The ten people in the audience all expressed a preference for the use of the super tweeters. There was much conversation, but ultimately someone asked to hear the super tweeter only. The demonstrator said we already were hearing it.
When we all refocused on the sound, all that we could hear was an occasional spit, tiz, snap. There was no music at all. The Muratas come in at 15kHz. I left and dragged several friends back for a second demonstration with exactly the same results.
Would there be any benefit to having this done single or double blind? I don't think so. Do we need to have an understanding of how we hear such high frequency information, without which it might be a placebo or Hawthorne effect? I don't.
But this experience is quite at odds with the article that Pabelson cited. What is going on? I certainly don't know, save to suggest that there is a difference in what is being asked of subjects in the two tests.
I teach a course on the philosophy of color and color perception. One of the things I do is show color chips that are pairwise indistinguishable. I show a green chip together with another green chip that is indistinguishable. Then, I take away the first chip and show a third green chip that is indistinguishable from the second. And then I toss the second chip and introduce a fourth chip, indistinguishable from the third. At this point, I bring back the first green chip and compare it with the fourth. The fourth chip now looks bluish by contrast, and is easily distinguished from the original. How does that happen? We don't notice tiny differences, but they add up to noticeable differences. We can be walked, step-wise, from any color to any other color without ever noticing a difference, provided our steps are small enough!
Same for sound, I bet. That's why I don't understand the obsession with pair-wise double-blind testing of individual components. Comparing two amps, alone, may not yield a discriminable difference. Likewise, two preamps might be pairwise indiscriminable. But the amp-preamp combos (there will be four possibilities) may be *noticeably* different from one another. I bet this happens, but the tests are all about isolating one component and distinguishing it from a competitor, which is exactly wrong!
The same goes for wire and cable. It may be difficult to discern the result of swapping out one standard power cord or set of ic's or speaker cables. But replace all of them together and then test the completely upgraded set against the stock setup and see what you've got. At least, I'd love to see double-blind testing that is holistic like this. I'd take the results very seriously.
From the holistic tests, you can work backward to see what is contributing to good sound, just as you can eventually align all color chips in the proper order, if presented with the whole lot of them. But what needs to be compared in the first place are large chunks of the system. Even if amp/pre-amp combos couldn't be distinguished, perhaps amp/pre-amp combos with different cabling could be (even though none of the three elements used distinguishable products!). I want to see this done. Double blind.
In short: unnoticeable differences add up to *very* noticeable differences. Why this non-additive nature of comparison isn't at the forefront of the subjectivist/objectivist debate is a complete mystery to me.
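The color-chip argument can be sketched numerically. This is only a toy model under invented assumptions (a just-noticeable difference of 1.0 in arbitrary units, chips spaced 0.6 apart), not perceptual data:

```python
# Toy model of the color-chip demonstration: steps each below a
# just-noticeable difference (JND) accumulate into a clearly
# distinguishable total difference. All numbers are illustrative.

JND = 1.0    # hypothetical just-noticeable difference (arbitrary units)
STEP = 0.6   # each chip differs from its neighbor by less than one JND

def distinguishable(a, b, jnd=JND):
    """An observer only notices a difference larger than the JND."""
    return abs(a - b) > jnd

chips = [i * STEP for i in range(5)]   # five chips along some hue axis

# Adjacent chips are pairwise indistinguishable...
adjacent = [distinguishable(chips[i], chips[i + 1]) for i in range(4)]
# ...but the first and last chips are easily told apart.
endpoints = distinguishable(chips[0], chips[-1])

print(adjacent)    # [False, False, False, False]
print(endpoints)   # True
```

The point carries over directly to component swaps: four sub-threshold changes in a chain can add up to one supra-threshold change.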
Troy: Psychoacoustics is well aware of the possibility that A does not necessarily equal C. That hardly constitutes a reason to question the efficacy of DBTs.
And you are quite correct that changing both your speaker cables and interconnect(s) simultaneously might make a difference, when changing just one or the other would not. But assuming you use proper level-matching in your cable/wire comparisons, there probably won't be an audible difference, no matter how many ICs you've switched in the chain. (And if you don't use proper level-matching in your cable/wire comparisons, you will soon be parted from your money, as the proverb goes.)
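To make the level-matching point concrete: small loudness differences are easily mistaken for quality differences. A commonly quoted rule of thumb (an assumption here, not from the post above) is to match levels to within about 0.1 dB; the sketch below shows how tiny a voltage difference that is:

```python
# Rough illustration of what 0.1 dB level matching means in voltage terms.
# The 0.1 dB tolerance is a commonly cited rule of thumb, assumed here.
from math import log10

def db(v1, v2):
    """Level difference between two voltages, in decibels."""
    return 20 * log10(v1 / v2)

# A mere 1.16% voltage difference is already about 0.1 dB.
print(round(db(1.0116, 1.0), 2))
```

In other words, a mismatch far too small to see on a cheap meter can still bias an "A sounds better than B" judgment.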
You might be interested to know that Stereo Review ran an article in June 1998, "To Tweak Or Not to Tweak," by Tom Nousaine, who did listening tests comparing two very different whole systems. (The only similarities were the CD player and the speakers, but in one system the CD player fed an outboard DAC.) The two systems cost $1700 and $6400. The listening panel could hear no difference between the two systems, despite differences in DACs, preamps (one a tube pre), amps, ICs, and speaker cables.
So, contrary to your assertions, this whole question has been studied, and there is nothing new under the sun.
My point was not to call into question the efficacy of blind testing. I am quite in favor of it. Even when only one element of a system is varied, the results are interesting, and valuable. For instance, if I can pairwise distinguish speakers (blindly) of $1K and $2K, but not be able to distinguish similarly priced amps, or powercords, or what have you, then my money is best spent on speakers. Likewise, if preamps are more easily distinguishable than amps, I'll put my money there. A site that's interesting in this regard is:
I never said DBT is ineffective. It's just that *most* testing ignores the phenomenon that I cited: sameness of sound is intransitive, i.e., a=b,b=c, but not a=c. If the question is whether a certain component contributes to the optimal audio system, this phenomenon can't be ignored.
Of course scientists studying psychoacoustics are already aware of the phenomenon. I don't think I'm making a contribution to the science here. But the test you cite above is an exception, and for the most part, A/B comparisons are done while swapping single components, not large parts of the system. This is fine, when you *do* discover differences. Because then you know they're significant. But when you don't find differences, it's indeterminate whether there are no differences to be found OR the differences won't show up until other similar adjustments are made elsewhere in the system.
But I am *very much* in favor of blind testing, even in the pair-wise fashion. For instance, I want to know what the minimum amount of money is that I could spend to match the performance of a $20K amp in DBT. Getting *that* close to a 20K amp would be good enough for me, even if the differences between my amp and it will show up with, say, simultaneously swapping a $1K preamp with a $20K preamp. So where's that point of auditorily near-enough for amps?
I've also learned from DBT where I want to spend my extremely limited cash: speakers first, then room treatment, then source/preamp, then amp, then ic's and such. I'll invest in things that make pair-wise (blind) audible differences over (blind) inaudible differences any day.
Still, for other people here, who are after the very best in sound, only holistic testing matters. Their question (not mine) is whether quality cabling makes any auditory difference at all, in the very best of systems. Same for amps.
Take a system like Albert Porter's. Blindfold Mr. Porter. If you could swap out all the Purist in his system and put in Radio Shack, and *also* replace his amps with the cheapest amps that have roughly similar specs, without his being able to tell, that would be very surprising. But I haven't seen tests like that... the one you mention above excepted.
In theory, I like the idea of double blind testing, but it has some limitations as others have already discussed. Why not play with some other forms of evaluating equipment?
My first inclination would be to create a set of categories, such as dynamics, rhythm and pace, range, detail, etc. You could have a group of people listen and rate according to these attributes on a scale of perhaps 1 to 5. You could improve the data by having the participants not talk to one another before completing their ratings, by hiding the equipment from them during the audition, and by giving them a reference audition where pre-determined ratings are provided, from which the rater could pivot up or down across the attributes.
Yet another improvement would be to take each rating category and pre-define its attributes. For example, ratings for "detail" as a category could be pre-defined as: 1. I can't even differentiate the instruments and everything sounds like a single tone. 2. I can make out different instruments, but they don't sound natural and I cannot hear their subtle sounds or noises. 3. Instruments are well differentiated and I can hear individual details such as fingers on the fret boards and the sound of the bow on the violin string. Well, you get the picture. The idea is to pre-define a rating scale based on characteristics of the sound. Notice that terms such as lush or analytical are absent, because they don't themselves really define the attribute; they are subjective conclusions.
Conceivably, a blend of categories and their attributes could communicate an analysis of the sound of a piece of equipment, setting aside our conflicting definitions of what sounds 'best', which is very subjective. Further, such a grid of attributes, when completed by a large number of people, could be statistically evaluated for consistency. Again, it wouldn't tell you whether the equipment is good or bad, but if a large number of people gave "detail" a rating of #2 and you had a low deviation around that rating, you might get a good idea of what that equipment sounds like and decide for yourself whether those attributes are desirable to you or not. Such a system would also, assuming there were enough participants over time, flesh out the characteristics of a piece of equipment irrespective of what other equipment it was used with, by relying upon a large volume of anecdotal evidence. In theory, the characteristics of a piece of equipment should remain consistent across setups, or at least across similar price points.
Lastly, by moving toward a system of pre-defined judgements one could create some common language for rating attributes. Have you noticed that reviewers tend to use the same vocabulary whether evaluating a $500 piece of gear or a $20,000 piece of gear? So the review becomes judgemental and loses its ability to really place the piece of gear in the spectrum of its possible attributes.
It's not a double blind study, but large doses of anecdotal evidence when statistically evaluated can yield good trend data.
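The consistency check proposed above is straightforward to compute. This is a minimal sketch with invented example scores; "low deviation" here just means the panel clustered around one rating:

```python
# Sketch of the proposed rating grid: hypothetical 1-5 scores from five
# listeners, summarized per category. A low standard deviation suggests
# the panel agrees on that attribute; a high one suggests it does not.
from statistics import mean, stdev

ratings = {                       # invented example data
    "detail":   [2, 2, 3, 2, 2],  # strong agreement around 2
    "dynamics": [4, 2, 5, 1, 3],  # no consensus at all
}

for category, scores in ratings.items():
    print(f"{category}: mean={mean(scores):.1f}, stdev={stdev(scores):.2f}")
# detail: mean=2.2, stdev=0.45
# dynamics: mean=3.0, stdev=1.58
```

With enough raters, a report like "detail averaged 2.2 with low spread" says something reproducible, whereas the dynamics scores above would flag a category where the pre-defined anchors need rework.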
Just an idea for discussion. If you made it this far, thanks for reading my rant :).
My apologies. I took you for the typical DBT-basher. As for amps, assuming you are talking about solid-state amps designed to have flat frequency response, I seriously doubt it matters (in a DBT) what preamp you use, or how expensive it is. If it has the power to drive your speakers, it will sound like any other good solid state amp with enough power to drive your speakers. Or so the bulk of the research suggests.
To your final point, I'm not sure what's in Mr. Porter's system, but the Nousaine experiment at least suggests that he would NOT notice such a swap, assuming you could finesse the level-matching issues. That's not to say that Mr. Porter's system is not right for Mr. Porter--merely that it might be possible for someone else to achieve a Porter-like sound for somewhat less money. And swapping out amps and cables is one thing; I wouldn't even dream of touching his turntable!
I find this a very interesting topic.
On one hand, it is somewhat accepted that the perfect component imposes no sonic qualities of its own on the passing signal, and yet voicing of components is often referred to - particularly in the case of cables.
So, if a component is purposely voiced, then the reproduction cannot be true to the source can it? Further, if the differences are so obvious as many anecdotally state, it should be no problem to pass BT, DBT, or ABX tests...
I have a huge problem with the concept of DBT with regard to trying to determine the differences, or lack thereof, between audio products. Maybe I'm just slow but, I often have to live with a piece of gear for a while before I can really tell what it can and cannot do.
DBT is great for something like a new medicine. However, it would be worthless if you gave the subjects one pill, one time. The studies take place over a period of time. And that is the problem with DBT in audio. You sit a group of people in front of the setup. They listen to a couple of songs, you switch whatever component, and then play a couple of songs. That just doesn't work. The differences are often very subtle and can't be heard at first.
Which, of course, is the dilemma of making a new purchase. You have to base your decision on short listening periods.
The concept of a DBT for an audio component is great. But I have yet to see how a test would be set up that would be of any value. Looking at test results based on swapping components after short listening periods would never influence my buying decisions. I wouldn't care how large the audience was or how many times it was repeated, any more than I would trust a new drug whose trial consisted of a single one-pill dose.
Agaffer, I agree. I have participated in DBTs several times and have found hearing differences in such a short term to be difficult, even though after long-term listening to several of the units, I clearly preferred one.
I think the real question is why do short-term comparisons with others yield "no difference" results while other circumstances yield "great difference" results. Advocates of DBT say, of course, that this reveals the placebo effect in the more open circumstances where people know what unit is being played. I think there are other hypotheses, however. Double blind tests over a long term with no one else present in private homes would exclude most alternative hypotheses.
The real issue, however, is whether any or many of us care what these results might be. If we like it, we buy it. If not, we don't. This is the bottom line. DBT assumes that we have to justify our purchases to others as in science; we do not have to do so.
DBT as done in audio has significant methodological issues that virtually invalidate any results obtained. With improper experimental design methodology, any statistics generated are suspect. Regularly compounding the statistical issues is sample size, usually quite small, meaning that the power of any statistics generated, even if significant, is quite small, again meaning that the results are not all too meaningful. Add to this the criticism that DBT, as done so far in audio, might be introducing its own set of artifacts that skew results, and we have quite a muddle.
I'm not at all opposed to DBT, but if it is to be used, it should be with a tight and valid experimental design that allows statistics with some power to be generated. Until this happens, DBT in audio is only an epithet for the supposed rationalists to hurl at the supposed (and deluded) subjectivists. Advocates of DBT have a valid axe to grind, but I have yet to see them produce a scientifically valid design (and I am not claiming an encyclopedic knowledge of all DBT testing that has been done in audio).
More interestingly, though, what do the DBT advocates hope to show? More often than not, it seems to be that there is not any way to differentiate component A (say, the $2.5K Shudda Wudda Mega monster power cord) from component B (stock PC), or component group A (say, tube power amps) from component group B (transistor power amps). Now read a typical subjectivist review waxing rhapsodically on things like soundstage width and height, instrumental placement, micro and macrodynamics, bass definition across the spectrum, midrange clarity, treble smoothness, "sounding real," etc., etc. Can any DBT address these issues? How would it be done?
You might peruse my posts of 8/13/05 and 8/14/05 about a power cord DBT session, carried out, I think, by a group that was sincere but terribly flawed in its approach, to get an idea of how an often-cited DBT looks when we begin to examine critically what was done.