Reviews with all double blind testing?


In the July 2005 issue of Stereophile, John Atkinson discusses his debate with Arnold Krueger, who, Atkinson suggests, fundamentally wants double blind testing of all products in the name of science. Atkinson goes on to discuss his early advocacy of that methodology and his realization that its conclusion, that all amps sound the same, proved incorrect in the long run. Atkinson’s double blind test involved listening to three amps, so it apparently was not the typical same/different comparison favored by those advocating blind testing.

I have been party to three blind tests and several “shootouts,” which were not blind and thus resulted in each component having advocates, since everyone knew which was playing. None of these ever produced a consensus. Two of the three db tests were same/different comparisons; neither resulted in a conclusion that people could consistently hear a difference. The third was a comparison of about six preamps, and there the substantial consensus was that the Bozak preamp surpassed more expensive preamps, with many designers of those preamps among the listeners. In both kinds of test there were individuals at odds with the overall conclusion, in no case were those involved a random sample, and in no case were more than 25 people involved.

I have never heard of an instance where “same versus different” methodology concluded that there was a difference, but apparently comparisons of multiple amps, preamps, etc. can result in one being generally preferred. I suspect, however, that those advocating db mean only the “same versus different” methodology. Do the advocates of db really expect that the outcome will always be that people can hear no difference? If so, is it that conclusion, rather than the supposedly scientific basis of db, that underlies their advocacy? Some advocates claim that if a db test found people capable of hearing a difference, they would no longer be critical, but is this sincere?

Atkinson puts it this way: the double blind test advocates want to be right rather than happy, while their opponents would rather be happy than right.

Tests of statistical significance also get involved here: some people can hear a difference, but if they are insufficient in number to achieve statistical significance, proponents say we must accept the null hypothesis that there is no audible difference. This is invalid, as the samples are never random and seldom, if ever, of substantial size. Since such tests properly apply to random samples, and statistical significance is greatly enhanced by large samples, nothing in the typical db test works to yield the result that people can hear a difference. This suggests that the conclusion, and not the methodology or a commitment to “science,” is the real purpose.
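The sample-size half of this point is easy to demonstrate with a quick calculation. A sketch using an exact one-sided binomial test; the 65% hit rate and the trial counts here are hypothetical numbers chosen for illustration, not figures from any test discussed in this thread:

```python
from math import comb

def p_value(hits, trials):
    """Exact one-sided binomial p-value: the chance of scoring at least
    `hits` out of `trials` by pure guessing (p = 0.5 per trial)."""
    return sum(comb(trials, k) for k in range(hits, trials + 1)) / 2**trials

# The same 65% hit rate, evaluated at two sample sizes:
print(p_value(13, 20))   # 13/20 correct: p is above 0.05, "no difference"
print(p_value(39, 60))   # 39/60 correct: p is below 0.05, significant
```

A listener who genuinely hears a difference 65% of the time "fails" a 20-trial test and "passes" a 60-trial one, which is exactly the sense in which small samples stack the deck toward the null result.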

Without db testing, the advocates suggest that those who hear a difference are deluding themselves: the placebo effect. But if we used db testing with a technique other than same/different, and people consistently chose the same component, would we not conclude that they are not delusional? This would test another hypothesis, that some people can hear better.

I am probably like most subjectivists, as I really do not care what the outcomes of db testing might be. I buy components that I can afford and that satisfy my ears as realistic. Certainly some products satisfy the ears of more people, and sometimes these are not the positively reviewed or heavily advertised products. Again it strikes me, at least, that this should not happen in the world that the objectivists see. They see the world as full of greedy charlatans who use advertising to sell expensive items which are no better than much cheaper ones.

Since my occupation is as a professor and scientist, some among the advocates of double blind testing might question my commitment to science. My experience with same/different double blind experiments suggests to me a flawed methodology. A double blind multiple-component design, especially with a hypothesis that some people are better able to hear a difference, would be more pleasing to me, but even then I do not think anyone would buy on the basis of such experiments.

To use Atkinson’s phrase, I am generally happy and don’t care if the objectivists think I am right. I suspect they have to have all of us say they are right before they can be happy. Well tough luck, guys. I cannot imagine anything more boring than consistent findings of no difference among wires and components, when I know that to be untrue. Oh, and I have ordered additional Intelligent Chips. My, I am a delusional fool!
tbg
So, Rouvin, if you don't think all those DBTs with negative results are any good, why don't you do one "right"? Who knows, maybe you'd get a positive result, and prove all those objectivists wrong.

If the problem is with test implementation, then show us the way to do the tests right, and let's see if you get the results you hope for. I'm not holding my breath.
Pabelson, interesting challenge, but let’s look at what you’ve said in your various posts in this thread. I’ve pasted them without dates, but I’m sure that you know what you’ve said so far.
"What advances the field is producing your own evidence—evidence that meets the test of reliability and repeatability, something a sighted listening comparison can never do. That’s why objectivists are always asking, Where’s your evidence?"
"A good example of a mix of positive and negative tests is the ABX cable tests that Stereo Review did more than 20 years ago. Of the 6 comparisons they did, 5 had positive results; only 1 was negative."
"It's better to use one subject at a time, and to let the subject control the switching."
"Many objectivists used to be subjectivists till they started looking into things, and perhaps did some testing of their own."

You cite the ABX home page, a page that shows that differences can be heard. Yet I recognize that the differences, when heard, were between components that were quite different, usually meeting the standard you’ve indicated: that much better specs will sound better.

Once you decide something does sound different, is this what you buy? Is different better? You say:
"Find ANYBODY who can tell two amps apart 15 times out of 20 in a blind test (same-different, ABX, whatever), and I’ll agree that those two amps are sonically distinguishable."
Does that make you want to have this amp? Is that your standard?
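For what it’s worth, the 15-out-of-20 criterion in that quote does map onto a conventional significance level. A quick check, assuming an exact binomial calculation:

```python
from math import comb

# Probability of getting 15 or more of 20 trials right by pure guessing:
p = sum(comb(20, k) for k in range(15, 21)) / 2**20
print(round(p, 4))   # 0.0207 -- about a 2% chance under the null
```

So a guesser clears that bar roughly once in fifty attempts, comfortably below the usual 0.05 threshold.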

One of the tests you cite was in 1998, with two systems that were quite different in more than price. Does that lend credence to the DBT argument? On the one hand you point to keeping everything the same but one component, with one listener and repeated trials, but then you cite something quite different to impugn subjectivists – not that it’s all that hard to do. You also cite a number of instances where DBT has indicated that there is a difference. Which is it? Is there “proof” of hearing differences that has been established by DBT? It certainly appears from the material you have cited that there is. By your argument, if this has been done once, the subjectivists have demonstrated their point. I don’t agree, and you really don't appear to, either.

My points were two, and I do not feel that they have been addressed by your challenge. One, most DBT tests as done in audio have readily questionable methods – methods that invalidate any statistical testing, as well as sample sizes that are far too small for valid statistics. The tests you cite in which differences were found do look valid, but I haven’t taken the time to go into them more deeply. Two, and far more important to me, do the DBT tests done, or any that might be done, really address the stuff of subjective reviews? I just don’t see how this can be done, and I’m not going to try to accept your “If you know so much ...” challenge. Instead, if you know so much about science and psychoacoustics, and you do appear to have at least a passing knowledge, why would you issue such a meaningless, conversation-stopping challenge? Experiments with faulty experimental design are rejected for journal or other publication all the time by reviewers who do not have to respond to such challenges. The flaws they point out are sufficient.

Finally, I’ve been involved in this more than long enough to have heard many costly systems in homes and showrooms that either sounded awful to my ears or were unacceptable to me one way or another. The best I’ve heard have never been the most costly but have consistently been in houses with carefully set up sound rooms built especially for that purpose from designs provided by psychoacoustic objectivists. This makes me suspect that what we have is far better than we know, a point inherent in many "objectivist" arguments. My home does not even come close to that standard in my listening room (and a very substantial majority of pictures I see of various systems in rooms around the net also seem to fall pretty short). The DBT test setups I have seen have never been in that type of room, either. What effect this would have on a methodologically sound DBT would be interesting. Wouldn’t it?
Rouvin: Let me take your two points in order. First:

One, that most DBT tests as done in audio have readily questionable methods – methods that invalidate any statistical testing, as well as sample sizes that are way too small for valid statistics.

Then why is it that all published DBTs involving consumer audio equipment report results that match what we would predict based on measurable differences? For badly implemented tests, they've yielded remarkably consistent results, both positive and negative. If the reason some tests were negative was because they were done badly, why hasn't anyone ever repeated those tests properly and gotten a positive result instead? (I'll tell you why--because they can't.)

Two, and the far more important point to me, do the DBT tests done or any that might be done really address the stuff of subjective reviews?

DBTs address a prior question: Are two components audibly distinguishable at all? If they aren't, then a subjective review comparing those components is an exercise in creative writing. You seem to be making the a priori assumption that if a subjective reviewer says two components sound different, then that is correct and DBTs ought to be able to confirm that. That's faith, not science. If I ran an audio magazine, I wouldn't let anyone write a subjective review of a component unless he could demonstrate that he can tell it apart from something else without knowing which is which. Would you really trust a subjective reviewer who couldn't do that?
Pabelson,
I think we may be closer than you think on this issue but you seem to want it both ways, a difficulty I see repeatedly in "objectivist" arguments. You say:
"For badly implemented tests, they've yielded remarkably consistent results, both positive and negative" –
all the while insisting on scientific assessment.

Methodologically unsound experiments yield no meaningful results, and a pattern of meaningless results does not matter. Your argument in this regard is emotionally appealing, but it is incorrect.

Moreover, the notion that "DBTs address a prior question: Are two components audibly distinguishable at all?" is also suspect absent appropriate methodology. I notice in your posts that you address reliability and repeatability, important factors without any doubt. Yet you have never spoken to the issue I have raised: validity, and this is the crux of our difference. Flawed methodology can reliably yield repeatable results, but those results are still not valid.

And, of course, as you have noted, many DBTs have shown that some components are distinguishable.

The issue beyond methodology, I suspect, is that there are some people who can often reliably distinguish between components. They are outliers, well outside the norm, several standard deviations beyond the mean even among the self-designated "golden eared." When testing is done on a group basis, these folks vanish in the group statistics. You can assail this argument on many grounds; it would be indefensible except for the virtual certainty that acuity of hearing is normally distributed in the population.
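The "outliers vanish in the group statistics" point can be made concrete with a toy calculation. The numbers are hypothetical (one discriminating listener scoring 18/20, pooled with 24 listeners scoring exactly at chance), using an exact one-sided binomial test against guessing:

```python
from math import comb

def p_value(hits, trials):
    """Exact one-sided binomial p-value against guessing (p = 0.5)."""
    return sum(comb(trials, k) for k in range(hits, trials + 1)) / 2**trials

# One genuinely discriminating listener: 18 of 20 correct.
individual = p_value(18, 20)             # well below 0.05: significant

# Pool that listener with 24 others who score exactly at chance (10/20 each):
pooled = p_value(24 * 10 + 18, 25 * 20)  # 258 of 500: not significant

print(individual < 0.05, pooled > 0.05)  # True True
```

Tested alone, the outlier is unmistakable; averaged into the group, the same performance disappears into noise.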

So, my position remains that there is surely a place for DBT testing, but even after all the methodological and sampling issues were addressed, I'm still unsure how it fits into the types of reviews most audiophiles want.

In your hypothetical magazine, after DBT establishes that the Mega Whopper is distinguishable from El Thumper Grande, how would either be described? Would there be a DBT for each characteristic?

Freud wrote a book on religion entitled "The Future of an Illusion," and you may well feel that this is where all of this ultimately leads. I'm not sure that I have an answer to that, but it may well be why Audio Asylum has declared itself a DBT-free zone.
Rouvin, bingo! Validity is the missing concern with DBTs. I also entirely subscribe to your question about where DBTing fits into the reviews that audiophiles want. As I have said, I cannot imagine a DBT audio magazine.

I am troubled by your comments that some DBTing has given positive results. Can you please cite these examples?