Blind Power Cord Test & Results


Secrets of Home Theater and High Fidelity teamed up with the Bay Area Audiophile Society (BAAS) to conduct a blind AC power cord test. Here is the URL:
http://www.hometheaterhifi.com/volume_11_4/feature-article-blind-test-power-cords-12-2004.html

I suppose you can interpret these results to follow your own point of view, but to me they reinforce my thoughts that aftermarket AC cords are "audiophile snake oil."
maximum_analog
The first time I sat down to compare an "audiophile" power cord to a generic one, I was 100% sure I would find no difference. The idea seemed absurd to me. But listening to certain cords, such as those from Shunyata, quickly demolished that notion. Furthermore, I heard the same things that I subsequently read about in reviews.

This, to me, is a more relevant type of blind testing than sitting through short selections. I was blind to the possibility of differences, and I was blind to the types of differences that others were hearing on these same cables.

I don't know whether the cord that was used for testing in this article has a substantial sonic signature or not. I am certain that I could prepare a test between two cords where listeners would hear little or no difference, simply because there isn't any apparent sonic difference between the cords. (I'm referring to sonic differences, not price differences ;-)

Art
I've tried various power cords in the past and most do change the sound one way or another. A/B testing doesn't work for me; it usually requires at least 3-4 days for the differences to be fully noticeable. In just the past month I have gone through three different power cords on my source. For the first 3-4 days they all sounded very similar, with the only slight noticeable difference being in the bass. After 4 days or so, the differences became more apparent, and each takes my system in a slightly different direction.
I don't know the technical workings of this (feel free to jump in, Sean), but I think it has something to do with the power cord itself becoming electrically charged up. From my experience, and IMHO, the nuances of each cord only reveal themselves after that length of time. Any A/B comparisons that I did before then often yielded inconclusive results. If I had only done A/B comparisons, I might have drawn the conclusion that they make no difference.
I don't need someone else's review; I have done my own. I replaced the AC cords on my DVD player and front projector at the same time with VHAudio cryo'd AC cords, and the video was dramatically improved, nothing subtle about it. I was not using any power conditioners at the time. This was on a well-calibrated system I was naturally familiar with, and it was immediately apparent. Perhaps video is easier to perceive than audio.
Welcome to Statistics 101, and no snickering, please.
A not-all-that-close reading of the article will lead anyone familiar with experimental design to conclude that the results are without meaning. I'm sorry I didn't see this thread sooner so my comments could be more current. The sections in quotes were cut and pasted from the article.

1. Reading the article from “Secrets of Home Theater and High Fidelity,” it is apparent that the procedure has many, many steps that might influence the results, and that the two groups were run under different procedures. The non-comparability of the groups and the multiple steps mean that the overall procedure may be flawed. Is there “proof?” No, but there is equally a lack of evidence that this is a valid procedure. Stop, start; unplug, replug (on multiple components, no less); power down, power up; warm-up; musical selection length determined how? Half of the participants attended a “training session” the month before. The second trial had musical selections that were longer than the first. Group one (no snickering here, please) had the felt down for the training and felt up for the listening. Group two were felt up both ways. (I said, “NO SNICKERING!”) Group one ate after the test and greeted Group two. Group two ate before the test with Group one. Lots of experimental manipulation, all of which was quite apparent to the participants. What might this have done? Still, the two groups were different from one another and were different internally by virtue of the pre-training. At a minimum, the data from the two should not have been aggregated.

“Participants were 80% correct in their responses to the selection from the Berlioz Requiem. Manny calls this “very close to the threshold between chance and perception.” None of the other selections produced responses higher than 60%. This phenomenon correlates with John Atkinson’s experience that his participants fared best on massed choral music. If any of us were mad enough to conduct another blind test of this nature, I would choose audiophile recordings of massed choral music for at least 50% of the musical selections. It would be interesting to discover if it would make a difference.”

2. The procedure apparently produced at least one condition that had uniformly positive results, but Manny says it doesn’t matter. How did Manny reach this conclusion?

“In post-test discussion, several of us noted that we had great difficulty remembering what A had sounded like by the time we got through with X. Several participants said that the way they dealt with this phenomenon was by ignoring A entirely and simply comparing B to X without giving thought to A.”

3. Procedures may well have skewed the results.

“In many cases, statistically significant differences could be discerned by participants. In others, no differences could be discerned.”

4. He does not make it clear how he determined this. Even earlier in the review, he noted:

“...that the very procedure of a blind listening test can conceal small but real subjective differences....”

5. Hmmm.....

"But, no, you have to take all the data together. You can't just pick out the numbers that suit your hypothesis. This would be statistically invalid. Same thing with just looking at one music selection."

6. But, if the procedures have many elements that compromise the overall validity, this conclusion is unsubstantiated.

The fact that there was some evidence of statistically significant differences some of the time suggests that something may have been going on. Lumping all of the data together is not necessarily a good statistical procedure, particularly with so many manipulations going on, with the differences between the two groups, and with individual participants differing by virtue of having attended some pre-training. With any group, it could be that there is one individual who can detect differences, or one type of music that makes differences more detectable. No one is really sure what to make of statistical outliers (those many standard deviations from the mean), but citing group statistics does not address the issue, particularly with small groups.
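To make the aggregation point concrete, here is a minimal sketch in Python (using scipy and numpy; the panel size, trial counts, and hit rates are all hypothetical, not taken from the article) of how a pooled binomial test can wash out one listener who genuinely hears a difference among several who are guessing:

# Hedged sketch: per-listener exact binomial tests vs. one pooled test.
# All numbers below are hypothetical assumptions, not data from the article.
from scipy.stats import binomtest
import numpy as np

rng = np.random.default_rng(0)

n_listeners = 8                               # hypothetical panel size
n_trials = 7                                  # hypothetical ABX trials per listener
p_hear = [0.9] + [0.5] * (n_listeners - 1)    # one genuine detector, the rest guess

hits = [int(rng.binomial(n_trials, p)) for p in p_hear]

# Per-listener one-sided exact tests against chance (p = 0.5)
for i, h in enumerate(hits):
    p_val = binomtest(h, n_trials, 0.5, alternative="greater").pvalue
    print(f"listener {i}: {h}/{n_trials} correct, p = {p_val:.3f}")

# Pooled test: the one genuine detector is diluted by the guessers
pooled = binomtest(sum(hits), n_listeners * n_trials, 0.5, alternative="greater")
print(f"pooled: {sum(hits)}/{n_listeners * n_trials} correct, p = {pooled.pvalue:.3f}")

Depending on the draw, the genuine detector can come out individually significant while the pooled total hovers near chance, which is exactly why per-listener results deserve a look before everything is lumped together.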

"But, we can't do that and claim good science."

7. Calling it science does not make it science.
These procedures may or may not be flawed. What is clear, though, is that there is very little statistical power in such procedures -- two small groups with a large number of experimental conditions.
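As a rough illustration of the power problem, here is another hypothetical sketch (assuming a one-sided exact binomial test against chance at alpha = 0.05; the trial counts and true detection rates are mine, not the article's) that computes the probability of reaching significance for a listener who is genuinely right 70% or 80% of the time:

# Hedged sketch: power of a small ABX-style trial. Numbers are assumptions.
from scipy.stats import binom

def power(n_trials, true_rate, alpha=0.05):
    """Power of a one-sided exact binomial test against chance (0.5)."""
    # smallest number of hits k with P(X >= k | p = 0.5) <= alpha
    k_crit = next(k for k in range(n_trials + 1)
                  if binom.sf(k - 1, n_trials, 0.5) <= alpha)
    # probability that a listener with the given true hit rate reaches k_crit
    return binom.sf(k_crit - 1, n_trials, true_rate)

for n in (7, 10, 16, 25, 50):
    print(f"n = {n:2d} trials: power at 70% = {power(n, 0.7):.2f}, "
          f"at 80% = {power(n, 0.8):.2f}")

With only a handful of trials per selection, even a listener who genuinely hears a difference most of the time will often fail to reach significance, so a null result by itself says very little.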

My conclusion is that nothing can be learned from this test as structured. Using statistical procedures to analyze poorly run experiments cannot redeem them. Lots of experiments designed by far more accomplished folks are found to be flawed. This would never be published by anything other than an on-line audiophile publication.

Buy a better power cord and decide for yourself. Get it from a source that allows a trial period.

Rouvin
What a damn shame they didn't consult you before the test. How could they be so stupid? Thanks for setting us all straight.