Analog vs. digital segment on PBS


The show "Wired Science" on PBS this week has a good segment on analog vs. digital with a relatively quick blind panel test on analog vs. digital. I think they replay the show during the week if you can catch it. Nice to see some of the hobby getting some primetime attention, if PBS can be considered primetime of course! They have a couple recording engineers speaking about the merits of each and a blind listening test between a recording group (whose music they use for the test) and some unbiased recording engineers.
Also some info on frozen brains... either way it's a great show for general technology every week.
jimmy2615

Showing 1 response by rouvin

The results stated at the end of the testing (20 brief samples per group, if I remember correctly) were: the engineer group was correct 55% of the time and the musician group 53% of the time, just slightly above chance (50%). These percentages seem difficult to interpret or parse. They appeared to be summed for each of the two groups, and the difficulty is determining what constituted a correct response in computing the percentages. If a sample were digital and one member of a pair responded "digital" and the other "analog," how would this be counted? Is it a group miss because one answer was correct and one incorrect, or is it two responses, one correct and the other incorrect? Was there a difference between the members of each pair in the percentage identified correctly (e.g., one engineer right 60% of the time and the other 50%, averaging to 55% for the engineer group)? If one group member were wrong more than 50% of the time, would this be an inverse correlation exceeding chance? (Being wrong 65% of the time could be interpreted as an ability to differentiate digital from analog, even though misidentified, and it would drag down the reported "correct" statistics.) Was there a difference in how accurately digital versus analog material was identified?
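Beyond the scoring questions, there is also the matter of sample size. As a rough sketch, and assuming the numbers recalled above (20 samples per group, with 55% corresponding to 11 correct), a simple binomial calculation shows how consistent such a result is with pure guessing:

```python
from math import comb

def p_at_least(k: int, n: int, p: float = 0.5) -> float:
    """Probability of getting at least k correct out of n trials by guessing."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Hypothetical figures based on the recollection above:
# 20 samples, 55% correct = 11 of 20 for the engineer group.
n_trials = 20
n_correct = 11

print(f"P(>= {n_correct}/{n_trials} correct by chance) = "
      f"{p_at_least(n_correct, n_trials):.2f}")  # roughly 0.41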

My experience in reading about these issues is that technical matters, such as how the testing is set up and administered (including some of the possible problems with the PBS test identified in other posts) and how the statistics are generated, organized, and analyzed, are often ignored in favor of a debate between the believers and doubters in each camp. If there are problems at any step of the procedure, the results cannot be meaningfully understood.

Given all of the issues identified in the procedure and the number of Rumsfeldian unknown unknowns, can anything be concluded from this segment?