Does 'Accuracy' Matter or Exist?


In the realm of audiophilia the word 'accuracy' is much used, and it is a problematic word for me.

In optics there was once coined a descriptor known as the 'wobbly stack', signifying a number of interdependent variables, and I believe the term has meaning for us audiophiles.

The first wobble is, obviously, the recording: which microphones to choose (there are many...), what kind of room to record in (an anechoic studio, a live environment, etc.), where to place the chosen microphones, how to equalize the sound, and, without doubt, the mindsets of all involved. This is a shaky beginning. The ears and preferences of the engineers and artists, and of course the equipment used to monitor the sound, exert a powerful front-end influence as well. Next comes the mixing (possibly monitored on a different set of speakers), where personal preference again guides the final adjustments. My thesis is that many of these 'adjustments' (EQ, reverb, etc.) exert a powerful influence of their own.

Maybe not the best start for 'accuracy', but certainly all under the heading of The Creative Process....

And then the playback equipment we all have and love.....turntables, arms, cartridges, digital devices, cables, and last but never least, speakers. Most, if not all, of these pieces of equipment have a specific sonic signature, regardless of the manufacturers' claims to the Absolute Sound. And each choice we make is dictated by what? Four things (excluding price): our own audio preferences, our already-existing equipment, most importantly our favorite recordings (wobble, wobble), and perhaps aesthetics.

Things are getting pretty arbitrary by this point. The stack of variables is teetering.

And let us not forget about the room we listen in, and the signature this imposes on everything (for as long as we keep the room...)

Is it any wonder there's so much choice in playback equipment? Reading reports and opinions on equipment can leave one in a state of stupefaction: so much of what is available promises 'accuracy', and yet each piece sounds unique.

Out there is a veritable minefield of differing recordings. I have long since come to the conclusion that some recordings favor specific playback equipment - at least it seems so to me. The best we can do is soldier on, dealing with this wobbly stack of variables, occasionally changing a bit here and there as our tastes change (and, as our Significant Others know, how we suffer.....).

Regardless, I wouldn't change a thing - apart from avoiding the word 'accuracy'. I'm not sure it means very much to me any more.
I've enjoyed every one of the (many, many) systems I've ever had: for each one there have been some recordings that have stood out as being
simply Very Special, and these have lodged deep in the old memory banks.

But I wonder how many of them have been Accurate........
57s4me
With all due respect, I think some of you guys keep missing the point.

The easily identified, inherent signature sound of Carnegie Hall (or any other hall) swamps any deviation caused by humidity, the number of attendees, etc. It really is not important to focus on those as relates to accuracy in a recording, although even those minor variations can, in fact, be heard on good recordings. The bigger and more important questions should be: is the recording faithful to that inherent, constant quality? Is the equipment able to pass that information on (accuracy, or some part thereof)? To dismiss the relevance of a standard on the basis of inevitable subtle variability is silly.
The main hall at Carnegie seats 2,800 people. Am I to believe that you think whether the hall is empty or full of concert-goers has only a subtle impact on the sound within it? If you do believe that, then I respectfully disagree. The presence of concert-goers will affect hall volume levels and the tonality of reflected sound. It's not a subtle effect.

I am not dismissing the relevance of a standard, instead I'm pointing out a severe limitation of that standard as commonly used.
This is very interesting: is the ideal of 'accuracy' at one and the same time quite attainable, yet (to all intents) impossible to verify?
If so, we are left in a position of 'having the faith'.

Absolute proof of accuracy seems dependent on a multitude of variables (too many to consider simultaneously), even, as Onhwy61 points out, the number of seats filled in the venue. The case seems made.

I for one find this perfectly acceptable: to lack faith in what we hear would seem to be nothing more nor less than destructive. If the choice is to fret or not, I for one will choose not to.
On the other hand I am exceedingly grateful to those who do......
Onhwy61, please read my comment again: ***The easily identified, inherent, signature sound of Carnegie Hall (or any other hall) swamps any deviation caused by humidity, number of attendees, etc.***

I know how much some dislike the idea of absolutes, but this is an absolute fact. I never said that the sound is not changed by whether the hall is full or not; of course it is. What I am saying is that the change does not in any way prevent the listener from identifying the hall as Carnegie, and easily. The hall's inherent sound is much more powerful than any change caused by the number of people in the seats. From that standpoint, yes, it's subtle.

I have performed on that stage upwards of one hundred times over my years as a musician. Almost every one of those occasions involved a dress rehearsal or soundcheck prior to the performance, so I was able to hear the hall empty, and then full, within the span of a few hours at most. I can say unequivocally that not once have I felt that the difference in sound was anywhere near the difference between two different halls. Moreover, if one could quantify this sort of thing, I assure you that the differences we are talking about are smaller than the differences we, as audiophiles, agonize over when choosing interconnects, where we could be talking about significant changes in tonal balance and the amount of detail heard. The difference is there to be sure, and important, but not to the point that the Carnegie sound could not be used as a reference. That was my point. Think of it this way: if your spouse has a slight cold and leaves you a message on your voice mail, are you suddenly unable to tell it was she who called? You are intimately familiar with the sound of her voice; it remains a valid reference.
Hi Onhwy61 - thanks for explaining! While the variables you speak of are not insignificant, I must agree with Frogman - all halls have an "easily identified, inherent, signature sound" which will still be there despite those variables - and the same goes for the specific timbral qualities of every human voice and every acoustic instrument. Far too many recording engineers nowadays do not make recordings that are accurate in Frogman's sense, and to me and most of my fellow musicians this is a much bigger issue than whether the playback equipment can then pass that information on. It certainly can't if the information isn't on the recording in the first place. I personally think that too many audiophiles blame the equipment when in fact they are listening to a bad recording job. But that's getting off topic.