The invention of measurements and perception

This is going to be pretty airy-fairy. Sorry.

Let’s talk about how measurements get invented, and how this limits us.

One of the great works of engineering, science, and data is finding signals in the noise. What matters? Why? How much?

My background is in computer science, and a little in electrical engineering. So the question of what to measure to make systems (audio and computer) "better" is always on my mind.

What’s often missing in measurements is "pleasure" or "satisfaction."

I believe in math. I believe in statistics, but I also understand their limitations. That is, we can measure an attribute, like "interrupts per second," "inflammatory markers," or Total Harmonic Distortion plus noise (THD+N).
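To make that concrete, here is a rough sketch of what a THD+N number actually is: everything in the spectrum that is not the test tone, relative to the test tone. This is a minimal illustration on a synthetic 1 kHz tone, not a lab-grade analyzer; the function name and the 1% second harmonic are my own choices for the example.

```python
import numpy as np

def thd_plus_n(signal, fs, f0):
    """Rough THD+N: RMS of everything except the fundamental,
    divided by RMS of the fundamental. Assumes f0 lands near an FFT bin."""
    spectrum = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
    freqs = np.fft.rfftfreq(len(signal), 1 / fs)
    fund_bin = np.argmin(np.abs(freqs - f0))
    # Treat a few bins around the fundamental as "the signal" to
    # absorb the window's spectral leakage.
    window = slice(max(fund_bin - 3, 0), fund_bin + 4)
    fund_power = np.sum(spectrum[window] ** 2)
    rest_power = np.sum(spectrum ** 2) - fund_power
    return np.sqrt(rest_power / fund_power)

# Synthetic 1 kHz tone with a 1% second harmonic and a little noise
fs, f0, n = 48_000, 1_000, 48_000
t = np.arange(n) / fs
sig = np.sin(2 * np.pi * f0 * t) + 0.01 * np.sin(2 * np.pi * 2 * f0 * t)
sig += 1e-4 * np.random.default_rng(0).standard_normal(n)
print(f"THD+N ~ {20 * np.log10(thd_plus_n(sig, fs, f0)):.1f} dB")
```

A 1% harmonic works out to roughly -40 dB here, which is exactly the kind of number that looks precise and objective while saying nothing yet about whether anyone can hear it.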

However, measuring an attribute and understanding its effect on outcomes and desirability are VERY different things. Companies that can bridge that gap excel at creating business value. For instance, like it or not, Bose and Harman excel (in their own ways) at finding this out. What someone will pay for and how low a measured distortion figure is are VERY different questions.

What is my point?

Specs are good. I like specs, I like measurements, and they keep makers from cheating (more or less). But there must be a link between measurements and listener preferences before we can attribute desirability, listener preference, or economic viability to them.

What is that link? That link is you. That link is you listening in a chair, free of ideas like price, reviews or buzz. That link is you listening for no one but yourself and buying what you want to listen to the most.


Response by vt4c

Lot of nice posts here.
Measurements are needed for development, quality control, etc. It would be impossible to design and produce equipment based only on auditory results.
Math is the philosophy that measurements put into practice. Measurements quantify a set of known errors. Noise is assumed to be random. The depth of human perception has not, as far as I know, been quantified. Thus measurements will tell you what is wrong with equipment, not what is right, but they are a good starting point.
Digital music formats introduced a new set of errors that older systems were not adapted to.
Test signals are usually kept simple to make the math easy. That obviously does not paint a complete picture. How do you extract deviations in stochastic signals, where complex intermods will happen?
Pseudo-random noise with a swept -130 dB notch as the source signal, measured with a corresponding analyzer that rejects everything but the swept notch?
(I thought about that back in the early 70's but had no way to design and build it.)
How much, or more importantly exactly what, is tolerable to a critical ear?
Why not introduce quantifiable errors and gauge their perceptibility?
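That closing question is essentially the stimulus side of an ABX listening test: inject a precisely known error, sweep its level, and have listeners report when A and B become distinguishable. A minimal sketch of the injection half, using a cubic nonlinearity as my example error (the function and levels are illustrative, not a standard protocol):

```python
import numpy as np

def add_harmonic_error(signal, amount):
    """Inject a quantifiable error: 'amount' of cubic nonlinearity,
    which creates odd-order harmonics and intermodulation products."""
    return signal + amount * signal ** 3

# Sweep the error level; a listening panel would then report, level
# by level, whether the clean and distorted versions sound different.
fs = 48_000
t = np.arange(fs) / fs
clean = 0.5 * np.sin(2 * np.pi * 440 * t)
for amount in (0.001, 0.01, 0.1):
    distorted = add_harmonic_error(clean, amount)
    err_db = 20 * np.log10(np.sqrt(np.mean((distorted - clean) ** 2))
                           / np.sqrt(np.mean(clean ** 2)))
    print(f"injected error {amount:>5}: {err_db:6.1f} dB re: signal")
```

Plotting detection rates against the injected level is exactly the measurement-to-perception link the original post says is missing.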