IM Distortion, Speakers and the Death of Science


One topic that often comes up is perception vs. measurements.

"If you can't measure it with common, existing measurements it isn't real."

This idea is and always will be flawed. Mind you, maybe what you perceive is not worth $1, but this is not how science works. I'm reminded of how many doctors and scientists fought against modernizing polio interventions, and how only recently the treatment for stomach ulcers changed radically thanks to the curiosity of a pair of Australian physicians.

Perception precedes measurement.  In between perception and measurement there is (always) a translation into visual data.  Let's take an example.

You are working on telephone technology shortly after Bell invents the telephone. You hear that one type of transducer sounds better than another.  Why is that?  Well, you have to figure out some way to see it (literally), via a scope, a charting pen, something that tells you in an objective way why they are different, and that allows you to set a standard or goal and move towards it.

This person probably did not set out to measure all possible things. Maybe the first thing they decide to measure is distortion, or perhaps frequency response. After visualizing the raw data the scientist then has to decide what the units are, and how to express differences. Let's say it is distortion. In theory, there could have been a lot of different ways to measure distortion, such as (Vrms measured − Vrms expected) per Hz. Depending on the engineer's need at the time, that might have been a perfectly valid way to measure the output.
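To make the idea concrete, here is a rough modern sketch in Python (my own illustration with synthetic data, not any historical engineer's method) of one way distortion gets reduced to a single number: FFT a captured waveform and compare the energy at the harmonics to the fundamental.

```python
import numpy as np

def thd_percent(signal, fs, f0, n_harmonics=5):
    """Estimate total harmonic distortion: RMS of harmonics vs. fundamental."""
    spectrum = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
    freqs = np.fft.rfftfreq(len(signal), 1 / fs)

    def peak(f):  # magnitude of the bin nearest frequency f
        return spectrum[np.argmin(np.abs(freqs - f))]

    fund = peak(f0)
    harmonics = [peak(f0 * k) for k in range(2, n_harmonics + 2)]
    return 100 * np.sqrt(sum(h * h for h in harmonics)) / fund

# A 1 kHz tone with 1% second harmonic deliberately added
fs = 48_000
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 1000 * t) + 0.01 * np.sin(2 * np.pi * 2000 * t)
print(round(thd_percent(tone, fs, 1000), 2))  # ~1.0 (percent)
```

The point is only that someone had to *choose* this particular reduction of the raw data; other reductions were possible.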

But here's the issue. This may work for this engineer solving this problem at this time, and we may even add it to the canon of common measurements, but we are by no means done.

So, when exactly are we done? At 1? 2? 5?  30?  The answer is: we are not.  There are several common measurements for speakers, for instance, which I believe reviewers should perform more often:

- Compression
- Intermodulation ( IM ) Distortion
- Distortion

and yet, we do not. IM distortion is kind of interesting because I had heard about it before from M&K's literature, but it reappeared for me in the blog of Roger Russell ( http://www.roger-russell.com ), formerly of McIntosh. I can't find the blog post, but apparently they used IM distortion measurements to compare the audibility of woofer changes quite successfully.

Here's a great example of a new measurement being used and attributed to a sonic characteristic. Imagine the before and after.  Before using IM, maybe only distortion would have been used. They were of course measuring impedance and frequency response, and simple harmonic distortion, but Roger and his partner could hear something different not expressed in these measurements, so, they invent the use of it here. That invention is, in my mind, actual audio science.
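For readers curious what such a two-tone IM measurement looks like in software, here is a rough Python sketch (my own illustration, not McIntosh's actual procedure): mix a low and a high tone SMPTE-style (60 Hz and 7 kHz at 4:1), pass them through a nonlinearity, and measure the sideband products that appear around the upper tone.

```python
import numpy as np

def imd_smpte_percent(signal, fs, f_low=60, f_high=7000, n_pairs=2):
    """SMPTE-style IMD: sideband energy around f_high relative to f_high itself."""
    spectrum = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
    freqs = np.fft.rfftfreq(len(signal), 1 / fs)

    def peak(f):  # magnitude of the bin nearest frequency f
        return spectrum[np.argmin(np.abs(freqs - f))]

    carrier = peak(f_high)
    sidebands = [peak(f_high + k * f_low) for k in range(1, n_pairs + 1)]
    sidebands += [peak(f_high - k * f_low) for k in range(1, n_pairs + 1)]
    return 100 * np.sqrt(sum(s * s for s in sidebands)) / carrier

fs = 48_000
t = np.arange(fs) / fs
two_tone = 4 * np.sin(2 * np.pi * 60 * t) + np.sin(2 * np.pi * 7000 * t)
# a mild second-order nonlinearity creates sidebands at 7 kHz +/- 60 Hz
distorted = two_tone + 0.01 * two_tone ** 2
print(round(imd_smpte_percent(distorted, fs), 1))  # several percent IMD
```

A clean copy of the two tones measures near zero on the same meter, which is exactly what makes the before/after comparison useful.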

The opposite of science would have been to say "frequency response, impedance, and distortion" are the 3 audible characteristics, forever. Nelson Pass working with the distortion profile, comparing the audible results and saying "this is an important feature" is also science. He's throwing out the normal distortion ratings and creating a whole new set of target behaviors based on his experiments.  Given the market acceptance of his very expensive products I'd say he's been damn good at this.

What is my point in all of this?  Measurements in the consumer literature have become complacent. We've become far too willing to accept the limits of measurements from the 1980s and have failed to develop new standard ways of testing. As a result we have devolved into camps: those who say the 1980s measures are all we need, those who eschew measurements entirely, and very little being done to show us new ways of looking at complex behaviors. Some areas where I believe measurements should be improved:

  • The effects of vibration on solid-state equipment
  • Capacitor technology
  • Interaction of linear amps with cables and speaker impedance.

We have become far too happy with this stale condition, and, for the consumers, science is dead.
erik_squires

Showing 48 responses by andy2

I recall Einstein's thought experiments ... before his time, it was hard to find equipment like oscilloscopes, DMMs, or atom smashers.  All he had was his mind and his own thought experiments.  Now, with the advent of new technology, people look down on thought experiments as some kind of taboo.
Yes, there are measurements and they are valid, but nobody I know of has figured out how to measure our "hearing".

For example, there is no measurement I know of that can tell how good a woofer is just by looking at the freq. and phase plots.

MC is a controversial figure, but he said something I agree with. He said that Mercedes has spent millions of dollars designing all sorts of sensors, but at the end they still have to rely on Lewis Hamilton to tell them what's going on with the car. This does not mean the measurements made by Mercedes were not valid; it's just that there is a human element that cannot be measured.

Same with speaker design. After all the simulations, fine-tuning the freq. and phase plots, one still has to sit down, listen, and judge with one's own ears. One cannot judge a pair of speakers from the freq. and phase plots alone.

I don't mean to disregard measurements. On the other hand, quite the opposite. All the improvements in cables, drivers, capacitors, and inductors would not have happened without advances in measurement equipment and software.

I find it amusing that people are pitting "measurement" vs. "hearing" as some type of fight.  It's like asking which is better - apples or oranges.
As you narrow the scope of the problem down to specific variables, measurements can be used to quantify the "goodness" of something.  For example, it may be hard to judge how good a pair of speakers is with measurements, but if the device under test is a single capacitor or inductor, then it's easier to come up with a set of measurements to quantify the performance of said component.

But then at the end, I am afraid one still has to listen - God forbid.
"Linear Time Invariant systems are important because we could solve them." - Richard Feynman

Once we get into the non-linear things, it can get complicated real fast.  


Measuring process, and listening experience in a controlled environment, are not opposites things... They are complementary...
I agree. 

There are still things that are difficult to grasp, such as cable break-in, in which case people are demanding "measurement".  The problem is that the equipment required to measure the "break-in" phenomenon can be so expensive that very few cable companies can afford it.

doesn’t occur in any meaningful way and hence cannot be measured.
Actually, when it comes to speaker drivers (woofers, tweeters, ...), it has been shown that break-in does change the Thiele-Small parameters.
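As a toy illustration of why break-in shows up in the Thiele-Small parameters: a driver's free-air resonance is Fs = 1/(2π√(Mms·Cms)), and the suspension compliance Cms typically rises as the suspension loosens. Here is a small Python sketch with hypothetical driver values (the 20% compliance shift is an assumption for illustration, not measured data):

```python
import math

def resonance_hz(mms_g, cms_mm_per_n):
    """Driver free-air resonance: Fs = 1 / (2*pi*sqrt(Mms*Cms))."""
    mms = mms_g / 1000.0          # moving mass, g -> kg
    cms = cms_mm_per_n / 1000.0   # compliance, mm/N -> m/N
    return 1.0 / (2 * math.pi * math.sqrt(mms * cms))

# Hypothetical 6.5" woofer whose suspension loosens ~20% after break-in
fs_fresh = resonance_hz(12.0, 1.0)       # fresh out of the box
fs_broken_in = resonance_hz(12.0, 1.2)   # Cms up 20%
print(round(fs_fresh, 1), round(fs_broken_in, 1))  # resonance drops a few Hz
```

The shift is small but directly measurable with an impedance sweep, which is why driver break-in is not controversial the way cable break-in is.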

As for breaking in cables, it may require more sophisticated and expensive equipment to measure. Not only that, it's probably not trivial to measure, so it does take some knowledge. These two reasons are probably why you don't see any published data.


The point is to measure some quantifiable objective - before and after. As for equipment, I would need:
1. A really good analog oscilloscope that can measure jitter in time domain.

2. A really good phase noise analyzer to measure in frequency domain

3. A really good vector network analyzer to measure the freq. and phase response. The excitation signal can be varied in amplitude if you want to see how the cable responds to different input amplitudes.

These would measure the before and after, and then you compare the results. Here is a link from Troels, in which he modified a small woofer's freq. response and listened for the effect on soundstage and detail.
http://www.troelsgravesen.dk/W12CY003.htm

I suspect the pre-break-in measurement will be relatively "peaky" vs. the after break-in.  

But at the end, as I said above, one still has to listen.
Gentle readers, I implore you, how would you measure depth or height of soundstage, transparency, separation of instruments, perceived resolution, bass articulation, naturalness of high frequencies, air, presence, and warmth? Hel-loo! Is there a soundstage meter? Is there a glare meter?
The point is not to measure those that are viewed as too subjective.  The point is to measure quantifiable objective parameters.

Cut me some slack, Jack.
Everybody does when you post :-)
First you need a very low noise signal generator with jitter in the femtosecond range.

Here are some basic measurements one can take:

1. Sinewave sweep from 10Hz to 50KHz, measuring jitter in the time domain at each freq. increment with an oscilloscope.
2. Square wave sweep from 10KHz to 50KHz, again measuring jitter in the time domain at each freq. increment with an oscilloscope.
3. Measure phase noise from 1KHz to 20KHz at 1KHz increments.   This measurement is done in the frequency domain, so you can look for any peak or dip in the spectrum and compare before and after break-in.
4. With a network analyzer, measure the freq. and phase response. The network analyzer will do a sweep so you don't have to do that manually as in steps 1, 2, 3.
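Once the before/after sweeps are captured, the comparison itself is easy to automate. Here is a small Python sketch (with made-up magnitude numbers, purely to illustrate the idea) that flags the frequency points where the response changed by more than a chosen threshold:

```python
import numpy as np

def response_delta_db(freqs, mag_before, mag_after, threshold_db=0.5):
    """Compare two magnitude responses; flag points that moved >= threshold_db."""
    delta = 20 * np.log10(np.asarray(mag_after) / np.asarray(mag_before))
    flagged = [(f, round(d, 2)) for f, d in zip(freqs, delta)
               if abs(d) >= threshold_db]
    return delta, flagged

# Hypothetical sweep data: linear magnitudes before and after break-in
freqs  = [100, 1000, 5000, 10000, 20000]
before = [1.00, 1.00, 0.98, 0.90, 0.80]
after  = [1.00, 1.00, 1.00, 0.96, 0.85]
delta, flagged = response_delta_db(freqs, before, after)
print(flagged)  # only the top octaves moved past the threshold
```

The same before/after subtraction applies to the jitter and phase-noise sweeps; only the units change.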
The brain can detect angle to about 1-2 degrees. A computer can do it 100 times better. A computer can detect tones that are 1/1000 of an octave apart with ease, even 10 times that. The ear/brain, not even close.
That is missing the point.  Sure, objectively a computer can do many things better and faster than a human mind, but no computer that I know of can interpret musical reproduction the way the ear-brain can.

What you said is the equivalent of saying a computer is "smarter" than a human because it can perform mathematical operations such as addition and subtraction millions of times faster than a human brain.

A computer can perform addition much faster than any person on earth, but that does not mean the computer is smarter or even better.

If one takes the non-spiritual view then there is nothing about the human brain that cannot be replicated.
That is not true. Can you replicate Beethoven or Einstein? If you're right, then we would probably have a bunch of Einsteins running around already.  That is such a simple-minded point of view that I have to scratch my head.

Again, you seem to be confusing things that can be objectively replicated with things that belong to the conscious mind and cannot be replicated.

If you are faithfully recreating the signal that is engineering and science.
That is not possible even with current equipment. You can come close, but nothing in this world can replicate the original performance 100%. Currently we have 24-bit/192KHz, SACD, DSD and so on, but they all have their own compromises.

You seem to be putting yourself into a corner that you cannot get out of :-)
When you understand AI learning and architecture you realize in many ways that aspects are much like evolution and happening in real time. And yes, more creative that Beethoven and an even farther reaching mind than Einstein.
Reminds me of 1980s science fiction flicks :-) Watching too many of those can get you in trouble.

Fyi, 24/192 has 0 compromises within the limits of human hearing.
There are so many limitations even with today's digital technology, but talking about it would require getting into digital engineering that you may not be familiar with.
Can heaudio123 AI create for me a chef?  I am tired of my own cooking lols.
FYI, even at the cost of some of the high-end dCS stuff, I believe their Sigma Delta Ring DAC technology only has 5 bits on the output, because any more bits would overload the hardware to the point of being impractical to implement.

Yes, you read that correctly - only 5 bits.  That should wake up some people who think technology will replace humans.
 5 bit is more than adequate
No it's not.  They are using 5 bits due to hardware limitations and the over-complexity of the dynamic element matching network.  I don't think you know what you're talking about.

I am still waiting for Einstein or Beethoven coming out of your @$$ lols.

At least you now understand 5 bits. Still in shock? Didn’t expect that did you?
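For the curious, here is a toy first-order sigma-delta loop in Python with a 32-level (5-bit) quantizer, showing the basic trick such converters rely on: oversampling plus feedback lets a coarse quantizer average out to a much finer value. This is my own simplified sketch, not dCS's actual Ring DAC, which is a far more elaborate architecture with dynamic element matching.

```python
import numpy as np

def sigma_delta_5bit(x, osr=64):
    """First-order sigma-delta loop with a 32-level (5-bit) quantizer.
    Input samples in [-1, 1]; each is repeated `osr` times (oversampling)."""
    levels = np.linspace(-1, 1, 32)   # the 32 available output codes
    out, integ = [], 0.0
    for sample in np.repeat(x, osr):
        integ += sample - (out[-1] if out else 0.0)  # feed back last code
        out.append(levels[np.argmin(np.abs(levels - integ))])
    return np.array(out)

x = np.array([0.337])            # a value falling between 5-bit codes
y = sigma_delta_5bit(x, osr=256)
print(round(y.mean(), 3))        # averages out close to 0.337
```

The loop dithers between adjacent codes so that the time-average tracks the input far more finely than 5 bits would suggest; the engineering argument is about how well this works at speed, not whether it works at all.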
Until you extend your concept of what a "computer" is, you will maintain this simple and erroneous view of both what is possible with AI, and this view that the human mind is somehow "special". The human brain is just a biological implementation of what amounts to a computer, but if you have a hard concept of a computer only being digital logic then you will not understand where AI will go.
It's true that, at least in theory, a human brain is made up of the same stuff - electrons, neutrons, protons ... - just like a computer, so in that sense a brain is not "special", nor is a brain endowed with any "divine" properties.

But it's a stretch to say AI will replace humans.

For a given very specific task, an AI could replace a human, but only in a very limited and controlled environment.

For example, an AI can land a Boeing 747, but only in very limited and controlled situations.  When you add cross-wind, turbulence, and a multitude of other external factors, no sane pilot will allow an AI to take over the airplane.


All it seems to illustrate is a lack of understanding of how DACs work.
You seem to be saying all sorts of things randomly without knowing what you're talking about. Oh, maybe they had the wrong AI programmed in your head lols.  Sigma Delta can use however many bits your heart desires, but the more bits you use, the more it will overload your hardware.

Here's a link to educate you about dCS before you spout nonsense. It should shock you to learn that dCS only uses 5 bits.
https://www.diyaudio.com/forums/digital-source/71888-ring-dac.html
https://www.diyaudio.com/forums/digital-source/308923-difference-dcs-ringdac-vs-typical-sigma-delta-dem.html


Apparently AI can read the wiki page a lot faster than the human brain :-)
Penrose had no evidence, no proof, nothing
I suppose you do?  At least Penrose does not plagiarize the Wiki page :-)


I see in addition to not knowing how sigma-delta DACs work, not knowing much about AI, and not knowing how common mode inductors work, you also don’t know what the word plagiarize means, but like the other 3 that won’t stop you from posting about it.
Looks like you got caught on the DAC stuff, so now you're just blowing hot air lols. I mean, "one bit Sigma Delta" is probably from the Wiki page you read lols.

So obvious!


Anyway, I am still waiting for AI to cook for me.  I suppose I may have to wait for a pretty long time.
I notice that Elon Musk has stopped talking about Level 5 self-driving.  I think it's a lot more complicated to replace a human behind the wheel than he had thought.

I've also heard people telling me software can now land an airplane.  Sure, it can be done in an ideal and controlled environment, but when you factor in real-world weather conditions, it's still a human pilot landing the airplane.  If you were a passenger and your life is on the line, would you trust the software?  (737 MAX comes to mind?)

Another example is reading brain scans for signs of cancer.  Sure, there are AIs that are claimed to be able to do it, but at the end, a real doctor has to look it over and sign off.

AI can assist a human with some very specific tasks, but at the end, a human has the final say.



Can AI watch a youtube vid and make me a plate of clams linguine?  Or do I have to have everything prepared so the AI just has to mix it up at the end?
OK, so far it seems like AI can make burgers and sushi (albeit with humans behind the scenes preparing all the ingredients beforehand).
Based on someone here, AI has replaced so many professions that, the last time I checked, the US unemployment rate was at 3.5%.

I personally don't know of any general practitioner who got replaced by AI, and certainly no dentists.

Tasks like looking up the Wiki page will certainly be replaced by AI :-)

The last time I checked, Ford assembly lines are full of human beings putting together some pretty basic stuff. If anything, they would be the first ones to go, not chefs nor doctors nor pilots.

Back in the 1950s and 60s, there were tons of drawings of flying cars promising a future full of flying cars relieving traffic congestion.  Does it sound familiar?  The more things change, the more they stay the same.  But I promise this time it will be different :-)

Can anyone name one GP doctor who got replaced by AI?  I would love to have a job where I can say anything but don't have to back it up with any evidence.

Anyway, this morning I called my health insurance help center, and I was greeted by what was an AI automated helper.  I got so frustrated I almost slammed my phone down.  Luckily, I later got connected to a real person.
That was because of the virus. Unemployment was at around 3.5-3.6% for the good part of last year, before the virus.
2019 Unemployment by Month
  • January: The unemployment rate rose to 4.0%
  • February and March: The rate fell to 3.8%
  • April and May: The rate fell to 3.6%
  • June, July, August: The rate rose to 3.7%
  • September: The rate fell to 3.5%
  • October: It rose to 3.6%
  • November and December: It returned to 3.5%
2020 Unemployment by Month
  • January: The unemployment rate rose to 3.6%
  • February: The rate fell to 3.5%
  • March: The rate rose to 4.4%



Oh my God! Massive job losses because of AI. Unemployment at 3.5%.

This is getting hysterical.  Who would have thought flipping burgers would be replaced by AI?



but its inherent problem is similar to what medical researchers call "evidence-based medicine," resulting in a backward-looking bias that precludes real innovation.

It's refreshing to realize that even with the advent of massive data analysis tools like HSS, the randomness of the marketplace still can't be controlled.

I think AI will complement human beings, not replace them.  Like my example of the brain scan: AI will help doctors better diagnose early signs of cancer, but the doctor will have the final say in the matter.
able to learn from mistakes when provided with new information
That's not the same as "creativity".  Even if AI can learn, it can only learn within the confines of the given "algorithm".  An AI that was programmed to read brain scans can't learn how to cook, since cooking was not part of the original program.

That's the main difference between AI and humans.  AI learning is not the same as human learning.

Ever since computers came out in the 1980s, there's been talk of computers replacing humans. I heard the exact same things back then.  It's definitely been a very, very long tunnel :-)

Fancy talk cannot produce real-world evidence.
A lot of the time I will "mute" a youtube clip and use the "CC" function for subtitles, but I do notice there are quite a few errors.  Maybe it's more difficult in realtime. Dictation services can afford to rewind or fast-forward the clip, so that may reduce the error rate.

Do you know of any court that has replaced its human typists?
AI works by combining large amounts of data with fast, iterative processing and intelligent algorithms, allowing the software to learn automatically from patterns or features in the data.
It's hard to separate AI and algorithms.  Maybe AI is a special case of algorithm, but it is still an algorithm nevertheless, albeit a very complex one.  And like all algorithms, it's evidence-based, and maybe that is its limitation.  Humans, on the other hand, can create something unique - hence capable of creativity, and that's different from AI.


If you continue to use such a narrow definition of computer OR artificial intelligence, you will never understand it. If I called it artificial life instead of artificial intelligence, would it be easier for you to understand?
Actually, I think you got it backward.  You can call it by whatever name; it's still computer AI based on algorithms.



  • Simple algorithms on a computer
  • Digital "nets" directly on an IC with or without intentional noise (and often at low precision ... sort of like our brain), and if you want to play "word" games, that breaks your use of the word algorithm.
  • Analog nets (which really become almost exactly like our brain), and again, totally break YOUR use of the word "algorithm"

Just because it is implemented in hardware does not mean it's not an algorithm, but it still does not resemble a human brain.  The human brain is capable of regenerating and reconfiguring itself.

and often at low precision ... sort of like our brain
That is not true at all.  Human and animal brains are not "low precision".  Some animal brains are capable of detecting noise or smells at extremely low levels.  Just because you don't consciously sense it does not mean the subconscious mind does not process it.  There have been studies showing that some birds may navigate using quantum entanglement of electron spins.

In the nervous system, a synapse is a structure that permits a neuron (or nerve cell) to pass an electrical or chemical signal to another neuron or to the target effector cell.
That is definitely not "low precision".  

Think about it.  If your conscious mind were constantly aware of everything your brain is processing, you would be driven to insanity.

Anyway, I'll leave with this quote:
"Millions of monkeys won't be able to produce a work of Shakespeare by just randomly pounding on the keyboards".
Heaudio123 seems to be the only one on earth who thinks he has our brain all figured out, while everybody else on earth is still scratching their heads trying to figure it out. Logically, if you have something all figured out, you should be able to duplicate it.

Something as basic as why "sleep" is so important - nobody can agree on it or even understand why.
Along the same line, no machine on earth can create a living cell.  I mean, we understand the genetic code and gene duplication, and we can use electron microscopes to peer deep into the cell structure, yet still nobody can duplicate a living cell in the lab.  Maybe we should start with this instead of trying to duplicate the brain.
I think we are arguing about "precision" vs. "accuracy".  If I am sensitive to a 1mV input, then my precision is 1mV.

If the brain is sensitive to one neuron, whatever its charge might be, then the brain is precise to that one unit of charge (coming from the neuron).

Now, how accurate I am in counting how many "1mV" steps, or how many units of charge from the neuron, is something a bit different.



I understand "in theory", but I mean: can someone point me to a place on Earth that can actually do it - that is, make a cell that can multiply itself?

"In theory", since humans are made of the same stuff - electrons, neutrons, protons - anybody can, in theory, produce a human being.

There is a difference between "in theory" and "real world" evidence.
To say a neuron is a "machine" is too simplistic. A living cell can replicate itself. As far as I know, no lab in this world can create a living cell. And no machine in this world can replicate itself - that is, be re-generative.

To say that "since something is made out of the same electrons, neutrons, protons, then it is a machine" means that every single thing in this universe is a machine.  It seems too easy an argument.

I'll accept defeat if someone can point me to evidence that "a machine" can replicate itself, or to somewhere on this Earth where people are able to make a "living cell".
I mean literally replicate, as in a living cell replicating itself.

Maybe "replicate" is the wrong choice of word.  I mean "replicate" in the sense of a cell reproducing itself.  It may be possible in theory, but no lab on this Earth has been able to build a "cell" that can reproduce itself.
What's interesting is that we have electron microscopes that can look at a cell atom by atom, so in theory we know everything we need to know about what constitutes a cell and its basic architecture, but nobody has been able to create a "living cell" in a lab.

As for the brain, it's millions of times more complex (the word "millions" is one big understatement), so it may be a while before we can "understand" how the brain works.  I am not even sure it's possible.


As I said in my previous post, AI will augment human reasoning and decision making process, but I doubt it will replace it.    
Would anyone here trust your life to a 747 landing on its own?  Or to AI reading your brain scan?  Raise your hands.
IM distortion seems to be the only really important one as far as sound quality goes.
It is one of many variables.  Of two drivers, the one with lower IM is not necessarily the better one, although having low IM is important.  Another important variable is the material of the driver - paper, metallic, ceramic ... and so on.  Most modern drivers have fairly good IM, so the determining factor may be what material the driver is made of.

For example, a very good paper cone is still a paper cone and cannot compete with more exotic materials such as ceramic.