Are harmonic accuracy and timbre important at all?


Disclaimer: I am not Richard Hardesty in disguise. But I have reached similar ground after many years of listening, equipment swapping, and upgrading, and would enjoy discourse on a position that is simply not discussed enough here.

I feel a strong need to get on a soapbox here, albeit a friendly one, and I don't mind a rigorous discussion on this topic. My hope is that, increasingly, manufacturers will take notice of this important aspect of music reproduction. I also know that it takes time, talent, money and dedication to achieve timbral accuracy in speaker design, and that "shamanism" and "snake oil," along with major bux spent on fine cabinetry that may do little to improve the sound, exist everywhere in this industry.

I fully acknowledge that Dunlavy and Meadowlark, at least for now, are gone, and that only Vandersteen and Thiel survive amidst a sea of harmonically inaccurate, and frequently far more expensive, speakers.

Can you help me understand why anyone would want to hear timbre and harmonic content that is anything but as accurate as possible when transducing the signal fed by the partnering amplifier? It seems to me that if you skew the sonic results in any direction away from the goal of timbral accuracy, you introduce any number of poorly understood, potentially chaotic, and uncontrollable variables into the listening experience.

I mean, why would you want to hear only some of the harmonic content of a clarinet, or of any other instrument, that is contained on the recording? Why would you not want the speaker, which we all agree is the critical motor that conveys the musical content at the final stage of reproduction, to give you as much of that content as possible by minimizing the harmonic loss caused by phase errors intentionally imparted by the speaker designer?

Why anyone would choose a speaker that does this intentionally, by design, and that is the key issue here, is something I simply cannot fathom, unless most simply do not understand what they're missing.

By intentional, I mean inverting the polarity of the midrange or other drivers in an ill-fated attempt to counter the deleterious effects that inexpensive, high-order crossovers impart upon the harmonic content of timbre. This simply removes harmonic content. None of these manufacturers has ever had the cojones to say that Jim Thiel, Richard Vandersteen or John Dunlavy were wrong about this fundamental design goal. And none of them ever tries to counter the fact that they intentionally manufacture speakers they know, by their own hand, to be sonically inaccurate, while all the same, in many cases, charging unsuspecting so-called audiophiles outlandish sums of money.
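For anyone who wants to see this in numbers, here is a minimal sketch in Python (a toy model of my own; the 2 kHz crossover point and the Butterworth alignments are illustrative assumptions, not anyone's actual design) comparing the summed output of a 1st-order crossover with a 2nd-order crossover whose tweeter is wired in inverted polarity, the practice described above:

import numpy as np
from scipy import signal

fc = 2000.0                               # assumed crossover frequency, Hz
wc = 2 * np.pi * fc
w = 2 * np.pi * np.logspace(1, 4.3, 500)  # 10 Hz to 20 kHz, rad/s

# 1st-order (6 dB/oct) sections: low-pass + high-pass sum to exactly 1,
# i.e. flat magnitude and zero phase shift.
_, lp1 = signal.freqs([wc], [1, wc], w)
_, hp1 = signal.freqs([1, 0], [1, wc], w)
sum1 = lp1 + hp1

# 2nd-order Butterworth sections: summed in normal polarity they leave a
# deep null at fc, so the tweeter is commonly inverted, which fills the null
# (with a small bump at fc for this alignment) but leaves a net 180-degree
# phase rotation from the bottom of the band to the top.
b_lp, a_lp = signal.butter(2, wc, btype='low', analog=True)
b_hp, a_hp = signal.butter(2, wc, btype='high', analog=True)
_, lp2 = signal.freqs(b_lp, a_lp, w)
_, hp2 = signal.freqs(b_hp, a_hp, w)
sum2_inv = lp2 - hp2                      # minus sign = inverted tweeter polarity

i = np.argmin(np.abs(w - wc))
for name, h in [("1st order", sum1), ("2nd order, tweeter inverted", sum2_inv)]:
    print(f"{name}: |H(fc)| = {abs(h[i]):.2f}, "
          f"phase at fc = {np.degrees(np.angle(h[i])):.0f} deg")

The first-order sum prints a magnitude of 1.00 with zero phase shift; the inverted second-order sum avoids the null but shows a bump at the crossover point and lags it by 90 degrees there, part of a 180-degree rotation across the band. That is the kind of alteration being objected to here.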

There is also the use of multiple drivers assigned identical functions, which has clearly been shown to smear phase and create lobing, destroying the essentially point-source nature of instruments played in space and the spatial, timing and phase cues that are so important to the rendering of timbre.
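A back-of-the-envelope illustration of that lobing, again in Python (the 25 cm driver spacing, the 2 kHz tone and the listening angles are assumptions of mine, not measurements of any particular speaker): two drivers carrying the same signal drift in and out of phase at the listener simply because of path-length differences, so the combined level swings wildly with the angle you listen from.

import numpy as np

c = 343.0          # speed of sound, m/s
spacing = 0.25     # assumed vertical center-to-center driver spacing, m
f = 2000.0         # assumed probe frequency, Hz

for angle_deg in (0, 10, 20, 30):
    # Far-field path-length difference between the two drivers at this angle
    dpath = spacing * np.sin(np.radians(angle_deg))
    phase = 2 * np.pi * f * dpath / c
    level_db = 20 * np.log10(abs(1 + np.exp(-1j * phase)) / 2)
    print(f"{angle_deg:2d} deg off-axis: {level_db:+6.1f} dB relative to on-axis")

On-axis the two drivers add perfectly; around 20 degrees off-axis, at this spacing and frequency, they nearly cancel. That is the lobing, and it shifts with frequency as well as angle.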

I truly believe that as we all get better at listening to and enjoying all the music there is on recordings, digital and analog, of both good and bad recording quality, these things become ever more important. If you learn to hear them, they certainly do matter. But to be fair, this also requires spending time with speakers that, by design, demonstrably deliver as much of the harmonic and phase accuracy that timbre is built upon as the current state of the art allows.

Why would anyone want a speaker to alter that signal coming from the amp by removing some harmonics while retaining or even augmenting others?

And just why in heck do JMLab, Wilson, Pipedreams and many others have to charge such large $um$ at the top of their product lines (cabinetry with Ferrari paint jobs?) while not even caring to address, let alone attempting to achieve, this? So, in the end, I have to conclude that extremely expensive, inaccurate timbre is preferred by some hobbyists called audiophiles? I find that simply fascinating. Perhaps the appreciation of accurate timbre is just a matter of time... but in the end, more will find, as I did, that it does matter.
stevecham

Showing 3 responses by mauiaudioman

Much of the music we listen to today is recorded in studios using multi-track recording equipment. Microphones are chosen for their ability to best capture the sound of the instrument or voice being recorded. It's important to note here that much of what we call timbre is the harmonic makeup of the instrument: some harmonics louder than others, but all reaching your ears at precisely the right time.

Microphones do not hear the way the ear hears, and that too is important to understand. A microphone consists of a very delicate diaphragm suspended in air that moves back and forth with air pressure changes (sound waves). That diaphragm is connected to one of several different electromagnetic mechanisms that converts its motion into an alternating electrical current. That current flows in a cable connected to the mixing console and becomes the basis for the audio signal that we will process, record, and ultimately send to a loudspeaker, where it will be converted back to sound.

The ear, on the other hand, is a very complicated device conceptually. It also consists of a very delicate diaphragm suspended in air that moves back and forth with air pressure changes. That diaphragm is connected, via a fairly elaborate mechanical linkage, to a remarkable organ called the basilar membrane. At the basilar membrane, the mechanical motions are converted to neurological impulses that are sent to our brain. There, along with some other things, those impulses are presented to our conscious mind. In other words, the microphone converts sound into an analogous electrical waveform, while the ear converts it into neurological impulses.

The microphone has just one input and one output. We have two ears, and a big part of what goes on in the brain before the neurological information reaches our consciousness is the integration of the data from both ears into a single illusion. Each basilar membrane has about 30,000 outputs. Those 30,000 or so nerve endings are spread out across the membrane, so that each nerve ending ends up representing a different frequency, more or less. This is how we can discriminate pitch and harmonies. Visualize a microphone with a filter that divides the incoming signal into 30,000 different sine waves and transmits the loudness (and, for low-frequency signals, the phase) of each sine wave down a separate cable to the console. Visualizing that, are we?

Another important issue has to do with localization, the ability to discriminate which direction a sound is coming from. The microphone can't detect this at all, while the ear does it in several interactive and highly complex ways. As sound enters the outer ear, tiny reflections of the sound bouncing off the pinna (the flap of skin surrounding the ear canal) recombine with the direct signal to create very complex and distinctive interference patterns (comb filtering in the range between 5 and 15 kHz). Each different angle of arrival yields its own distinctive and audible pattern, and the brain uses these (actually it happens at the basilar membrane and in the auditory nerve on the way to the brain) to determine which direction any sound element is coming from, from each individual ear.

So yes, it matters. I sometimes wonder how many audiophiles have never lived with a time/phase accurate speaker. Like I said in my previous post, once you do, you won't go back to high order crossovers.
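The comb-filtering part of this is easy to demonstrate numerically. Here is a tiny Python sketch (the 60 microsecond reflection delay and the 0.5 reflection gain are illustrative guesses, not measured pinna data): a direct sound summed with one short reflection produces regularly spaced peaks and dips, and because the delay changes with the angle of arrival, every peak and dip moves with it.

import numpy as np

delay = 60e-6      # assumed pinna-reflection delay, seconds
freqs_hz = np.array([1000.0, 4000.0, 8000.0, 8333.0, 12000.0, 16000.0])

# Direct path (gain 1) plus one delayed reflection (gain 0.5):
# H(f) = 1 + 0.5 * exp(-j * 2*pi*f * delay)
H = 1 + 0.5 * np.exp(-2j * np.pi * freqs_hz * delay)
for f, mag in zip(freqs_hz, np.abs(H)):
    print(f"{f/1000:6.2f} kHz: {20*np.log10(mag):+6.1f} dB")

The first dip falls near 1/(2*delay), about 8.3 kHz for this assumed delay, squarely inside the 5-15 kHz region mentioned above.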
Steve, good post. The answer to your question is simple: most speaker designers don't know the physics behind designing a time/phase accurate speaker. It's much easier to claim that it's not audible. I suggest that anyone owning a pair of well designed time/phase coherent speakers for six months would never be able to go back to what most manufacturers claim is "hi end" again. I have been listening to Green Mountain Audio speakers for the last three years, and I can now hear the crossover in every non time/phase coherent speaker I hear.
Live sound travelling direct from an instrument to our ears does not have delay that changes with frequency superimposed on its original response. That delay is an artifact of speaker physics. We would not tolerate such phase smear in our consoles, mixing boards, amplifiers, preamps or any other piece of gear. As speaker technology improves, the remaining clues that we are listening to speakers, such as distortion, horn signature and other artifacts, are reduced. Phase delay is a subtle but critical clue to our ears, and its reduction puts us closer to the real thing. All other things being equal, the speaker with the flattest phase response sounds the closest to being there live. Every time. To claim that, because our rooms cause problems, we should accept phase shifts on the order of one full cycle from our loudspeakers does nothing but fuel the fire for designers who lack the knowledge to build a time/phase coherent product.
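To put a number on "delay that changes with frequency," here is a small Python sketch of the summed response of a textbook 4th-order Linkwitz-Riley crossover (the 2 kHz crossover frequency is an assumption, and this is a generic model, not any particular speaker). The magnitude comes out ruler flat, yet the group delay varies across the band, which is exactly the phase smear being described; a 1st-order sum, by contrast, carries none.

import numpy as np
from scipy import signal

fc = 2000.0
wc = 2 * np.pi * fc
w = 2 * np.pi * np.logspace(2, 4, 2000)   # 100 Hz to 10 kHz, rad/s

# LR4 sections are squared 2nd-order Butterworth filters; their sum is an
# all-pass: flat in magnitude but not flat in phase or delay.
b_lp, a_lp = signal.butter(2, wc, btype='low', analog=True)
b_hp, a_hp = signal.butter(2, wc, btype='high', analog=True)
_, lp = signal.freqs(np.polymul(b_lp, b_lp), np.polymul(a_lp, a_lp), w)
_, hp = signal.freqs(np.polymul(b_hp, b_hp), np.polymul(a_hp, a_hp), w)
H = lp + hp

phase = np.unwrap(np.angle(H))
group_delay_ms = -np.gradient(phase, w) * 1000.0   # -dphi/domega, in ms

for f_probe in (200, 1000, 2000, 5000):
    i = np.argmin(np.abs(w - 2 * np.pi * f_probe))
    print(f"{f_probe:5d} Hz: |H| = {abs(H[i]):.2f}, "
          f"group delay = {group_delay_ms[i]:.3f} ms")

Every line prints a magnitude of 1.00, but the delay is roughly a quarter of a millisecond through the bass and midrange and only a small fraction of that in the treble: the low frequencies of a transient arrive late relative to its highs, which is the frequency-dependent delay that live sound never has.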