Thiel Owners


Guys-

I just scored a sweet pair of CS 2.4SE loudspeakers. Has anyone else owned this model, currently or previously?
Owners of the CS 2.4 or CS 2.7 are free to chime in as well. Thiels are excellent with both tubed and solid-state gear!

Keep me posted & Happy Listening!
jafant
Some are saying (on that thread) that perhaps somewhere down the line Jim realized it was of no sonic consequence, but kept doing time/phase-coherent designs because Thiel had already built a reputation marketing that characteristic. I think that is nonsense.

John Atkinson at Stereophile once said that, all else being equal, he did notice that time/phase-coherent speakers have an advantage in soundstage presentation. The difficult part is determining whether one pair of speakers is superior to another because of its time/phase coherence or because of something else. For example, the CS2.4 may have better soundstaging than another pair of speakers, but maybe that is because it is simply a better design with better driver integration, and not because of the time/phase coherence. How can you be 100% sure the CS2.4 is better because of its time/phase behavior and not something else? Maybe the CS2.4's superiority comes from its coax driver and the quality of its crossover? So you end up comparing apples to oranges.

The proponents of time/phase always point to the "step response". But if the "step response" were so important, you would think that non-time/phase-coherent speakers shouldn't be able to reproduce music at all, period: in theory, if you can't replicate the actual input electrical signal, then the output is all wrong and what you hear should be garbage. But obviously, non-coherent speakers can reproduce music just fine, so there is a contradiction, and therefore the "step response" is not a valid criterion, right?

I’ve been thinking about this but haven't reached a firm conclusion. I have a couple of explanations, but really it could be anyone’s guess.

First, maybe our hearing is very tolerant. Even with non-coherent speakers, if they come close to reproducing the music, our hearing won’t really care much. But if the speakers happen to be coherent, then it is icing on the cake. It’s like baking a cake: anyone can bake one, and most of the time any cake would be fine, but if a really nice coherent cake comes along, it wakes up our taste buds.

Secondly, and this may be related to the first, the step response in theory has infinite frequency bandwidth, but our hearing is limited to roughly 20KHz. I won’t go into the mathematical details of the infinite-bandwidth part, but you can look it up. So the step response is not a valid "test" for our hearing, since our hearing won’t care much about the very high frequency content. I would imagine that if we humans had ultrasonic hearing capability all the way into the MHz range, then I am sure we could clearly hear differences between coherent and non-coherent speakers, and the step response would be valid. Of course, if a pair of speakers is just plain garbage, then anyone can tell :-)

Anyway, I’ll try to capture a step response in the next post to illustrate the bandwidth-limited theory. Looking at a simulated step response from one of my designs, it is consistent with what I said above about our bandwidth-limited hearing.
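A minimal sketch of that kind of simulation (assuming numpy, scipy and matplotlib; the 20kHz low-pass is only a crude stand-in for the hearing limit, not a model of any particular speaker):

# Band-limit an ideal step to ~20 kHz and compare the two waveforms.
import numpy as np
from scipy.signal import butter, lfilter
import matplotlib.pyplot as plt

fs = 192_000                      # sample rate, Hz
t = np.arange(0, 0.002, 1 / fs)   # 2 ms window
step = np.ones_like(t)            # ideal, infinite-bandwidth step
step[0] = 0.0

# 4th-order Butterworth low-pass at 20 kHz as a rough "hearing bandwidth" limit
b, a = butter(4, 20_000, btype="low", fs=fs)
step_bl = lfilter(b, a, step)

plt.plot(t * 1000, step, label="ideal step (infinite bandwidth)")
plt.plot(t * 1000, step_bl, label="band-limited to ~20 kHz")
plt.xlabel("time (ms)")
plt.ylabel("amplitude")
plt.legend()
plt.show()

The band-limited step rounds off and rings a little, yet carries the same information below 20kHz, which is the point about our hearing not caring about the rest.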

Regardless of time/phase or not, I DO see an advantage in first-order designs vs. higher-order ones based on various listening experiences. A first-order filter is the only filter that does not have phase distortion.
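As a quick numerical check of that last point (my own sketch in plain numpy, with an arbitrary 2kHz crossover): a first-order low-pass/high-pass pair sums to exactly 1, flat in magnitude and with zero phase shift, while even a well-behaved 2nd-order Linkwitz-Riley pair sums flat in magnitude but rotates its phase through 180 degrees from low to high frequencies.

import numpy as np

fc = 2000.0                                 # crossover frequency, Hz (arbitrary)
f = np.logspace(1, np.log10(20_000), 500)   # 10 Hz .. 20 kHz
s = 1j * f / fc                             # normalized complex frequency

# First-order (6 dB/octave) crossover: the sections sum to exactly 1
lp1 = 1 / (1 + s)
hp1 = s / (1 + s)
sum1 = lp1 + hp1

# 2nd-order Linkwitz-Riley, one section inverted so the pair sums to an all-pass
lp2 = 1 / (1 + s) ** 2
hp2 = -(s ** 2) / (1 + s) ** 2
sum2 = lp2 + hp2                            # |sum2| = 1, but the phase rotates

print("1st order: max magnitude error =", np.max(np.abs(np.abs(sum1) - 1)))
print("1st order: max phase (deg)     =", np.max(np.abs(np.degrees(np.angle(sum1)))))
print("LR2:       max magnitude error =", np.max(np.abs(np.abs(sum2) - 1)))
print("LR2:       phase at fc (deg)   =",
      np.degrees(np.angle(sum2[np.argmin(np.abs(f - fc))])))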


Prof, Andy and all - lotta stuff to chew on here. We approached these matters a few months ago and got into trouble. I suggested that study was in order, not intending to disparage anyone - it is all quite subtle and worthy of more depth than we can enter into here.
Prof: Toole's statement is false, and it carries lots of baggage. A: The basis of his mistake is that Jim candidly stated that it would be foolish for Thiel to approach the market with anything other than phase coherence. Note the difference in Toole's inference. B: It is nonsense. But Toole has a professional investment in the non-importance of phase coherence.
Andy: You state it well "Maybe our hearing is very tolerant". It is. It is more than that: hearing is a synthetic activity, we create the heard experience via very complex mechanisms. In a fiendish twist, the more sophisticated the listener, the less phase coherence matters, because s/he can create the heard experience despite the incoherent content.
As Andy alludes, the non-believers point to bandwidth limitations at 20kHz max to nullify the importance of waveform integrity. My study of audio and auditory neurology reveals that multiple parallel tracks decode the auditory stimulus, and the whole body is involved including the ears, mastoid process, sinus cavities, solar plexus and skin envelope - all working together to sense, decode and decide on the nature of incoming sound. The right and left ears transmit to different parts of the brain for different kinds of processing and the entirety is eventually reconciled into an aural image - what we think we heard. It is all very fascinating and far from completely understood science. I have been blessed to know some outstanding Otorhinolaryngologists as part of my learning. Audio engineers, even the best, barely scratch the surface.

One circumstance in play is that the temporal domain is not limited by the 20kHz frequency-domain limit. Onset transient form and integrity, which we can reliably hear, translate to waveforms in the 200kHz range - that's 10x the frequency-domain limit. Such variables are routinely ignored or dismissed by many audio scientists and engineers, in great part because they are inconvenient. The effort and knowledge needed to design and engineer a product (a Thiel speaker) which honors time and phase along with the traditional domains is orders of magnitude more complex than the generally accepted models would require.
Andy, your closing statement is true: "First order filter . . . does not have phase distortion". Again, we got in trouble over phase distortion earlier. First order is correct on all fronts. All other forms, such as 4th-order linear phase, possess forms of phase distortion including pre-ringing and other anomalies. Those distortions can all be managed, and valid products are designed with such work-arounds; the ear-brain is a magnificent synthetic filter. It has been said here before: the kind of care required to produce a speaker which honors phase/time necessarily yields a very thoroughly engineered speaker. Many subtle problems which can be ignored in non-coherent speakers become very obvious when phase coherence is introduced, because the auditory mind considers those sounds to be real rather than electronic facsimiles.
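For what it's worth, here is a generic sketch of the pre-ringing mentioned above (nothing Thiel-specific, just scipy's stock FIR tools): a linear-phase FIR low-pass fed a step shows ripple before the main transition, something no minimum-phase first-order filter does.

import numpy as np
from scipy.signal import firwin, lfilter
import matplotlib.pyplot as plt

fs = 96_000
taps = firwin(255, 20_000, fs=fs)     # linear-phase FIR low-pass at 20 kHz
step = np.ones(1024)
out = lfilter(taps, [1.0], step)      # run the step through the filter

delay = (len(taps) - 1) // 2          # constant group delay of a linear-phase FIR
plt.plot(out, label="step through linear-phase FIR")
plt.axvline(delay, color="gray", linestyle="--", label="nominal transition point")
plt.xlabel("sample")
plt.ylabel("amplitude")
plt.legend()
plt.show()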

Andy, I think the step response may be the most useful tool in the kit. With knowledge, it contains the whole envelope, including frequency response.
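A minimal sketch of that idea, assuming a clean, uniformly sampled step measurement (the function name is just illustrative): differentiate the step to get the impulse response, then FFT it to recover the magnitude and phase response.

import numpy as np

def freq_response_from_step(step, fs):
    """Recover frequency response from a measured step response sampled at fs."""
    impulse = np.diff(step, prepend=0.0)      # derivative of the step = impulse response
    spectrum = np.fft.rfft(impulse)
    freqs = np.fft.rfftfreq(len(impulse), 1 / fs)
    mag_db = 20 * np.log10(np.abs(spectrum) + 1e-12)
    phase_deg = np.degrees(np.angle(spectrum))
    return freqs, mag_db, phase_deg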
My study of audio and auditory neurology reveals that multiple parallel tracks decode the auditory stimulus, and the whole body is involved including the ears, mastoid process, sinus cavities, solar plexus and skin envelope - all working together to sense, decode and decide on the nature of incoming sound.
Hi Tom,

Do we know if our eardrum can vibrate at much higher frequencies than 20KHz? In order for our brain to process higher frequencies, our eardrum, at least mechanically, must not be the bottleneck, which is something that can presumably be determined fairly easily. We evolved from primitive animals and I am pretty sure they all possess the ability to hear much higher frequencies because it is critical for their survival, but as we evolved it became less critical for us, so I guess our ability to process high frequencies is no longer there.

One circumstance in play is that the temporal domain is not limited by the 20kHz frequency-domain limit. Onset transient form and integrity, which we can reliably hear, translate to waveforms in the 200kHz range

That is an interesting claim. Theoretically I suppose it's possible, but I can see that proving it could be problematic. I am no longer as young as I used to be, but when I play a 15KHz tone, I swear I cannot hear it :-) Music is more than just a single sine-wave tone, though, so I guess that cannot be used as proof. Raise your hand if you can hear a 20KHz tone. God bless you :-)
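For anyone who wants to try the raise-your-hand test at home, a crude sketch, assuming the third-party sounddevice package and a playback chain that actually reaches these frequencies (many don't, so a negative result proves little, and keep the level modest):

import numpy as np
import sounddevice as sd

fs = 96_000
dur = 2.0
for freq in (1_000, 10_000, 15_000, 20_000):
    t = np.arange(int(fs * dur)) / fs
    tone = 0.2 * np.sin(2 * np.pi * freq * t)   # low amplitude on purpose
    print(f"playing {freq} Hz ...")
    sd.play(tone, fs)
    sd.wait()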


But let's set our hearing aside and look at things objectively. Say you were to design a speaker that acts purely as a transducer - that is, it is required to convert an electrical signal into acoustic sound pressure. Usually you would come up with a spec that says something like:
My transducer works from 0 to 200KHz (or 2MHz, or whatever range) with a certain harmonic distortion. You would then have to show data to prove the spec. What you would do is play various sine-wave tones from 0 up to 200KHz or 2MHz and measure the sound pressure at each frequency, including distortion.

My guess is that the higher the frequency, the more distortion and phase shift the transducer will show, and beyond a certain frequency the distortion will get so large that the transducer will no longer be able to produce a clean sinewave. With this method, you could objectively compare two different transducers.
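One possible way to turn that stepped-sine idea into a number (my own sketch, not a standard anyone here has cited) is to compute THD at each test frequency from the captured waveform:

import numpy as np

def thd_percent(captured, fs, f0, n_harmonics=5):
    """THD of a steady-state sine capture at fundamental f0, in percent."""
    window = np.hanning(len(captured))
    spectrum = np.abs(np.fft.rfft(captured * window))
    freqs = np.fft.rfftfreq(len(captured), 1 / fs)

    def bin_amp(f):
        return spectrum[np.argmin(np.abs(freqs - f))]

    fundamental = bin_amp(f0)
    harmonics = [bin_amp(k * f0) for k in range(2, n_harmonics + 2)
                 if k * f0 < fs / 2]
    return 100 * np.sqrt(sum(h ** 2 for h in harmonics)) / fundamental

# Sanity check on a synthetic capture: 1 kHz tone with a 1% third harmonic
fs = 192_000
t = np.arange(int(fs * 0.1)) / fs
test = np.sin(2 * np.pi * 1000 * t) + 0.01 * np.sin(2 * np.pi * 3000 * t)
print(thd_percent(test, fs, 1000))   # prints roughly 1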

The problem with the step response is that it covers such a wide frequency bandwidth that it is not easy to use for comparison or characterization.

Back to speaker design, I would suspect a true time-coherent speaker will be able to produce higher-frequency tones with less distortion than a non-coherent speaker. And of course, as we go higher and higher in frequency above 20KHz, the distortion on average will get higher and higher for any speaker.

Back to Tom's claim that we can actually process signals as high as 200KHz: as I said in my previous post, the higher the frequency humans can process, the more likely we are to hear the difference with a coherent speaker.
Andy - there's too much to chew on here. But I can comment a little. No, I do not think we hear tones above 20kHz. And I know that dogs do, that Natasha hears bats talk, and that fish sense 50kHz signals.
David Blackmer (dbx founder) and others have demonstrated that we can detect the presence or absence of 40kHz tones when they are riding on audio-frequency tones. We also know from auditory research that impulses are processed in the time domain. In other words, a crack or snap is perceived directly as a crack or snap, with directional and other information that is not tonal. That impulse is further decoded in the brain to "hear" its component frequencies, much like a Fourier Transform.
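As a toy illustration of the "much like a Fourier Transform" point, a very short click contains energy across a very wide band even though it has no tone of its own (plain numpy, arbitrary click length):

import numpy as np

fs = 192_000
click = np.zeros(4096)
click[2048:2048 + 8] = 1.0                  # ~42 microsecond rectangular click

spectrum = np.abs(np.fft.rfft(click))
freqs = np.fft.rfftfreq(len(click), 1 / fs)
# An 8-sample (~42 us) pulse has its first spectral null near 24 kHz,
# so its energy is spread over the whole audio band and beyond.
for f in (1_000, 5_000, 10_000, 20_000):
    idx = np.argmin(np.abs(freqs - f))
    print(f"{f:>6} Hz: level relative to DC = {spectrum[idx] / spectrum[0]:.2f}")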

I am not claiming that a coherent speaker plays higher tonally than an incoherent speaker, merely that the temporal content is processed and "heard". Some individuals are quite sensitive and others completely insensitive to this temporal / impulse information. My suspicion is that Thiel customers probably fall in the time-sensitive camp more often than normal. 
My upper limit is now 4kHz, dropping at 12dB/octave. So I'm down more than 24dB at 20K. However, I can hear the artifacts of different digital filters working in the range of 20K and above. My point is that tonality is only one aspect of hearing and does not define the limits of auditory input. That is my opinion, which is in good company albeit in the distinct minority.
(A fascinating observation from playing with the rear-firing second speakers a couple of weeks ago: I could tell more about the various digital filters when playing the filter changes through the rear-firing speakers than through the front-firing speakers. Also, reversing the polarity of the rear-firing speakers did not change my ability to perceive which filter was in use. Go figure!)

Perhaps more to the point in speaker design, we at Thiel systematically discovered the auditory - emotional - holistic importance of an accurate phase/time component in the musical signal. In particular, the absence of phase distortion lifts a mental veil which allows the audio brain to see more thoroughly into the essence of the sound. Sound processing is processor (brain) intensive, and removing the big demand of reconstructing time/phase information from a scrambled signal frees the brainpower to perceive other subtleties of the signal (in my considered opinion). That effect might be called psychoacoustic, but it is nonetheless real given the fact of auditory processing system limitations.

My present work on lifting a veil for the Renaissance revitalizations makes use of this insight. I would not even hear the veil on a higher-order system. But I can on this minimum-phase system, and I can hear considerable detail and make and test constructive hypotheses, all well below intelligibility on a high-order system.