Why do amps sound different?


Hi folks, can anyone tell me why amps sound different? I know this seems like a trivial question, but it isn't as trivial as I previously thought. For example: one amp can sound "warm", while another can sound "lean" and a bit "cooler". These amps measure the same on the test bench, but why do they sound different? What causes the "warm" characteristic if the amp has pretty good measurements and frequency response? It is certainly not a measurable high-frequency roll-off, otherwise the amp simply sucks. Maybe one of the experts among us can elucidate this issue a bit. Thank you.

Chris
dazzdax
A very likely reason two amps can sound different, yet measure similarly on the test bench, is that the measurements typically taken are not done under the real load an amplifier will encounter with a dynamic waveform. Secondly, I'm not sure the typical bench measurements are measuring the right things regarding how an amplifier will sound. For a simple example, what if THD is basically the same, but the distribution of even- and odd-order harmonic distortion differs among the various amps? Or what if various parameters, each perhaps not significant alone, combine in ways that alter our brain's interpretation of the sound?
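
To put rough numbers on that idea (the harmonic levels below are invented purely for illustration, not measurements of any real amp), two distortion spectra can land at essentially the same THD while one is even-order dominated and the other odd-order dominated:

```python
import math

# Invented harmonic levels (as fractions of the fundamental) for two hypothetical amps.
def thd(harmonics):
    """Total harmonic distortion as a fraction of the fundamental."""
    return math.sqrt(sum(level ** 2 for level in harmonics.values()))

amp_a = {2: 0.00098, 3: 0.0002}                 # dominated by the 2nd (even) harmonic
amp_b = {3: 0.0007, 5: 0.0005, 7: 0.00051}      # dominated by 3rd/5th/7th (odd) harmonics

for name, spectrum in (("Amp A", amp_a), ("Amp B", amp_b)):
    print(f"{name}: THD ~ {thd(spectrum) * 100:.3f}%")
# Both print ~0.100%, yet the two spectra are very different in character.
```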

I think it's not so simple a question you've asked. In my opinion, the best amplifier manufacturers have a basic measurement model they've developed and trust that allows them to accept or reject a given design before putting much additional time and effort into developing it. I would also guess that any manufacturer that claims to develop a design solely by ear, beginning to end, is probably not being truthful, has spent a lifetime getting one design "right", or will not have much consistency in future designs. My point is that an audio system is a complex system, much like the weather. We can know quite a bit about the "averages", but it takes a very complex model, with a lot of measured inputs, to have a reasonable chance of predicting the weather with any degree of accuracy.
TIA - I am most certainly not any kind of expert, but I did sleep at a Holiday Inn Express (once...)

For starters, if they do sound different then they do not measure exactly the same - period. You may be referring to a single aggregate measure (amps, etc.) when you say that they're the same, but if they sound different then the detailed measurements really are different. All amplifiers vary from the ideal of "a straight wire with gain" - it's in exactly how and where they vary that the perceived differences arise.

For one reasonably detailed explanation you could try this:
http://sound.westhost.com/amp-sound.htm
Post removed 
I would bet that they don't really measure the same on the test bench as you say; if they do, the tests are not comprehensive enough. Specs can't be looked at individually. When people say that two amps measure the same, they often look only at WPC and THD and, for the very advanced, S/N ratio.
I am far from an expert, but from what I have read, measurements aren't always a barometer of sound. As a subscriber to Stereophile, I see that many times the measurements of some high-priced amps are not really up to snuff, yet despite the measurements, the reviewer will swear on a stack of bibles that the amplifier sounded excellent. I think every company has its own unique sound bias and will seek to design it into its products. The quality of parts and even the type of wiring (copper vs silver) will also affect the sound quality. For instance, even with the same measurements, tube amps almost always present a different sound quality than solid state.
We need to bring back Julian Hirsch from the grave to recite all his articles in High Fidelity magazine that covered that subject. Do you remember Julian? He was of the test bench school.
Generally if there is an audible difference then a difference can be measured. Specifications are however measured under conditions that are not typical of when a power amplifier is used to drive a complex load (speaker).....hence the possibility for different sound. In general, good pre-amps should sound exactly the same - if they sound different then one may be suspect or there is a mismatch between rated input and output levels and/or impedance with other connected components. Often a distinguishing feature between a good preamp and an excellent one is better channel separation, immunity to noise (from dirty power) and lower noise floor.

Note that the other way round is not true....often a difference can be measured that is inaudible when listening to music.
Cost. Every amplifier is designed to a price point. Since there's no such thing as a perfect amplifier, each manufacturer strives for the best sound within a budget (assuming that's the design criterion). Everything inside the amplifier imparts distortion - there's no getting around it - so they compromise as best they can. Forget bench numbers (see: tube amps); the sound of an amplifier is mostly based on what the designer hears as he's voicing the amp. And since no two people will necessarily concur on the best way of doing things, there will always be differences - whether it's amplifiers or pizza.
Amps are like ice cream flavors. I agree with most of the posts here, but amps embellish certain frequency ranges to tickle our ears. We like what we like! Most really neutral amps don't sell that well.
I'm thoroughly convinced that we don't know how to measure some things that really count- and that's why amps sound different at similar specs.

What are those "things"? I have no idea.

My bet is that the perception of reproduced music is extremely complex, and we humans are still not as smart as we believe ourselves to be.
We need to bring back Julian Hirsch from the grave to recite all his articles in High Fidelity magazine that covered that subject. Do you remember Julian? He was of the test bench school.

Nah, leave him there. It's threads like this one that demonstrate how few people understand what the test bench tells us and doesn't tell us.
Danlib hit the nail on the head. Two amplifiers can measure nearly identically in all of the usual tests and yet sound significantly different. Why? Because amps are filled with passive parts like resistors and capacitors and wire that all sound different even when rated at the same value.

Anyone who has actually listened to different brands of resistors or capacitors of the exact same value knows this. I believe dielectrics have a noticeable effect on sound, and they are everywhere, including the circuit boards.

Even the solder used can affect sound quality. Recently a popular high end manufacturer changed to lead free solder and all of their gear actually started to sound different (in this case better, thankfully).

Amps all measure differently; the problem is that our measurements are actually poor, even in this day of incredibly sensitive and accurate instrumentation. Why? It's really quite simple, if you know something about the limitations of electronic instrumentation.

What are the instruments we use to measure amps? Signal generators (sine and square wave), oscilloscopes for time domain, and precision audio band spectrum analyzers. A few systems are all digital and have many bits of precision to see noise and harmonic distortion and IM distortion to >140 dB dynamic range.

But is that enough to see it all? I will contend no, not a chance. For measuring time domain signals like square wave response, we typically have only 8-10 bits of resolution, compared to about 20 bits of audible resolution in our hearing. That is woefully inadequate to see what's going on in even one small case, let alone the large number of signals we would want to observe in the time domain.
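
For reference, the idealized rule of thumb relating bits of resolution to dynamic range (roughly 6.02 dB per bit plus 1.76 dB for a perfect quantizer) puts numbers on that gap; the bit counts below are the ones mentioned above:

```python
# Idealized quantization dynamic range: DR ~ 6.02*N + 1.76 dB for N bits.
def dynamic_range_db(bits):
    return 6.02 * bits + 1.76

for label, bits in (("8-bit scope", 8), ("10-bit scope", 10), ("~20 bits of hearing", 20)):
    print(f"{label}: ~{dynamic_range_db(bits):.0f} dB")
# -> about 50 dB, 62 dB, and 122 dB respectively
```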

Spectrum analyzers can see more of what's going on, with greater dynamic range than most people's hearing. With one big caveat: they can only measure a repetitive waveform that is nonvarying over "infinite time", something no musical waveform ever produces. Sure, the Fourier transform can show that waveform in the time domain more accurately than the scopes can, but it only works under non-dynamic conditions.

Granularity of a signal, dithering of a signal, and random events caused by an amp (and those do happen all the time) are "averaged away" by the precision spectrum analyzer, because it zeros in on the repetitive waveform and tends to ignore the non-repetitive pieces of it.

Still, some harmonics and IMD signals are more damaging than others, and we are still guessing how those varying combinations will sound. There are sonic signatures of distortion from minor parts that measure almost perfectly - and how can we possibly hear that, swamped by immense other distortions? I once asked a similar question: how much of the chemical that ruins a fine bottle of wine by making it taste "corked" is required? The answer: 5 parts per billion, or 0.0000005% distortion from that impurity. Other chemicals have much less impact, even if you add some vinegar, or sulphuric acid, or worse.

The only way to start examining the errors of the "perfect sound forever" CD was to have more than 16 bits and 44.1 Ksamples/sec. DVD-audio succeeded in that, by digitally recording dynamic musical waveforms at 24 bits and 192 Ksamples/sec and comparing that to the CD data. They are not perfectly equivalent. The moral of this story: Seeing real time signals out of amps to the extent humans can hear requires a very high sample rate and measurement width (bits resolution) and watching over some time period from a source signal that is not a simple repetitive waveform.

But how do you really compare the differences that do show up? How do you really interpret them? There remains only one way - listen to the results in the context of the whole system. Audio designers who fail to do this are oversimplifying, trusting too much in their limited measurements and ignoring the true nature of those measurements.

I've worked for a leading test and measurement company as a test engineer for 24 years now, and I think I know what I'm talking about on that subject.

Kurt
Consider tube amplifiers. Replacing a tube with one from a different manufacturer, or with a different type, e.g., replacing a 12AX7 with a 12AU7 where permitted, will possibly alter the sound. What about something as simple as damping factor?
Kurt, thanks for your input. A highly reputable Swiss manufacturer of audio amplifiers relies on test bench results. According to this manufacturer, a power amp should measure perfectly; until it does, it is flawed. When you do test bench measurements of this Swiss manufacturer's power amps, you will indeed see excellent (almost textbook perfect) results. That's why they sound extremely good and better than 98% of all power amps.

Chris
These amps measure the same on the test bench, but why do they sound different?

They measure the same? What did you measure?
Kurt,

I like your wine example. In amplifiers, it is TIM, IMD and high order odd harmonics that can make it sound "corked" even if the distortion levels are extremely small - would you agree?

Perhaps there are other "contaminants" that in extremely small doses can affect the sound audibly...what "contaminants" do you look for?
Dazzdax,

That is still just one group of people's opinions. Transistor amps have always "measured better" than tube amps, as defined by standard bench tests devised by engineers. That is not sufficient to prove they are better; it is only a belief that it does.

Now this Swiss amp has heavy competition, for the title of best amp, from amps that measure much worse than perfect. It might well be in the top 2%, but among the amps given the title of best amp is Lamm's medium-power, zero-NFB, high-distortion, low-damping-factor 6C33C triode SET. How could that happen? Because it's still subjective.

I happen to own a battery powered Class T (form of Class D) amp from Red Wine Audio. I used to be devoted to tube amps, but this is better in many ways for a moderate cost amp driving medium sensitive speakers. It's as modern a design as there is for amps, and measures well and sounds excellent. It has less personality than tube amps, yet more transparency than most transistor class A or AB amps and most tube amps as well.

Since there are no perfect sources and speakers, why should it be "correct" to own a perfectly measuring SS amplifier? What if it processes the signal a little, in a way that offers the illusion of richer instrument harmonics that were bleached out in the recording process? This could be considered better, not defective.

Until sources and speakers become perfect, I will never be pursuing the perfect "straight wire with gain" electronics to go with them. I will search for the best complementary electronics to go with them, within my budget. A lot of that comes from my custom-built tube preamp, where the parts and circuitry were all selected for overall sound quality, not bench measurement perfection.

So where is the value in bench measurements for amps? Simple: factory quality control. They are an automated way to see whether the completed product is within expected tolerances, ensuring uniformity of the units going out the door without the more expensive and impractical method of listening tests on every one.

Kurt
Shadorne,

The levels of distortion that can ruin an amp can be so small that measurement is impractical, especially for a guy without very expensive modern test equipment. And as I was trying to say, isolating the distortions in special cases is nearly impossible. In wine tasting, it's trivial to add 5 ppb of the diluted chemical that makes it taste corked. There are many chemicals to worry about in wine, and the levels needed to ruin the wine are all different. The chemicals can be isolated and added individually to find the exact level you need to stay under to be safe. In audio, we don't have those numbers. We can't isolate the contaminants without changing something else along with them. All distortion claims are scientifically invalid as a result.

For example, back in the early 1960s it was declared that THD levels under 2% were inaudible. Then in the 1990s it was stated that THD had better be under 0.1%. The difference was in the content of the average amp's harmonic spectrum - 1960s tube amps versus 1990s transistor amps. And that was just the tip of the iceberg.

I can hear the difference resistors make in an amp. What are the distortion levels caused by resistors? Almost nil, unmeasurable except with the best equipment available, down around -120 dB. That's about 0.0001% distortion. But it's a different kind of distortion. A lot of it is HF ringing from spirally wound (inductive or possibly capacitive) laser-trimmed resistors, and some comes from magnetic nickel or steel construction with its hysteresis distortion.
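
For anyone checking the arithmetic, the dB and percent figures being thrown around here are just two ways of writing the same ratio; a quick conversion sketch (the example values are the ones quoted in this and the previous post):

```python
import math

def db_to_percent(db):
    """Level in dB relative to the fundamental, expressed as a percentage."""
    return 10 ** (db / 20) * 100

def percent_to_db(pct):
    return 20 * math.log10(pct / 100)

print(f"{db_to_percent(-120):.4f} %")   # 0.0001 %  (the resistor figure above)
print(f"{percent_to_db(2.0):.0f} dB")   # -34 dB    (the 1960s '2% THD' threshold)
print(f"{percent_to_db(0.1):.0f} dB")   # -60 dB    (the 1990s '0.1%' figure)
```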

Capacitors have more impact, especially coupling caps. The different dielectrics produce different levels and different types of distortion: dielectric saturation that bends the linearity of the charge/discharge cycle, different dielectric absorption distortions under dynamic time domain conditions, dielectric hysteresis distortion, and frequency dependent ESR and ESL shifts.

The greatest distortion generators in tube amps are the magnetics of the interstage and output transformers, the main problem being their large saturation and hysteresis distortion. Then there are the imperfections down at the microscopic level of the magnetic domains. Some domains don't respond well to small signals, and low-level detail might be obscured at low volume levels. Nickel is better than silicon steel for low-level signals and should be used for anything before the output transformer.

Those kinds of distortions often don't really show up well in repetitive waveform measurements. And if they do, they just ride on top of a bigger and more recognized distortion, or it looks like it's all from one known source.

Again, isolating the small audible distortions that ruin the sound is a near impossibility. But some distortions are small and very annoying in limiting performance, one of the worst offenders being the distortion of different capacitor dielectric materials. Among the higher-level distortions, the seemingly more benign ones are the magnetic transformer distortions.

In transistor amps, the worst offenders are the transistors themselves IMO. Lots of high-order distortion that needs plenty of NFB to try to get rid of. And the typical vertical MOSFET has a large, modulating input capacitance loading, which has been shown to be a big negative for the sound. Luckily we now have lateral MOSFETs that go a long way toward solving that problem; they are somewhat more expensive and harder to find, but are featured heavily in Ayre amps.

And now to the most controversial topic: wire and connectors. Do they have a distortion? If so, can I prove it? The answer is yes. The cell phone companies were the first to find the problem, and the first to measure it, using the most expensive test setups. It turns out that transmitter/receiver stations for cell phones have to have remarkably low distortion in the RF cabling and connectors in order to work. One channel might be transmitting 100 watts out while the adjacent channel is receiving only 10 microwatts on the same cable. That's a problem if there's IMD on the cable, just about any IMD.

So they set out to measure the distortion of the cables, since it appeared it was not good enough. They were right. The distortion measurement is called the third-order intercept: the theoretical output level at which the third-order product would equal the fundamental. It was discovered they needed a +130 dBm TOI to get the job done, and they saw the cabling wasn't reaching it. To fix it, they re-designed the connectors by silver plating them. Then the distortion of the cabling systems went down to an acceptable level. You can buy silver-plated RF connectors now for this problem.
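
As a rough sanity check on those numbers, here is the standard two-tone intercept approximation (IM3 level ~ 3*P - 2*IP3, all in dBm) applied to the 100 watt / 10 microwatt / +130 dBm figures above; the formula strictly assumes two equal-level tones, so treat this as an order-of-magnitude sketch only:

```python
import math

def dbm(power_watts):
    """Power in watts expressed in dBm."""
    return 10 * math.log10(power_watts / 1e-3)

def im3_dbm(per_tone_dbm, ip3_dbm):
    """Classic two-tone estimate of a third-order intermod product level."""
    return 3 * per_tone_dbm - 2 * ip3_dbm

tx  = dbm(100.0)    # +50 dBm transmit channel on the shared cable
rx  = dbm(10e-6)    # -20 dBm adjacent receive channel
ip3 = 130.0         # +130 dBm third-order intercept quoted above

im3 = im3_dbm(tx, ip3)
print(f"IM3 product ~ {im3:.0f} dBm ({rx - im3:.0f} dB below the received signal)")
# -> about -110 dBm, roughly 90 dB under the -20 dBm receive channel
```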

Anyway, it seems some people can hear connector and cabling distortions well. And there's some evidence to back up their claims.

Kurt
Although alluded to before, it seems that the amount of negative feedback is a big issue.

Two amplifiers can easily have the same bandwidth, with one running feedback and one without. The one without will likely sound more relaxed, since it lacks the global feedback that enhances odd-ordered harmonics, which in turn behave as loudness cues. We are not talking about a lot; as Kurt says, hundredths of a percent is all it takes to make the difference.

A further complication is the idea of 'constant voltage' output, which is the same as doubling the output power as the load impedance is cut in half. Some speakers are designed to expect this (B&W 802). Other amplifiers are designed with the idea of "constant power" in mind- that is, power does not change regardless of the load (tube amps are good examples of this). Such amplifiers, sometimes referred to as 'current source' amplifiers, have a higher output impedance, and an entirely different class of speakers exists to accommodate them (Sound Labs and horns, for example).
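
A small idealized sketch of the two behaviours, using purely resistive loads and made-up amplifier figures (a roughly 100 W 'voltage source' amp with 0.05 ohm output impedance versus a hypothetical amp with an 8 ohm output impedance):

```python
def power_into_load(v_open, z_out, z_load):
    """Watts delivered to a resistive load from a source with open-circuit
    voltage v_open (volts) and output impedance z_out (ohms)."""
    v_load = v_open * z_load / (z_out + z_load)
    return v_load ** 2 / z_load

for z_load in (16, 8, 4):
    p_voltage = power_into_load(28.3, 0.05, z_load)   # 'voltage source' behaviour
    p_power   = power_into_load(56.6, 8.0, z_load)    # high output impedance behaviour
    print(f"{z_load:2d} ohms: {p_voltage:5.1f} W vs {p_power:5.1f} W")
# The first column roughly doubles each time the load halves; the second stays
# roughly constant, which is the 'constant power' character described above.
```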

In fact two paradigms of design and measurement exist in audio today:

http://www.atma-sphere.com/papers/paradigm_paper2.html

This is about more than just simply matching components, but that is what you have to do. Normally one paradigm will take over in a field of endeavor but that did not happen in audio because the 'prior art' (tubes) did not go away like they were supposed to- too many people like them.
Another interesting thread with great contributions by everyone.

Is someone able to explain the difference between "current paradigm" and "voltage paradigm" amplifiers?
Amfibius, take a look:

http://www.atma-sphere.com/papers/paradigm_paper2.html

'Current source' is a Voltage paradigm term for amplifiers that have a high enough output impedance that they exhibit (or at least try to exhibit) constant power with respect to load. Usually this refers to a tube amplifier, but not universally.

"Voltage source' is again a Voltage paradigm term for an amplifier that can express a constant voltage regardless of load. Another way to put this is that power is doubled as the load impedance is halved (or halved as the load impedance doubles). Usually this refers to a transistor amplifier, but again, not universally- there are always exceptions because this is not about tube/solid state.
I never look to specs other than output power to make an amplifier purchase, since specs are seldom an indicator of how an amp will sound. Most quality amps are designed by ear, the manufacturer applying different circuit topologies and trying different electronic parts and wiring in critical areas of the circuit to reach a desired sound before production begins. You end up with the designer's opinion of what sounds good.
Phd, one of the distinctions between the Power paradigm and the Voltage paradigm is that under the Voltage paradigm, the specs often have no meaning, as you already know. Under the Power paradigm, the specs correlate with what you hear.
Atmasphere, you are correct, that distinction has been clearly made at the link you provided. Thank you, I found it very interesting and informative!
While I enjoy and value Atmasphere's posts on the subject, I will take issue with the major point in the paper he presented. I don't see that these two paradigms exist at all . . . except in a hypothetical world where there is a simple, binary choice in available loudspeakers: Apogees and Lowthers.

If you look at the symbiotic evolution of amplifier and speaker designs over the past eighty years or so, it's commonly accepted that an increasing abundance of amplifier power enabled loudspeaker designers to trade efficiency for other factors, such as smaller cabinet size and improved linearity. But it has been the loudspeaker designers that have, in turn, been consistently demanding more "current impervious" performance from the amplifiers. This is why the hallowed amplifier designs of the pre-war era were triode designs: yes, for linearity, but just as importantly, for lower output impedance. Even an Altec VOT system and an Altec 604 duplex monitor would have presented very different impedance curves to the amplifier. And in either case, a flat frequency response from a linear amplifier was highly desired.

Even seventy years ago, loudspeaker designers were working with a voltage-source model, not a current-source model. While the reasons for it are my own speculation, they seem pretty obvious. First, high-frequency transducers almost always have a huge efficiency advantage over low-frequency ones. Second, advances in transducer technology are mostly advances in materials (diaphragm materials and suspensions, magnetic materials), and mathematical modeling (horns and lenses). Designing loudspeakers and crossovers to effectively take advantage of what the transducers have to offer is extraordinarily easier, and achieves better results, when working from a voltage-source model.

The presence/absence of multiple impedance taps on amplifiers, for this discussion, is a non sequitur. If one wanted to design a conventional transformer-coupled tube amp that put out 50 watts into 16 ohms, 100 watts into 8 ohms, 200 watts into 4 ohms, etc. from a single output tap, it could be done . . . there would simply be huge tradeoffs in terms of efficiency and performance into a given impedance. Very similar tradeoffs also exist in solid-state amplifier design . . . the difference is one of cost and benefit. If you already have an output transformer, then adding additional taps usually makes sense. If you don't . . . then it's of course a bit harder and costlier.

My point is that there really is no "Current Paradigm". The interface between high-fidelity amplifiers and their respective speaker systems has ALWAYS been based on a voltage model. (The term "high-fidelity" is meant to simplify the discussion by excluding things such as field-coil speakers and 70V distribution systems, not a snub to anybody's amplifier design.) And high-fidelity amplifiers have always been expected to have reasonably "current impervious" operation. What "reasonably" means in absolute terms is a debate that has been around many years longer than solid-state amplifiers . . . but if an amplifier's output is intended for a "4-ohm" load, then I would expect it to be fairly "current impervious" over the range of current that a "nominal 4-ohm" loudspeaker would require, plus some extra for good measure. Most good conventional tube amps achieve this.

I maintain that a high output impedance, for a high-fidelity audio power amplifier, is ALWAYS a liability, period. Now it may be that some of these amplifiers have other performance aspects that outweigh it, and some speakers are tolerant of it (and a few even subjectively improved). But this idea that there's one branch of the speaker-design profession that optimizes their products to work with amplifiers that have high output impedances? I don't buy it. If there is, then exactly what is the output impedance that they're expecting?
Kirkus, How did the loudspeaker designers gain enough leverage to make demands on amplifier designers?
Well, Cyclonicman, legend has it that James Lansing, immediately prior to his untimely death, wrapped a piece of Alnico V in a largish bath towel and "went postal" on the electronics staff at Altec . . .

But seriously, they did it by designing speakers that people wanted to buy, and that were more demanding loads for the amplifier. 40 years ago, virtually all amplifiers had 16-ohm output taps, and today, an amplifier's performance into a 16-ohm load isn't even a footnote. I'm guessing this is because, er, how many modern 16-ohm hi-fi speakers can you think of?

A great example is the Apogee full-range ribbons I alluded to. The two things that people remember about them are that they sounded amazing, and that they blew up amps. I have heard from a few sources about how these loudspeakers influenced Mark Levinson's amplifier designs . . . I'm not so sure that the timeline works out for that to be true, but the Apogees definitely had a huge influence on the current output capability of "flagship" solid-state amps of the 1980s and 1990s.
Hi Kirkus, I'm not the one who has created these paradigms; they simply are what is. And for the record, you would be hard pressed to build a tube amp of the type you describe! Even if you ignore the taps of the output transformer, most tube amps will exhibit the constant-power quality anyway. The taps are there to allow optimized loading on the tubes- it's not the other way around.

You are correct in that the Voltage Paradigm was being developed about 60 years ago- during the 50s and 60s... **almost** 60 years ago. That bit of history probably needs to be in the paper so thanks for pointing that out. I don't like to think that the 1960s are that distant yet :)

FWIW the Apogees and Lowthers are both Power paradigm technology. If you want a better comparison, compare the B&W 802s (needs a 'voltage source' amplifier) to the Lowther (needs constant power).

Apogees are in the Power paradigm, as their impedance curve has very little to do with resonance in a box and so does not exhibit the classic impedance curve of such a device. Since they are a nearly resistive load, zero-feedback tube amps work great with them if they can deal with the impedance (some Apogees are a simple 4 ohm load, others as you know are quite a bit lower, but other than that they are easy to drive)- a set of ZEROs provides the access for that.

Paul Bolin (at the time with TAS) reviewed a set of zero feedback triode amplifiers (and gave them a Golden Ear Award) using the Apogees for his speakers. Prior to that another TAS reviewer ran his 1 ohm Apogee Full Ranges with a zero feedback triode amplifier (which made 100 watts) and gave good marks to it as well. I had the opportunity to hear that setup, and the 100 watts seemed to be plenty of power- they were at once very relaxed, detailed and with plenty of authority on the bottom end. A fabulous speaker!

I've tried to school myself as best I can about this subject, and I appreciate your input- the more this issue gets airtime I think the better for the art.

Hi Atmasphere . . . my main point is that hi-fi speaker designers simply do not consider an amplifier to be anything other than a voltage source, and that they never have. Further, it seems obvious to me that amplifiers have historically been intended to operate as voltage sources. And please believe that I'm not categorically criticizing amplifiers that deviate from this practice, but I believe that a high output impedance, as an intentional, acceptable goal, is a completely modern phenomenon that is unrelated to what all but a very few speaker designs are anticipating.

The impedance at which an amplifier produces maximum power output is, again, a complete non sequitur. When I completed the restoration on the Marantz Model 2s currently in my system, I measured the output impedance at about 0.18 ohms from the 4-ohm taps - for all intents and purposes, a voltage source. This was the only tap I measured, but let's say that the 8-ohm taps have about 0.4 ohm output impedance. I would guess that my "4-ohm" Mezzo Utopias (typical reflex cabinet) would range from about 4-15 ohms. The effect of the speaker impedance on the voltage response of the amplifier would thus be about 0.3 dB from the 4-ohm taps, and about 0.6 dB from the 8-ohm taps . . . very little difference between the two. My point is that even if the load is mismatched and grossly affects the maximum power output, these 1950s-era amplifiers behave overwhelmingly as a voltage source, NOT a power source or a current source - if they're operating below clipping.
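
For anyone who wants to reproduce that arithmetic, here is the voltage-divider calculation; the 0.4 ohm figure for the 8-ohm taps is the assumption stated above, and 4-15 ohms is the guessed impedance swing of the speaker:

```python
import math

def response_db(z_load, z_out):
    """Level at the speaker terminals due to the Zout/Zload voltage divider."""
    return 20 * math.log10(z_load / (z_load + z_out))

for tap, z_out in (("4-ohm tap (measured 0.18 ohm)", 0.18),
                   ("8-ohm tap (assumed 0.4 ohm)", 0.40)):
    swing = response_db(15, z_out) - response_db(4, z_out)
    print(f"{tap}: ~{swing:.2f} dB variation across a 4-15 ohm load")
# -> roughly 0.28 dB and 0.60 dB, i.e. the ~0.3 / ~0.6 dB figures above
```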

If I was to look for evidence that loudspeaker designers viewed an amplifier as a current source, here's what I would expect to find: Filter values and woofer conjugates in crossover networks that are calculated with the expectation of a high source impedance. Parallel resonant networks inside crossovers to dampen the impedance peak(s) from the cabinet/port. Standard models for calculating woofer responses from Thiele/Small parameters, that include a high source impedance. A specification from a speaker manufacturer that reads something like "recommended amp output impedance: 2-6 ohms". If I've been living under a rock, please tell me, but I've NEVER seen any of the above.

I chose the Apogee as an example of voltage-source thinking because I remember it being a very capacitive load, not simply low-impedance; maybe my memory fails me. But it doesn't surprise me that a capacitive speaker could sound nice from a high output impedance SET amp, for a couple of reasons. First, there's nothing like a high output impedance to keep an amplifier within its optimum current range . . . in the same way as a series resistor! Second, ditto for avoiding the stability issues that many amps exhibit into capacitive loads. And third, I could easily see a capacitive load causing a resonant peak in the output transformer that might kinda offset the Ohm's-law HF rolloff. But again, I don't think the Apogee designers were anticipating these conditions.

Anyway, I find this interesting because there are so many "high-end" speakers out there that leave me scratching my head as to why they don't sound good to me at all, and I wonder if this is the way they're "supposed" to sound.
Kirkus, the Acoustic Research AR-1 is a good example of a speaker that was designed with intention to be used by a 'current source' amplifier. They recommended an amplifier with an output impedance of 7 ohms. Sure enough, it actually does sound better with such a thing.

The AR-1 was the first production acoustic suspension loudspeaker.

When speakers were first created, so you didn't have to wear headphones, there was nothing out there that was practical except horns. This was a long time ago- 1910s and 1920s. The only amplifiers around were triode class A zero feedback. They were the only game in town. Around WW2 the idea of negative feedback was developed, and the debate around it at that time was the 'listener fatigue' that often resulted.

The connection to odd-ordered harmonics was not understood at the time, but the effects of them were.

After the war the feedback debate continued. In the meantime, loudspeakers continued to be built that expected a fairly high output impedance out of the amplifier. Many of those speakers (Altec, JBL, Klipsch, EV, Lowther, Quad) are collectable and sought after today.

Feedback began gaining ground in the 1950s, with the main proponents being Marantz, McIntosh, Fisher and Electro-Voice. EV and Fisher in particular were cognizant of some of the underlying issues and often recommended variable current feedback as opposed to voltage feedback. Variable, because it does not work the same way with every speaker, depending on the intention of the speaker designer.

In the 50-60 years since, we are still facing the same issues. An example of how a voltage source does not work with some speakers is a transistor amplifier on an ESL- a Sound Lab, for example. Sound Labs have an impedance curve that is capacitive in nature. When a transistor amplifier with high feedback (a voltage source) is put on a speaker like this, the highs are too pronounced and there is no bass. The speaker has an impedance of over 50 ohms in the bass. Put a tube amp (which tries to make constant power) on this load and all of a sudden the speaker is making bass.

Horns, with their highly reactive nature, are another technology that does not work so well with 'voltage source' amplifiers. Often the back EMF produced by the speaker gets into the feedback loop of the amp, causing excess harmonic generation- certainly not a lot, but enough that horns get the reputation of being harsh and honky. Anyone who is running a 'current source' (zero feedback) amplifier on horns knows this reputation is ill-deserved.

Another sign that the two paradigms exist is amplifier specification. How often have audiophiles experienced the phenomenon of the specs saying nothing about how the amp sounds? In fact, sometimes a negative correlation is perceived (higher distortion on paper --> better sound).

Speaker designers have been designing for 'current source' amplifiers for a long time. If anything, there are more of them now than there were 50 years ago (there are more tube amplifier manufacturers in the US now than there were in 1958...). So this issue is very much with us.

At the crux of the paradigm debate are the rules of human hearing. On the one hand (Voltage Paradigm), the only rules respected are the limits of human hearing (20 Hz-20 kHz) and decibels, primarily resulting in a set of inaudible benchmarks that have little to do with how we hear. OTOH (Power Paradigm), the RHH (Rules of Human Hearing) are the *only* thing that matters, eschewing the bench measurements as having no meaning if they mean nothing to the human ear.

This is at the root of the tube/transistor debate and the objectivist/subjectivist debate. It's not that it does not exist- it **is** that it won't go away quietly, no pun intended... :)
Atmasphere, I owned AR-3s (identical to AR-1 except for mid & tweet) for many years, in fact I have some of the dog-eared original documentation right here . . . the only thing I see about a recommendation for the amplifier is "25 watts minimum per channel". In addition, for the frequency-response graphs, the Y-axis is labelled "OUTPUT IN DB (INPUT 6.3v)". Voltage source. QED.

I'm not familiar with the details on the Sound Labs, but sure, let's look at ESLs . . . how was the Quad II amplifier designed? Similar (low) output impedance to my Marantzes, and I think it's a pretty safe bet that they were originally designed with ESLs in mind.

And I totally lost you on the back-EMF from horns thing. Are you really suggesting that the inertia from, say, even a JBL 375 compression driver (huge diaphragm) could possibly generate any measurable back EMF? And then make it back through a couple of crossovers (N7000 and N500 in the case of the Hartsfield & Paragon) to the amplifier? Ludicrous. Look at those crossover schematics and reverse the math, and it's pretty plain that they assume a constant input-voltage vs. frequency relationship.

I am in absolute agreement with you that there exist a great many bright-sounding solid-state amps with thin-sounding bass - and omigod, one of these on a pair of Klipschorns is seriously painful. And we're probably in agreement that simply raising the output impedance by sticking a resistor in series won't really help one bit. So okay, the sound is still bad because of transistors, feedback, the devil, etc . . . quite possibly. All of those to me are completely separate issues, each of which deserves careful, systematic analysis.

The association of characteristics such as high output impedance, zero loop feedback, DHTs, single-ended output stages, single-driver full-range, L/C phono equalization, etc. etc. with each other is artificial . . . it stems from modern audio credo, not history or engineering. After all, the people who designed the classic audio gear were NOT triode purists, no-feedback believers, horn aficionados, single-ended snobs, or whatever. They were simply using the resources they had to address what they felt were the biggest weaknesses of the audio chain.

We're lucky that so much of what they accomplished is applicable in a modern hi-fi context . . . but I think it's a bit of an insult to their work to assume that their philosophy fits neatly into one side or the other of a 21st-century audiophile belief paradigm.
Hi Kirkus, I have a set of AR-3s myself- I use them for monitors. They are power hungry but they like low feedback amps just fine.

Actually, the idea of putting a resistor in series with a transistor amplifier is a good one. Nelson Pass suggests that in an article he wrote about a year ago. This simulates a high output impedance amplifier quite nicely, and mellows out a lot of horn systems when used with transistors.

If you think about high-efficiency horns, one thing that should be obvious is how much tighter the voice coil gaps are. Take a look next chance you get. Apparently you don't believe it, but yes, the back EMF they produce by their very nature **has** to be higher: they have greater efficiency, so working the other way (as a generator) they will have more output. Any voltage that is not part of the output of the amplifier is something that the amplifier is supposed to correct if it has feedback.

And yes, you are correct, in the old days designers were simply working with what they had. What they had were amplifiers with high output impedance. Amps like that are still around today. Sure you can build a tube amplifier with a lot of feedback, but then again that amplifier will likely sound harsh. This is all about the difference between designing to meet the rules of human hearing as opposed to designing for arbitrary rules that exist only on paper.

Again, look to Nelson Pass- read his articles- as one who began wondering over ten years ago why people would not give up their tube amps. He started building zero feedback transistor amplifiers and given the right (power paradigm) speaker they are some of the best-sounding transistor amps around.

I've heard many Quad systems in my day, from the 57 and 63 on. So long as the amplifier can deal with the low impedance at high frequencies, an amplifier that otherwise puts constant power into the speaker will also be the one that makes it play bass.

In recent years Quad has followed Martin Logan in trying to develop low impedance ESLs so transistor amplifiers will work better with them, but in order to get the speakers to not be too bright, the amplifier driving them is usually tube-based.

It's important to understand that this is not a tube/transistor conversation, and also that in the intervening 50-some-odd years the 'prior art' has continued to advance. So think about a designer that worked with what was available 70 years ago, then think about the raft of modern designers that have looked back at that earlier art to see what there was that might have been lost.

We started making triode zero-feedback amplifiers in the 1970s and 80s, and Cary Audio began in earnest about 1990. Today zero-feedback amps are prolific. What happened? There was an acknowledgment amongst designers that a measurement is not important if you can't hear it, and that if you can hear it, maybe we should find a way to measure it.

As I pointed out in the article, for one sufficiently grounded in a paradigm, anything outside that paradigm is either hearsay or does not exist. So I expect challenge on this issue- it's part of the definition! It also points to some of the fundamental and longest-lasting debates that have existed in audio over the last 20 years.
Okay, so I am a little jealous of your AR-3s. Mine went away during one of the audio-gear purgings that accompanied a cross-country move. I do have fond memories of the way they sounded in a bedroom system running off of a cheap Knight 6-watt tube amp, which uses 6GW8s in P-P and no NFB (with tone controls set flat). But they really came alive when I moved them to the main room and ran them with Mac MC75s . . . it's in this setting that I felt I had an idea how they were "supposed" to sound.

But FWIW, it's interesting that both the ARs and the Macs are gone, but I still have the Knight . . . it's running a pair of B&O CX100s in an office system. When I told this to the man who designed the CX100s, he of course looked at me like I had five heads . . .

Again, it's not that wonderful sound can't be obtained from amps with high output impedances, I just feel that it greatly increases the chances that when paired with loudspeaker X or Y, the sound will be less a "realization" of the loudspeaker's sound, and more of an "interpretation".

An analogy would be a performance of solo Bach . . . there are many shades of grey between a fresh, modern performance and one fraught with tacky rubato. And there is indeed so much room for opinion . . . but to dislike a "deviant" approach (i.e. Glenn Gould, Modern Jazz Quartet) is in my book a fundamentally more defensible position than to dislike a highly competent scholarly approach (i.e. John Holloway). Ah, but what determines what's "deviant"? It's not simply the approach that's less in vogue, it's the performance that deviates more from what is found in the written score.

And I think that our point of fundamental disagreement is this: I feel that in defining the amp/speaker relationship, "the written score" is the voltage at the speaker terminals. And just like Bach, to deviate from "the score" isn't fundamentally bad (I like MJQ but don't like Glenn Gould), it just puts the amplifier on shakier ground.

Nelson Pass is one who has stood on this shakier ground for many years . . . but he manages to stay there because of the fundamental competency of his designs. There exist far more designs that have ventured onto the same shaky ground without a level of design competence to hold them up . . . and those amplifiers sink right through to join the Phase Linear 400s in the landfills, which is where they belong. I also have the impression that many owners of Pass' amplifier designs are willing to choose their speakers to make the amplifiers perform at their best, which is consistent with the traditional view of an amplifier with a high output impedance.

But I ramble. What I'd really like to do is conduct some measurements to determine how much back EMF comes from some 1950s loudspeaker drivers. And I just happen to have some prime specimens lying around waiting for installation - a pair of JBL 375 compression drivers, and four 15" JBL D130s - all just expertly rebuilt. As far as high sensitivity, small magnetic gap designs go, it doesn't get much better than this.

So I'd like your input on the test methodology. The 375s are easy - I'll feed it with a square wave (maybe 2KC) from a very high source impedance, like 600 ohms ;). If there is significant back EMF, it should manifest itself as ringing when viewing the voltage at the speaker terminals on a 'scope. I even have a N7000 and a N500 crossover networks to see their effect when placed in series. Sound good?

The D130s will be a bit harder. I'm thinking that I can set a pair of them face to face, and couple their dust caps together with a piece of memory foam (low time constant). I can then drive one and measure the back EMF from the other. I can flip the driving/driven connections around to roughly calibrate the amount of input voltage that corresponds to a given cone velocity (nulling out the foam coupling), and then calculate the ratio of input voltage to back-EMF voltage in dB. I would do this at the hypothetical port-tuning frequency for a D130 in a reflex cabinet, where the effect should be the most pronounced in a real speaker. I would also use a couple of different loading resistors, to simulate the amplifier output impedance. What do you think?
Kirkus, Your technique for the D130s should do the job. I'm not so sure about the other but why not give it a try?

I had one of those Knight 6 watt amplifiers too. FWIW the AR-3s really like power; I have a zero-feedback Dyna ST-70 that barely has enough power to make them go. They work OK in a smaller room though.

There are a number of tube amplifiers that have held on to their position like Nelson Pass' amplifiers. Western Electric 211 SETs for example- still worth a pile of cash after 6-7 decades!

I think the thing to get about this is that there has been an evolution. In the 1950s and 1960s, it appeared that the Voltage Paradigm was the way to go (certainly it made a good story for selling transistor amplifiers and cheaper speakers), but the evolution has continued, and tube research in particular has continued. Tubes are not capable of the 'constant voltage' ideal- by rights they should not sound so good, but in fact they do. That does suggest that maybe the constant voltage model has some holes. In fact the holes are the rules of human hearing: for the most part tubes adhere more closely to those rules than transistors.

Why did you keep your Knight?
Atma, why do you say tubes obey the rules of human hearing more than ss? I've heard that said a couple of times recently but I'm not sure why that would be. If a SS amp outputs the same waveform as is input how is that not obeying the rules of human hearing? I agree what's done in the recording studio often doesn't obey the natural laws of sound though!

regards, David
Hi Atmasphere . . . I'll mock up the speakers this afternoon and see what happens. It has since occurred to me that I will need, in both cases, to null out the effect of the voice coil inductance(s) on the measurements. I think that by calculating the EMF as a power ratio rather than a voltage ratio, it will remove the inductive kick from the voice coil from the equation.

Also, I don't think I have a suitable piece of foam that would couple the D130s together without introducing a lot of extra mass . . . so I think I'll just tape the edges and have them couple with air pressure. I'll make the measurements at the free air resonance frequency, which should be at the lowest point of modulation of the air pressure between the two cones.

For the 375s, I'm going to start by measuring the difference in power input, and change in input waveform, between having a lens on the driver, and having the throat plugged. That should easily separate the effects of the air loading from the diaphragm mechanical damping.

The fact that I'm using power calculations rather than voltage calculations is interesting per our previous conversation. I'll have to chew on that . . . actually, I might start a new thread for the results.

There are three reasons why I've kept the little Knight KG-240. First, it's really useful for a secondary system - it's very small and compact, doesn't put out too much heat, is reliable, and sounds fairly decent. Second, I have really come to appreciate the engineering behind it - it's definitely a flawed piece, but it was sold for $30 in kit form, and it uses every cent of that in a well-balanced manner to perform as well as it can. Third (and most importantly), it was my father's . . . he bought it at a time when he could afford very little, and soldered it together himself on the kitchen table. He used it for over ten years as the only stereo in the house . . . and it's been used on and off for another three decades. I'd say he got his $30 worth.
Kirkus, sounds like he did!

Since a feedback signal is one of voltage, satisfying the test might be easier than you think. Just place a speaker with a test tone coming out of it about 1 foot in front of the speaker under test and measure the AC voltage that results at the speaker terminals.

Wireless200, Tubes (triodes in particular) are the most linear amplification known to man. There are some semiconductors that are as linear in some portions of their curve, but not overall. Tubes also have a 'space charge' effect, again particularly noticeable with triodes, that prevents immediate saturation at full output. This limits the production of odd-ordered harmonics.

Anyone with an oscilloscope can view the clipping characteristic of any tube amp and see that the clipped waveform has rounded rather than sharp corners- this is a lack of odd-ordered content at clipping.
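
A quick numerical illustration of that point, using a synthetic overdriven sine rather than a measurement of any actual amplifier: a hard, sharp-cornered clip carries far more high-order odd-harmonic energy than a rounded, tube-like clip.

```python
import numpy as np

fs, f0, n = 48000, 1000, 48000            # 1 kHz tone, exactly periodic in the window
t = np.arange(n) / fs
x = 1.5 * np.sin(2 * np.pi * f0 * t)      # drive the stage 1.5x past full scale

hard = np.clip(x, -1.0, 1.0)              # sharp corners
soft = np.tanh(x)                         # rounded, gradual saturation

for name, y in (("hard clip", hard), ("soft clip", soft)):
    spectrum = np.abs(np.fft.rfft(y))     # 1 Hz bins, so harmonics land exactly on bins
    for k in (3, 5, 7, 9):
        level = 20 * np.log10(spectrum[k * f0] / spectrum[f0])
        print(f"{name}: harmonic {k}: {level:6.1f} dB re fundamental")
```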

Due to their linear characteristics, it's possible to build tube amplifiers that employ no negative feedback. Global feedback enhances the loudness cues (5th, 7th and 9th harmonics) that the human ear uses- in effect adding 'harshness'. The addition is slight, but our ear/brain system is such that even hundredths of a percent are detectable. Audiophiles use words like 'hard', 'harsh', 'brittle', 'clinical', 'chalky' and others to describe this effect.

So the trick is to avoid techniques that increase distortion, and to do so while avoiding global feedback. That results in an amplifier that can be very detailed while also being very relaxed.
Post removed 
Tvad, it can be (usually not clipping though), but there are other things that can do that that I would think would be more likely. Resonance excited by volume in the system is where I usually start when looking to kill sibilance. Cartridge setup, driver resonance, odd microphonics and cables are a few of the things that I have found to be more common.

Amplifiers and preamplifiers can be guilty too, so you have to be suspicious of everything.
Post removed 
In my audio experience, the main issue with amps is the unpredictability of how the output characteristics of an amp will affect the acoustical result at the loudspeaker. An effective way to reduce the impact of an amp's sound signature and get consistent and very satisfying results, even with cheap amps, is to let it directly drive the easy load of the driver, removing any crossover components downstream of the amp. The influence of cable and connector impedance will then also shrink to almost inaudible levels. You then have to invest in multiple amps and do the equalizing and crossover upstream, but after having heard the results of doing so, you will probably never want to go back to the insane, frustrating and never-ending quest of finding the "perfect" amp that will blend best into an inherently imperfect system architecture.
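
As a minimal sketch of what "crossover upstream of the amps" can look like, here is a digital 4th-order Linkwitz-Riley split feeding two amplifier channels; the 2.2 kHz crossover point, 48 kHz sample rate, and use of scipy are arbitrary choices for illustration.

```python
import numpy as np
from scipy.signal import butter, sosfilt

fs = 48000          # sample rate, Hz
fc = 2200           # example crossover frequency, Hz

# A 4th-order Linkwitz-Riley band is two cascaded 2nd-order Butterworth sections.
lp = butter(2, fc, btype="low", fs=fs, output="sos")
hp = butter(2, fc, btype="high", fs=fs, output="sos")

def split(signal):
    low  = sosfilt(lp, sosfilt(lp, signal))    # feeds the woofer's own amplifier
    high = sosfilt(hp, sosfilt(hp, signal))    # feeds the tweeter's own amplifier
    return low, high

# Quick demo on noise standing in for program material.
x = np.random.randn(fs)
low, high = split(x)
print("band RMS:", float(np.sqrt(np.mean(low**2))), float(np.sqrt(np.mean(high**2))))
# The LR4 pair sums to an all-pass response (flat magnitude, rotated phase),
# which is one reason it is a common choice for active crossovers.
```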
/patrick
@path73 John Otvos did that with his Waveform speaker. He used a custom-made three-way active x-over made by Bryston and three Bryston stereo amps (one per driver). No internal passive x-overs! Excellent SQ! And this was 30+ years ago! Now apparently unknown by the younger audiophiles! Remember this name: WAVEFORM!
Most electronics (amps, preamps, DACs) have been essentially sounding alike for the last 20+ years! Contrary to what the "golden ear" crowd claims! The marketplace dictates good sound and weeds out poor designs (except for some tube gear!). You want a better "sound"? Well then, buy a BETTER pair of speakers!
The speaker is THE overwhelming influence on SQ - NOT the amp, preamp, DAC, interconnect, speaker wire, fuse ...
@atmasphere  Paul Klipsch opined that what the World needs is a good 5 watt amp! Along with his K-Horns, of course! I believe he favored the Brook 2A3 PP amps! 
I have some SE amps: 2A3, 45, 6BG6. They do sound nice in use with my Heresys!
@roberjerman,
The first 5 watts or so is what most amps don't get right. With many push-pull amps, the distortion below 5 watts rises as power is decreased! Not all amps do this though- a lot depends on the topology of the amplifier.