First, I wouldn't rely on an SPL reading with the meter set to "slow." The necessary power will typically depend on the amplitude of very brief musical peaks, which might even be too brief for the meter to fully capture when set to "fast."
Also, a rough rule of thumb is that 350 Hz is the frequency above and below which music often requires roughly equal amounts of power. And while I recall from your past posts that you will be using an electronic crossover ahead of the power amps, so that most bass content will be kept out of the 2A3 amp, depending on the crossover point you choose the midrange driver may still often have to provide SPLs not a great deal lower than the total contribution from all of the drivers on those brief peaks. Perhaps just 3 dB or so lower, which in power terms is a factor of 2.
See my post here
for a description of how to approximately calculate maximum SPL at a given listening distance, as a function of amplifier power and speaker efficiency. Keep in mind, though, that this methodology neglects room effects, and also neglects thermal or other forms of compression that may occur in the driver at high volumes. It also assumes, of course, that the efficiency or sensitivity specification of the driver is accurate.
Based on that methodology and on those assumptions, I calculate that the 2 watt amplifier and two (left and right) 94 db/W drivers will be able to produce a maximum volume of around 90 db at a 10 foot listening distance, roughly corresponding to a total SPL produced by all drivers in the area of perhaps 93 db or so. That probably figures to be sufficient for a considerable majority of recordings, but I would not feel comfortable that it would be sufficient for some recordings having particularly wide dynamic range (i.e., large differences in volume between the loudest notes and the softest notes). And of course the lower the crossover point you are intending to use between the midrange drivers and the low frequency drivers, the greater that concern would be.
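The arithmetic behind that estimate can be sketched in a few lines of Python (a rough model only; as noted, it neglects room effects and compression, and assumes the sensitivity spec is accurate):

```python
import math

def max_spl(sens_db_1w_1m, amp_watts, n_speakers, distance_m):
    """Rough max-SPL estimate: sensitivity, plus dB of amp power above
    1 watt, plus ~3 dB per doubling of speakers, minus inverse-square
    loss relative to the 1 meter reference distance."""
    power_gain = 10 * math.log10(amp_watts)       # dB above 1 W
    speaker_gain = 10 * math.log10(n_speakers)    # +3 dB for two speakers
    distance_loss = 20 * math.log10(distance_m)   # re: 1 meter
    return sens_db_1w_1m + power_gain + speaker_gain - distance_loss

# 2 watts into two 94 dB/W speakers at 10 feet (~3.05 m):
print(round(max_spl(94, 2, 2, 10 * 0.3048), 1))   # -> 90.3
```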
Finally, keep in mind a point Ralph/Atmasphere has stated many times, that SET amps do not sound their best when asked to provide more than just a small fraction (perhaps 20 or 25%) of their rated power capability.
Good luck. Best regards,
As usual, great insightful answer!
I do keep in mind Ralph's comments about the 25% or so. That was an insightful comment as well. With that in mind I will run the numbers assuming 50% of the amp's rating as maximum working power, thinking that at maximum working power I would accept somewhat higher distortion than ideal while still staying well below the amp's rated maximum. So for a 2A3 amp rated at 4W + 4W, I will use 2W + 2W.
Running the math from the post you linked to:
1) Compute how many dB greater than 1 watt the amp's specified power rating is, based on the relation dB = 10log(P1/P2), where "log" is the base-10 logarithm.
In this case 10log(2 watts/1 watt) = 3 dB.
2)The driver is rated at 94 db/1 watt/1 meter. Therefore 3 watts will produce 94+3= 97 db at 1 meter.
3)Add 3 db for the second speaker.
97 + 3 = 100 db at 1 meter.
The listening position is indeed centered between the speakers. Maybe 3 dB gain is conservative in this case?
4) Calculate the reduction in SPL at distances greater than 1 meter as 20log(1 meter/distance in meters). In my case the distance is 2.4 m: 20log(1/2.4) = approximately -8 dB. 100 - 8 = 92 dB.
The room is made of brick, 5m x 5m x 2.4m high. The speakers are placed 1m from the front wall and 1.4m from the side walls. Is it safe to assume 3dB room gain? That would put me to 95dB.
But then, what does this mean? Is this 95 dB peak or RMS?
BTW, the crossover to the bass will be in the 350 to 400Hz region.
Thanks a lot!
Thanks for the nice words, Lewinski. I agree with all of your comments, aside from a minor typo in no. 2 (you meant to say "2 watts" instead of "3 watts").
Re "is this 95 dB peak or RMS," that would be "peak" in the sense of "maximum," if that makes sense.
The 3 dB added to reflect two speakers is indeed conservative, given your centered listening position, and room effects will help also, with 3 dB perhaps being a conservative assumption as well.
So as I indicated in my previous post I suspect you would do well with most recordings, but not necessarily with all recordings. For example, I have in my collection a goodly number of classical symphonic recordings on audiophile-oriented labels such as Telarc, Sheffield, Reference Recordings, etc., that were subjected to minimal or no dynamic compression when they were engineered and mastered. When those recordings are played at average levels of perhaps 75 db at the listening position, some of them will reach brief peaks in the area of 100 to 105 db (measured at the listening position, with a Radio Shack digital SPL meter set to "fast" and C-weighting).
On the other hand, though, as you've probably seen in past threads here some members report surprisingly good results using low power SETs with speakers that are considerably less efficient than 94 dB. But FWIW my own bias is that I don't want to be marginal when it comes to power.
So the bottom line would seem to be that it comes down to an individual judgment call, with the most significant variable probably being the kinds of recordings that are listened to. Re-doing your SPL measurements with the meter set to "fast," and using recordings you may have which have particularly wide dynamic range, would probably be helpful in making that call.
Last night I only had a chance for a brief listen, and set the Radio Shack meter to fast and C-weighting. When measuring 80 dB in "slow" I would get about 86 dB peaks in "fast." This was on a 30-year-old rock recording that I believe predates the era of pervasive compression. I need to do more listening and measuring, but my current guess is that my albums will generally have dynamic peaks of maybe 10 to 15 dB. Can't be proud of that, but I guess it is what it is :-)
If I re-run the numbers above using 4.5dB gain for the second driver and 4.5dB room gain, for 2W and 94 dB/W I would get 98dB SPL at the listening position. Borderline, it seems.
If the driver was instead 100 dB/W then I would get 104dB, which seems more than enough. Even a 45 amp, at 1W (2Wpc rating), would deliver 101dB. Of course, such midranges are not easy to come across.
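Those scenarios can be sketched as follows (the +4.5 dB allowances for the second speaker and for room gain are my assumptions from above, not measured values):

```python
import math

def spl_at_seat(sens_db_1w, amp_watts, distance_m,
                second_spkr_db=4.5, room_gain_db=4.5):
    """SPL estimate at the listening position, using the less
    conservative +4.5 dB allowances for the second speaker and for
    room gain (assumptions stated above)."""
    return (sens_db_1w + 10 * math.log10(amp_watts)
            + second_spkr_db + room_gain_db
            - 20 * math.log10(distance_m))

print(round(spl_at_seat(94, 2, 2.4)))    # 2A3 at 2 W, 94 dB/W driver -> 98
print(round(spl_at_seat(100, 2, 2.4)))   # same amp, 100 dB/W driver -> 104
print(round(spl_at_seat(100, 1, 2.4)))   # 45 amp at 1 W -> 101
```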
I need to go back and pick my poison.
Thanks for the very helpful input!
Follow-up question: how would the above change if I doubled the driver's impedance? Say the above calculations were for an 8-ohm-rated driver. If that manufacturer had the same driver in 16 ohm version, then current would be halved for a given SPL, right?
So if the 94dB/W, 8 ohm driver delivers 98dB SPL with 2W, would the 16 ohm version of the same driver deliver 101dB SPL with the same 2W?
There are a number of variables and unknowns (to me, at least) that enter into your question. And while I therefore don't know what the answer would be I suspect that chances are it would be significantly different than 101 dB (or 100 dB, which you may have meant, that being 94 + 3 + 3).
First, without a specific indication from the manufacturer I wouldn't assume that the per watt efficiency of the doubled impedance driver goes up by 3 dB. Also, I wouldn't assume that the amp has the same power rating into 16 ohms as into 8 ohms, especially if it does not provide a 16 ohm output tap. And if it does not provide a 16 ohm tap, but only provides say an 8 ohm tap, I would not necessarily assume that it can perform at its sonic best when working into 16 ohms.
I did mean 101 dB, but that doesn't mean I had it right :-)
My rationale: if the given amp driving the 8 ohm driver, which is 94 dB/W, delivers 98 dB SPL, then when driving a 16 ohm version the amp will need to deliver half the current and hence twice as much voltage, adding 3 dB to the 98 dB.
As far as the amp goes, you are right I'm making lots of assumptions. I thought amps generally were challenged doubling Wattage as load impedance halved, but not the other way around. Not sure either if SET are the same in this regard.
Thanks for helping me think through this!
In this case it was me who wasn't quite right, about the 98 and 101 dB. When I wrote my last response I didn't re-read your post which preceded the one I was responding to, so I had forgotten where the 98 dB (at the listening position) had come from.
In any event, as you realize, decreasing load impedance will increasingly challenge an amplifier with respect to the correspondingly increased demand for current (everything else being equal), and also thermally in the case of non-class A designs (SETs are class A), due to the temperature rise caused by the increased current passing through their output circuits.
However, whether the amp has no output transformer, or (as I assume is the case for any 2A3 amp) has one and is used from a given output tap, if the load impedance rises to high enough values output power capability will be limited by the amp's voltage swing capability (i.e., the maximum amount of voltage it can put out).
With many solid state amps, that will probably cause the maximum amount of power that can be delivered into 16 ohms to be not a great deal more than 1/2 of what can be delivered into 8 ohms, corresponding to there not being a great deal of voltage headroom relative to the voltage capability required for an 8 ohm load. Some amps that are rated to have several dB or more of dynamic headroom, though, would probably do better than that. Although many times large amounts of dynamic headroom may simply reflect that the amp is not robust enough to sustain high output currents continuously.
With a tube amp operated from its 8 ohm tap, as the load impedance increases significantly above 8 ohms max power capability may initially increase, but will eventually also reach a point where voltage swing capability, and perhaps also increased distortion, will limit its output capability. Where that point occurs will depend on the design of the specific amplifier.
If the amp provides a 16 ohm tap, though, I would generally expect power capability from that tap into 16 ohms to be in the same ballpark as power capability into 8 ohms from the 8 ohm tap.
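A rough illustration of that voltage-swing limit, with hypothetical numbers (this assumes a purely voltage-limited amp, so power into a load follows P = V²/R):

```python
import math

def max_power_into(load_ohms, rated_watts_8ohm, headroom_db=0.0):
    """If the amp is voltage-swing limited, its maximum RMS output
    voltage is set by its 8 ohm rating (plus any dynamic headroom,
    expressed in dB), and power into other loads is P = V^2 / R."""
    v_max = math.sqrt(rated_watts_8ohm * 8) * 10 ** (headroom_db / 20)
    return v_max ** 2 / load_ohms

# A hypothetical 100 W (into 8 ohms) amp with no voltage headroom
# can deliver only half its rated power into 16 ohms:
print(round(max_power_into(16, 100), 1))    # -> 50.0
# With 3 dB of dynamic headroom it can briefly approach full power:
print(round(max_power_into(16, 100, 3)))    # -> 100
```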
Something that is missed from this conversation:
The driver is measured in sensitivity, not efficiency. As a result the impedance must be taken into account; in this case it results in the efficiency being not quite so high. You can't interchange the two specs unless the impedance of the driver is 8 ohms. So this is going to make the situation worse.
Second, adding more drivers will not change the efficiency at all, although it will affect sensitivity adversely if the drivers are in series. However, this means little to a tube amp with no feedback!!
By putting the two drivers in series all that will happen is 1/2 the power will be dissipated by each driver, resulting in exactly the same output as if one driver were to be used, assuming the same amount of power in both cases.
Based on what I've seen so far, the 2A3 amp will likely be OK for the tweeter, but for midrange you will either need a more powerful amp or a more efficient driver. If the driver were about 105 dB you might do OK, as that would be about the same as if the amp had 10x the power. However, if you insist on an SET for that, well, there just isn't one that can do that sort of power and not go over 20%, so this problem will need some rethinking IMO.
Thanks, Ralph. I don't think, though, that Lewinski was asking about adding more drivers or putting drivers in series. As I read it he was asking about the possibility of choosing a single 16 ohm driver instead of an ostensibly similar 8 ohm driver, and whether doing so would be beneficial with respect to maximum volume capability.
Hi Al- got it. I really don't think so, although it would help the amplifier to have lower distortion. He would still need either greater power or a more efficient driver.
Ralph, sorry I missed your post. And thanks for chiming in!
Al is right: I was considering using a single 16 ohm driver.
Maybe using specific examples helps. The original 94dB/2.83V driver was a B&W FST. PHL Audio makes high sensitivity drivers, and I was considering some of their 6.5" and 8" midranges as well. For example 2520 is an 8" 4 ohm driver spec'd as 100dB/2.83V and the 2530 is the same but 16 ohm.
Also as example, I'm looking into Yamamoto A-08S (45 SET) and A-011 (2A3). These only have one set of speaker connectors, though. I'm looking into what kind of impedance they were designed for, but so far haven't discovered it.
BTW, what's the impact you would expect if I ran one channel on the 5 ohm tweeter and the other channel on a 16 ohm midrange? Would that tax the amp in any way?
It's likely that if the amp has only one output and no taps, the 5 ohm load will result in less output than the 16 ohm load.
If you use drivers that are 100 db 1 watt/1 meter (PHY?), then you have a chance with the 2A3 amp but I have my doubts about the type 45 amp- realistically they only make about 0.75 watts.
When you say "if the amp has only one output and no taps," what are you referring to by taps? I initially thought you meant speaker output connectors, but maybe not.
PHL Audio is a French manufacturer, well regarded in EU.
PHY is a French manufacturer too...
'Taps' are the outputs of an output transformer. A tap is a connection into the winding of the transformer that allows for a different impedance. Usually a tube amp has outputs for 4 and 8 ohms, possibly also 16. If there is only one output, it's probably for 8 ohms.
The 1040 or 1050 seem the best bet in 6.5" drivers; the 2520 or 2530 in the 8" drivers. Now I suspect that the specs are not accurate: they may be efficiency specs rather than sensitivity as claimed. The reason is that the drivers I have mentioned here have the same sensitivity rating while having two different impedances, and the sensitivity rating does depend on impedance somewhat.
What I am getting at is that if these driver selections represent the same driver in two different impedances, the 16 ohm unit should have a sensitivity rating 3 dB lower. They are claiming that the 16 ohm versions are 100 dB; if that is really true, the efficiency would be 103 dB.
This is because efficiency is specified at 1 watt/1 meter regardless of the impedance, while sensitivity is specified at 2.83 V/1 meter. 2.83 volts into 8 ohms is 1 watt; into 16 ohms it's only 1/2 watt, a 3 dB difference. You might want to contact them and see if they can shed some light on this.
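That conversion can be sketched as follows (assuming the 2.83 V spec is accurate and the nominal impedance is the relevant value):

```python
import math

def sens_2v83_to_eff_1w(sens_db_2v83, impedance_ohms):
    """Convert a 2.83 V/1 m sensitivity spec to a 1 W/1 m efficiency
    figure. 2.83 V into Z ohms dissipates about 8/Z watts, so adjust
    by the dB difference between that and 1 watt."""
    watts_at_2v83 = 2.83 ** 2 / impedance_ohms     # ~8/Z watts
    return sens_db_2v83 - 10 * math.log10(watts_at_2v83)

print(round(sens_2v83_to_eff_1w(100, 8), 1))    # 8 ohms:  -> 100.0 dB/W
print(round(sens_2v83_to_eff_1w(100, 16), 1))   # 16 ohms: -> 103.0 dB/W
print(round(sens_2v83_to_eff_1w(100, 4), 1))    # 4 ohms:  -> 97.0 dB/W
```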
At any rate if the drivers are 100 db then you have a good chance of using them as midrange drivers with your 2A3 amp.
Thank you Ralph.
Let's assume for now the amp only has 8ohm taps (I've asked the manufacturer), and let's assume it's a stereo SET amp and one channel will be used to drive a 5 ohm tweeter (it measures 5 ohm flat across the usable bandwidth) and the other channel will be used to drive one of these midranges. Let's say the 16 ohm driver. Both drivers directly connected to the amp, with the crossover upstream from the amp.
Would you expect any downside in terms of sound performance or expected life of the amp by running different impedance loads on each channel?
In the case of the 5 ohm driver, the amp's output will be lower in power and higher in distortion. Since it's a tweeter, this might not be much of an issue, as little power is needed for a tweeter.
The 16 ohm load will also result in reduced power and more distortion, but the loss in power won't be as much as in the 5 ohm case. Also, the distortion will come from the output transformer and not so much from the power tube; it's the other way around in the 5 ohm case.
Since efficiency and power are a big deal in this case, I would go with the 8 ohm version of the driver (assumed to be 100 dB/1 watt/1 meter) if you have a choice. This will keep distortion down and power up.
The life of the power tubes is likely not going to be much different in either case.