Dual Differential / Balanced?


Hey all, I’ve got that itch to upgrade power amps, and I was wondering how valid the dual differential (aka "balanced") monoblock or dual mono design is in terms of increasing fidelity compared to a conventional single-ended (SE) amp. Note that my preamp is also fully balanced.

How much noise is avoided by using a fully balanced system?

Right now I use two Haflers horizontally biamping NHT 3.3s, connected with Mogami Gold XLR:
a P4000 (200wpc) on the mids/highs, and a P7000 (350wpc) on the lows.

From what I’ve read, it only matters if the preamp and power amp are both truly balanced.

I have a nice Integra Research RDC 7.1 fully balanced pre/pro; it was a collab with BAT. I would go for the matching RDA "BAT" amp, but it’s pretty much unobtainium.

So far I’ve looked at the Classe CA-200/201, older Thresholds, and older Krell KSAs as fully balanced monoblock / dual mono stereo candidates.

I was also told to look at ATI amps; they look very impressive, but expensive.

I’m looking to spend $1500-2500, preferably on used products. I don’t have an issue with SE amps; I just want to exploit the fact that my pre is fully balanced, and perhaps get better sound. I’d welcome any recommendations for awesome dual differential power amps. The NHT 3.3s are power hungry, so at least 150wpc, Class A/AB.

I’ve also come across the Emotiva XPA-1 monoblock. I can get a good deal on one of them, and I wonder if it’s worth picking it up and praying for a lone one to come up on the classifieds or eBay. Note this is the older model in the silver chassis: 500wpc into 8 ohms / 1000wpc into 4 ohms.

For context: prior to realizing that I should use a fully balanced system, I was looking at Brystons and McCormack amps. Thanks!
nyhifihead
There appears to be quite a bit of overlap in this discussion between the role of balanced equipment interconnections, and circuit topologies that use differential signal paths. This is actually quite understandable -- when one observes many of the design practices in contemporary "balanced" high-end audio gear, it seems that a large percentage of the people who design it are confused about the difference.  But "balanced" interconnects and "balanced" circuit topology are SEPARATE subjects, as they have DIFFERENT reasons for existence.  This post deals with the former; that is, the use of balanced interconnection between equipment.
 
To clarify the basics . . . with a "balanced connection" (one that usually has an XLR connector in the high-end audio world), what we're talking about is a connection that has two signal conductors, each of which has the same IMPEDANCE to ground; thus the impedance is "balanced" between them.  The fact that there are two signal conductors allows two modes of signal to coexist: the mode that pertains to both of the conductors together with respect to ground ("common-mode"), and the mode that pertains to the voltage difference between the conductors ("differential-mode").
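To make those two modes concrete, here's a minimal Python sketch; the voltages are invented purely for illustration, and the symmetric drive is just one possibility (as discussed further below, signal-voltage symmetry is NOT what makes a line balanced):

```python
# Voltages on the two signal conductors (XLR pins 2 and 3, measured to
# ground).  Values are arbitrary, chosen only to illustrate the modes.
v_pin2 = 0.5 + 0.05    # half the signal, plus 50 mV of injected noise
v_pin3 = -0.5 + 0.05   # the inverted half, plus the SAME 50 mV of noise

v_common = (v_pin2 + v_pin3) / 2   # common-mode: what both legs share
v_diff   = v_pin2 - v_pin3         # differential-mode: the audio signal

print(f"common-mode (noise):   {v_common:.3f} V")   # -> 0.050 V
print(f"differential (signal): {v_diff:.3f} V")     # -> 1.000 V
```

A receiver that responds only to v_diff sees the full signal, while the 50 mV of noise common to both legs simply drops out.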

The overwhelming source of the noise we're trying to reject comes from the flow of AC power leakage currents when active electronics are connected together (i.e. preamp to power-amp). With very few exceptions (i.e. Krell CAST), an analog audio signal is defined as the VOLTAGE at the equipment's output.  Of course in the real world, some signal current flows as a function of this signal voltage across the receiving equipment's input impedance and the connecting cable's reactance.  But put simply, in the world of audio interconnection . . . when we say "signal", we're talking about a voltage.

The shortcoming of unbalanced interconnection is primarily the resistance of the shield, as it functions to connect equipment grounds together.  As noise CURRENT flows across the shield resistance, a corresponding noise VOLTAGE appears at the ground of the receiving end.  Very little of this noise current flows through the signal conductor because the signal input's impedance is much higher, and this difference in impedance (hence the term "unbalanced") means the noise current manifests a noise voltage on top of the signal voltage.  In a balanced system, the same noise voltage appears as a result of the shield resistance, but the idea is that it causes identical noise voltages to appear on two signal conductors rather than one, and the signal can be defined as the voltage BETWEEN the conductors rather than the voltage between either conductor and the shield . . . and thus the receiving equipment can tell the difference between the signal and the noise.

But as stated above, in the real world any voltage produced at an input must also result in some current flow, as a result of its input impedance, and as such the balanced connection must have identical impedances between both of its signal conductors and ground for the noise-rejection scheme to work.  Otherwise, a different amount of noise current will flow through one signal line than through the other, and the noise voltage will appear as a voltage between the signal conductors, just like the signal.  Put another way . . . the degree to which the impedance is unbalanced is the degree to which it starts to behave like an unbalanced interconnect.
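To put rough numbers on that, here's a hedged back-of-the-envelope sketch; it models each leg as a voltage divider formed by that leg's source impedance against the input's common-mode impedance to ground, and all the component values are my own assumptions, not measurements of any particular product:

```python
# Common-mode noise arrives on both legs, and each leg divides it down
# through its source impedance (Rs) against the input's common-mode
# impedance to ground (Zcm).  If the two dividers aren't identical,
# the leftover difference appears as differential "signal".
v_noise = 0.100          # 100 mV of common-mode noise on both legs
z_cm    = 20_000.0       # input common-mode impedance per leg (ohms)
rs_a    = 600.0          # source impedance, leg A (ohms)
rs_b    = 630.0          # source impedance, leg B: 5% higher (ohms)

v_leg_a = v_noise * z_cm / (z_cm + rs_a)   # noise surviving on leg A
v_leg_b = v_noise * z_cm / (z_cm + rs_b)   # noise surviving on leg B

v_diff_noise = abs(v_leg_a - v_leg_b)      # converted to differential
print(f"differential noise: {v_diff_noise * 1e6:.0f} uV")   # -> ~141 uV
```

With these assumed values, about 141 uV of the 100 mV of common-mode noise leaks into the differential signal -- roughly 57 dB of rejection -- and the leakage grows directly with the mismatch between the legs.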

For any balanced interconnect, we can thus predict the exact degree of this behavior from three impedances (common-mode, differential-mode, and differential-mode-imbalance) for each of the source electronics, destination electronics, and interconnecting cable.  The best explanation of this I've found is in Bill Whitlock's paper "Answers to Common Questions about Audio Transformers", available as AN-002 here: http://www.jensen-transformers.com/application-notes/.  There are two points from it I'd like to reinforce:
1. The balance of signal voltage between the conductors, with respect to ground, DOES NOT MATTER for the noise-rejecting capabilities of a balanced line.  It's the balance of the impedances that's critical.
2. The sensitivity of a system to impedance imbalances is a function of the ratio of the differential-mode (signal) impedance to the common-mode (noise) impedance.  Thus, a balanced input can be made less sensitive to impedance imbalances by increasing the common-mode impedance, or reducing the differential-mode impedance.
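For the numerically inclined, the second point reduces (under the same simple divider model sketched above, and assuming the common-mode impedance is much larger than the source impedance) to approximately:

rejection (dB) ≈ 20 * log10(Zcm / ΔRs)

where Zcm is the input's common-mode impedance and ΔRs is the ohmic mismatch between the two legs' source impedances. Working that through with assumed values: a 30-ohm mismatch (5% of a 600-ohm source) into a 20K common-mode impedance gives about 20 * log10(20000 / 30) ≈ 56 dB of rejection, while the same mismatch into a 50-megohm transformer or instrumentation-amplifier input gives about 124 dB. Every tenfold increase in common-mode impedance buys roughly another 20 dB.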

A **well designed** balanced interface may be advantageous regardless of whether the internal signal paths of the connected components are balanced. And a **well designed** balanced internal signal path within a component may be advantageous regardless of whether the internal signal paths of the other components in the chain are balanced.
The main reason for my diatribe above is that I think many audiophiles would like to have some clarity on what constitutes a "**well designed** balanced interface" . . . so I'll throw my thoughts out there for consideration.

First, an input stage should at least "work well with most stuff people will hook up to it" . . . or better yet, "allow the preceding stuff to work at its best".  For consumer high-end audio, I think this means a signal input impedance of at least 10K through the entire audioband, and for a balanced input, we need to maintain decent noise rejection (maybe min. -40dB) with a source impedance imbalance of at least 5% . . . as this parameter is usually determined by four 1% resistors working in series, plus some cable and connector imbalance.
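Purely as an illustration of that criterion -- with the 600-ohm source impedance and the candidate common-mode input impedances being assumptions of mine, not descriptions of real products -- the same divider model can check whether a hypothetical balanced input clears the bar:

```python
import math

def rejection_db(z_cm, rs, imbalance):
    """Common-mode rejection (dB) of the simple two-divider model:
    each leg is Rs against the input's common-mode impedance Zcm,
    with `imbalance` as the fractional mismatch in Rs between legs."""
    delta  = rs * imbalance
    gain_a = z_cm / (z_cm + rs)
    gain_b = z_cm / (z_cm + rs + delta)
    return 20 * math.log10(1.0 / abs(gain_a - gain_b))

RS, IMBALANCE, TARGET = 600.0, 0.05, 40.0   # assumed source and criterion

for z_cm in (1e3, 10e3, 48e3):              # hypothetical input stages
    r = rejection_db(z_cm, RS, IMBALANCE)
    verdict = "meets" if r >= TARGET else "fails"
    print(f"Zcm = {z_cm:>6.0f} ohms: {r:4.1f} dB -> {verdict} the 40 dB test")
```

With these assumptions, a 1K common-mode impedance falls just short (~39 dB), while 10K (~51 dB) and 48K (~64 dB) pass comfortably.  Note that the model's Zcm is the common-mode impedance, which is distinct from the 10K signal (differential) input impedance suggested above.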

Second, I think that an input stage must "allow the rest of the unit to work at its best under all conditions".  For a balanced input, this means that the input stage shouldn't rely on the rest of the unit (or the equipment that follows) to continue with the task of rejecting the common-mode noise that appears at the input jack.  And if the rest of the unit requires matching differential signal voltages to work properly, then the input stage should be able to provide this regardless of the presence/absence of noise at the input, or whether or not the signal voltage at the input jack is balanced with respect to ground.

It's this second requirement for which an alarming amount of high-end audio gear achieves an epic fail . . . and it's actually much MORE of a problem with equipment that advertises itself as being "fully balanced", "differential circuit", or whatever.  I can support Ralph's specific observation on some of ARC's gear, and the fact that they're not alone . . . this is the situation for many, many very well-respected and premium-priced products, both tube and solid-state.

I can only speculate as to why this is the case . . . perhaps many designers become enamored with differential circuit topologies for other reasons, and these circuits require a pair of differential voltages to work.  They then feel that simply wiring the inputs to pins 2 and 3 of an XLR connector has a certain simplicity and elegance about it, but don't fully consider the real-world requirements for a balanced input.  Or they see that other manufacturers have "balanced" designs, and respond by simply stuffing two of everything into the box and assuming that this constitutes an "upgrade".  Or they become smitten with the way their schematics look when they're arranged in a symmetrical fashion about the horizontal axis . . .

Whatever the reason, it's good advice for purchasers of "fully balanced" equipment to do some investigation as to its specific input requirements, and kudos to Al for his time spent in research and assistance on Audiogon forums to this end.
My thanks to you also, Kirk.  Always great to see you posting here, and for us to have the benefit of your invariably brilliant insights.

Speaking of Bill Whitlock, since the interface-related noise that is being discussed may in many cases be caused or contributed to by ground loop effects, you'll probably find pages 31 through 35 of this paper to be of interest.  To whet your interest, its introduction states that "this finally explains what drives 99% of all ground loops!"

This was called to our attention a while back, btw, by member Jea48 (Jim), who as you may be aware is our resident genius when it comes to electrician-type matters.  I'd welcome any comments you may have on what Mr. Whitlock has to say on those pages.

Best regards,
-- Al
  
There are separate-case monoblocks, and there are dual mono two-channel designs that share a case -- but to be considered dual mono, or "true" dual mono, shouldn't it have two power supplies per channel: one for each channel's preamplifier section, and one for each channel's power amplifier section? I frame that as a question because I'm not entirely sure what the rules are, or whether I'm missing some. Who makes these terms up, anyway -- either it's "dual mono" or it is not? I know of some two-channel power amplifiers that are labelled as dual mono by the manufacturer, yet they have but one power transformer.