Hi-end should be about as few compromises as one's budget will allow.
It's a shame (or a conspiracy) that hi-end mags do not educate us on the basics, such as unbalanced circuit designs vs differentially balanced designs, and RCA connectors/connections vs XLR connectors/connections, and their relative impact on music playback. Why do I mention "conspiracy"? Magazines seem reluctant to bite the hand that feeds them: the majority of manufacturers are still in the dark ages selling unbalanced gear. Why? It seems you can't teach an old dog new tricks.
Hi-end roots are based in unbalanced designs. When the few differentially balanced designs (XLR) first appeared on the market, they were too expensive for most of us. Today, several manufacturers offer XLR designs that are competitively priced with unbalanced designs.
Think about it: sharing the L/R signal on circuit boards and through parts cannot be a good thing. Adding insult to injury is the RCA connector. A system is only as good as its weakest link, and that link is the RCA connection. In response, several manufacturers have improved the RCA connector, but to what ultimate result? You can put lipstick on a pig, but it's still a pig.
Reviewers (and I blame this on editors) typically allow balanced components to be reviewed within the confines of an unbalanced system. See The Absolute Sound August issue review of the Raysonic 168. Consequently, we are not informed on the components' ultimate sonic value.
If you are on a quest for the best sound, begin to replace your RCA-based components with differentially balanced ones. Most will accommodate RCAs, or just buy RCA/XLR adapters until you fully transition.
Well, good points, BUT the value of balanced vs. unbalanced can vary widely from system to system and environment to environment. In an all-analog system, in a low RFI/EMI location with an integrated amp in a starter to mid-level system, the advantage might be pretty small.
OTOH, a system with Class D amplification in close proximity to a CDP with both chassis made out of folded steel, in a high RFI/EMI environment, the difference might be like night and day.
Because of the variability, I think there's a reluctance to say that balanced is the only way to go.
BTW, my system is differentially balanced; however, when I used unbalanced interconnects between my CDP and integrated amp for a short while, the sound difference did NOT jump out at me. (I used the same brand of cable.) Like I said, it's going to be more critical for some than others.
The biggest cheat is that so many units for sale do have 'balanced' inputs/outputs but are not truly balanced at all---they just add a bit of conversion circuitry, which defeats the purpose of so-called "balanced".
In this situation I stick with RCA as this is what the system was designed for.
"In this situation I stick with RCA as this is what the system was designed for."
Well, there are plenty of "balanced" components that are truly balanced. When the total-silence potential of a balanced system combined with DC power is heard, most of us become believers. Many systems are NOT "designed for" unbalanced.
Yes, you have to know which are truly balanced and which are adding a differential circuit. And this works both ways. For example, the Slim Devices Transporter adds a circuit for unbalanced: the native output right out of the DAC chip is balanced, so going with RCA requires an additional circuit.
The whole point to balanced connections is noise rejection, yes? So what if your listening environment has very low RFI / EMI noise to begin with? And all your cable lengths are short (1M max), and all your cables are shielded?
I don't see the value to balanced interconnects unless you're talking long cable runs of many meters, or high electrical noise environments.
Obviously, listening is a subjective thing, but my unbalanced system has the deepest blackest silence I've heard. Crank the volume on the amp to maximum and you still can't hear *any* noise from the speakers from just 10 cm away!!
Lupinthe3rd and Dave, balanced lines have been shown to be an advantage when the interconnect length is only 6 inches. Their advantages are not based on length, although that **is** an advantage that they have. The real advantage is that the balanced line system was created with the specific intent (which it does very successfully) of eliminating interconnect cable differences and artifacts, in essence, to eliminate anything about the cable that makes it audible in a system.
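The core mechanism behind balanced noise rejection can be sketched in a few lines of Python. This is a toy model, and the voltage values are made up purely for illustration, not measurements of any real gear:

```python
# Minimal sketch of differential (balanced) transmission, assuming any
# interference couples equally onto both conductors (common mode).
# All voltages are illustrative numbers, not measurements.

def balanced_receive(signal, noise):
    hot = signal + noise    # non-inverted leg picks up the noise
    cold = -signal + noise  # inverted leg picks up the same noise
    return hot - cold       # receiver takes the difference: noise cancels

def unbalanced_receive(signal, noise):
    return signal + noise   # single-ended: the noise rides along

# 1.0 V of music plus 0.3 V of induced hum on the cable:
print(balanced_receive(1.0, 0.3))    # 2.0 -> doubled signal, hum gone
print(unbalanced_receive(1.0, 0.3))  # 1.3 -> hum is now part of the music
```

Note that the doubled signal at the receiver is also where the 6 dB level advantage often credited to balanced connections comes from.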
Another way of looking at this is: if you ever had to audition a cable to see if it sounded right in your system, then you already know what the balanced line system is for and why it can be useful for yourself.
Atmasphere, I did audition balanced vs. balanced of the same brand and model and they sounded the same in my system, but both sounded different from the model that they replaced. So, I don't see how the sound of the interconnect was removed by going balanced. If that were true, then every balanced IC would sound the same, yet they don't.
I'm not even sure what is balanced and what isn't these days. Krell advertises the 400xi as being fully balanced, yet the model up, the FBI, is a 'true' fully balanced design, while the 400xi only offers Balanced Circuitry. I think the industry has somewhat sneered at the claims of some manufacturers and has more or less put the 'balanced' claim in the same category as wattage claims: friendly grey marketing.
No offense to the balanced proponents, but there seems to me to be an aspect of balanced systems that is not often discussed. To be truly balanced, you must have the equivalent of two systems, which are then compared to each other in some way (subtraction, division, etc.) to provide the output. If either of these systems changes once they are initially calibrated, problems may result. In addition, you are now talking about twice as many components, which can add their own unique signatures as well as drift in value as they age.

As an analytical chemist by trade, my analogy is a dual-beam optical system for a spectrometer: instead of simply having a single beam of light which is used to interrogate a sample and determine its chemical composition (a single-beam system), sometimes it is preferable to use a dual-beam system. The light from the source is split into two beams, a reference beam and a sample beam. The reference beam passes through a blank sample (water, for example) while the sample beam passes through a solution of the sample. The spectrometer then subtracts the spectrum of the reference beam from that of the sample beam to provide the corrected spectrum. This type of system is excellent at correcting for errors due to background in the solvents or drift of the optical source.

It is not always the case that a dual-beam system is preferable for your analysis. A dual-beam system can subtract the influences of a blank automatically, but that blank is not contained in the same vessel (cuvette, in chemistry terms) as the sample, and the reference optical path is not the same as the sample optical path. Also, as mentioned above, the dual path can create problems when components age differently. So to me the argument is not as cut and dried as it may seem at first. If you do not need the error correction afforded by a duplication of component paths, perhaps you are better off without it, as it may introduce unnecessary complication and expense to the system.
To add to my previous post (too quick on the submit button), an additional disadvantage of dual beam systems is that you halve the power of your source to create the twin beams - in some cases that creates an unacceptable loss in sensitivity to low level components. Is there a direct analogy of this phenomenon in audio systems?
Ait, you don't have these issues when dealing with a balanced line system, which normally will use about 25-50% more parts to execute than single-ended; definitely not double!
Dave, The balanced line system **is** a standard. Unfortunately many high end audio manufacturers do not adhere to it. To do it right, the driving circuit should have a low impedance, as often the input has a much lower impedance too, though not always. The significance of the lower impedances is that it swamps out cable construction issues. That makes long distances possible, and reduces the possibility of noise interaction (even in short runs). The twisted pair that runs inside the cable, being differentially driven, is also more immune to noise pickup and capacitances are also more controlled.
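The "swamping" effect of a low driving impedance can be put in rough numbers: the source impedance and the cable's capacitance form a low-pass filter with corner frequency f = 1/(2πRC). A back-of-envelope Python sketch follows; the 100 pF/m cable figure and the two source impedances are my assumptions for illustration, not specs of any particular product:

```python
import math

# Back-of-envelope sketch: source impedance + cable capacitance form a
# low-pass filter with corner frequency f = 1 / (2 * pi * R * C).
# The 100 pF/m cable figure and the impedances are illustrative assumptions.

def corner_frequency_hz(source_ohms, cable_capacitance_pf):
    c_farads = cable_capacitance_pf * 1e-12
    return 1.0 / (2 * math.pi * source_ohms * c_farads)

cable_pf = 100 * 7.6  # ~25 feet of cable at ~100 pF per meter

# A 600-ohm source (balanced-standard territory) barely notices the cable:
print(corner_frequency_hz(600, cable_pf))     # roughly 349 kHz

# A 10k-ohm single-ended output starts rolling off near the audio band:
print(corner_frequency_hz(10_000, cable_pf))  # roughly 21 kHz
```

This is why the low impedances of the standard make cable construction much less of a factor: the corner frequency sits so far above the audio band that the cable's capacitance stops mattering.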
I concede that balanced connections have become 'stylish' and there are a number of companies that have installed connectors on their equipment that really don't come close to meeting the standard- the result being very much like what you have described. My point is that if you hear what the standard actually offers- well, there's no going back.
I agree with you, Lush: too much deliberately confusing info is being put out there by manufacturers. It's why I'll only buy from manufacturers, like Ralph, who will talk with you if needed so you understand what you're getting.
Atmasphere, are you saying that my Rowland is NOT truly balanced?? That would surprise me greatly, since Rowland is reputed to be one of the first to adopt a balanced configuration in consumer electronics.
I think my Rowland is truly balanced and I can hear a difference between cable brands. Your allegation is that I shouldn't be able to hear a difference in balanced mode.
Here, at least one aspect of the balanced approach is discussed: power handling.
Another, from Spectron Audio web site:
"...balanced mode of operation doubles the slew rate and bandwidth by virtue of the out of phase transmission. This also suppresses the noise and buzz originated upstream from the amplifier. The other major advantage of mono balanced mode in Spectron amplifiers is that transmission of both positive and negative signals (in each amplifier) is maintained separately from the amplifier's input to the speakers binding posts. Assuming that the signal path electronics are matched, all of the intrinsic amplifier distortions arrive at the speakers with practically identical amplitude but with opposed polarity and essentially cancel each other. The result is a largely noise and distortion free sound transmission, leading to a spectacular improvement in three-dimensionality and resolution of detail in the music"
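The cancellation the quote describes can be mocked up numerically. The key assumption, which the quote itself flags, is that both halves of the signal path are matched; the error values below are invented for illustration:

```python
# Toy model of the distortion cancellation described above. Each amplifier
# half adds an error term; the speaker sees the difference of the two legs.
# Error magnitudes are invented for illustration.

def bridged_output(signal, error_pos, error_neg):
    positive_leg = signal + error_pos    # "+" half: signal plus its distortion
    negative_leg = -signal + error_neg   # "-" half: inverted signal plus its distortion
    return positive_leg - negative_leg   # matched errors cancel at the load

# Perfectly matched halves: the 0.05 V error term vanishes entirely.
print(bridged_output(1.0, 0.05, 0.05))   # 2.0

# A 10% mismatch between halves leaves only the *difference* as residual:
print(bridged_output(1.0, 0.05, 0.045))  # 2.005 -> 0.005 V of error survives
```

The second case is the practical caveat: the cancellation is only as good as the matching between the two halves.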
Finally, fully balanced electronics will cost you nearly twice as much as unbalanced or partially balanced.
07-30-08: Dcstep: "Atmasphere, are you saying that my Rowland is NOT truly balanced?? That would surprise me greatly, since Rowland is reputed to be one of the first to adopt a balanced configuration in consumer electronics."
Well, they sure don't make that clear, one way or the other, anywhere on their website or within specific product spec sheets.
Under "Technology" they provide a couple of spec sheets from Jensen Transformers on the benefit of input transformers, so it is safe to bet they use those. But there is absolutely no statement as to whether Rowland uses fully balanced or differential circuitry in the preamplifier stages.
I would think that if they do, they would simply state it as such. But who knows, maybe they're just modest.
I'm pretty sure Rowland uses differential circuits :) His was one of the first fully differential preamps to follow after ours.
The balanced line system *was* devised to reduce or eliminate interconnect cable differences and problems, but that is not to say that the effects of the cable will be inaudible. It *does* say that they will be *far less* audible than with single-ended. However, if you are running a preamplifier that has a very low output impedance, this will reduce the effect that even a single-ended cable has on the system. It's a lot harder to do, though; with balanced it's easier.
Some caveats: some balanced setups (including early balanced Rowland preamps) use(d) dual RCA jacks to execute balanced operation, which makes things trickier. XLRs are the preferred means, having superior contact mechanisms and also keeping the opposing signals in the same vicinity, which reduces noise pickup. That's why we went with XLRs from the beginning, in an effort to prevent the goal of the design from being subverted. There is no question that this also delayed market acceptance, because you had to use a different cable. With the dual-RCA setups you could run a pair of RCA cables, so you didn't have to have a different cable, just more of them, but that is a far cry from how the standard is set up.
I can't speak to the output impedance of Rowland preamps, but owing to the fact that they are solid state and that Jeff knows his circuits, I am confident that the output impedance is low.
I've used Mogami cables for years, and compared them to a lot of much more expensive cables with no worries or regrets. I do hear differences, but they are always subtle, and while some high-end cables are audibly better, the difference is so slight that until now I would never have written home about them. For a difference of $4000 I can get the same effect just by changing a couple of $25 tubes.
Not having to use an expensive interconnect and being able to run it a long way is a boon. I have the equipment stand 3 feet from my listening chair. A 25-foot run goes to the amps, which are by the speakers, with speaker cables as short as I can get them (about 4 feet; the speaker terminals are up high on the cabinets). I use more exotic cables elsewhere in the system, as not all the other components have the same ability to control the cables as the preamp does, and some are single-ended (this is not a problem for a balanced preamp, BTW).
Rafael, I will give it a shot. One immediate advantage of fully differential circuits over single-ended is noise rejection: rejection of noise from the power supply and of noise at the input.
Power supply noise that is common to both halves of the differential amplifier is rejected by a ratio, usually measured in dB; rejection ratios can easily be over 100 dB. Common Mode Rejection Ratio (CMRR) is the measure of noise rejection at the inputs: differential amplifiers only amplify what is *different* between their inputs (inverting and non-inverting), so what we are talking about here is, if you have the same signal on both inputs, how much of it will get amplified. It is not uncommon to see CMRR specs of 95 dB or more. In real-world terms that means you could have a 25-foot run of unshielded wires attached to both inputs, hang on to them with your fingers, and basically not hear a thing through the speakers.
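Those dB figures translate into linear terms with simple arithmetic (no product-specific data here, just the standard voltage-ratio conversion):

```python
# Converting rejection figures into linear voltage ratios.
# For voltage, dB = 20 * log10(ratio), so ratio = 10 ** (dB / 20).

def db_to_voltage_ratio(db):
    return 10 ** (db / 20)

# 95 dB of common-mode rejection: how much of 1 V of hum survives?
leakage = 1.0 / db_to_voltage_ratio(95)
print(f"{leakage:.2e}")  # about 1.78e-05, i.e. ~18 microvolts per volt of hum

# 100 dB of power-supply rejection:
print(f"{1.0 / db_to_voltage_ratio(100):.2e}")  # 1.00e-05
```

In other words, a 95 dB spec means roughly one part in 56,000 of the common-mode signal gets through, which is why the unshielded-wire thought experiment works.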
The result is that it is possible to build a quieter circuit with fewer stages of gain overall. This, despite the fact that differential circuits *have less gain* than the equivalent single-ended circuit!
A differential amplifier in theory has 6 dB less noise per stage of gain as opposed to SE. The parts count tends to be between 25% and 50% higher depending on execution. The types of parts involved, a few resistors and an extra gain device like a transistor or tube section, are not significantly more expensive. If you want to do differential right, what *can* be more expensive is the power supply, as it is helpful to have a bipolar supply with equal plus and minus voltages. This is not a significant transformer cost, as it does not require more windings or more current, but it does mean the addition of more power-supply rectifiers and another set of filter caps (and regulation, if applied).
So the cost of execution winds up only being about 20-25% higher overall, as the chassis and transformer(s) are the primary costs in most audio products and a sort of common denominator.
If your circuit is fully differential throughout, an interesting thing can be observed: since noise is theoretically 6 dB lower per stage of gain, the more stages you have, the more pronounced this effect is. In practice you may not get the full 6 dB, so for example our MP-1, which has a total of three stages of gain from MC phono input to line output, will still be a good 12 dB quieter than the same circuitry executed single-ended, even assuming less-than-ideal cancellation. That fact alone, especially for phono users, should carry some weight.
Having fewer stages of gain means a simpler signal path overall; quite the opposite of the usual assumption of a more complex signal path.
To put this a little clearer: with proper execution, a fully differential preamp or amp will have a simpler signal path than many single-ended counterparts. The bottom line is lower noise and a simpler signal path, for a slight increase in cost.
My CDP, pre-amp and amp are all balanced units. I recently switched from all S/E to all balanced in the same cable line. The improvement was impressive, more than I expected it would be. Inserting balanced from CDP to pre first, the sound expanded and became airier; the soundstage increased and the sound seemed "less dense," more alive, cleaner and clearer!! I used an SLM (sound level meter) to keep the volume at 74 dB with both S/E and balanced. When the second balanced cable was inserted from pre-amp to amp, there was a substantial increase in bass. I was told each unit sounded better using balanced I/Cs, but the amount of change was a surprise!! It was as if the S/E had been choking the system down!!
"I was told each unit sounded better using bal I/C but the amount of change was a surprise!! It was as if the S/E had been choking the system down!!"
Such a huge performance improvement is unusual - perhaps you had a problem with noise from a ground loop with RCA. Did you carefully check signal levels? A higher signal level or higher volume is immediately perceived as a sound improvement (bass in particular is what blossoms at higher volumes, due to the way our hearing works...)
Certainly I highly recommend XLR, as it means less hassle and fewer problems - better sound more reliably, and almost always a lower noise floor. However, I would not claim XLR sounds dramatically better than a good working RCA connection - it should sound the same if the gear is working properly and there are no noise/ground loop issues.
Hi Shadorne, I don't know enough to disagree with you; I only know what I heard. Also, there was an experienced audiophile with me at the time and he was more than a little surprised at the changes; in fact he was bummed he had to run S/E from his Ref 3 to his VS-110 after hearing the improvement in my system. The biggest change was from the CD3 MkII to the Calypso. I just bought the CDP from Audphile1; he strongly advised using balanced I/Cs, saying the unit sounded better balanced. I was told the same thing about the Calypso by an audiophile whose experience I respect. Is it possible these two units, which are balanced designs and individually improve with balanced I/Cs, improve even more together? I kept the sound level at 74 dB with S/E Sky and balanced Sky, so we were very surprised how much better the balanced sounded; we both looked at each other in disbelief asking, "Are you hearing what I'm hearing?!" I'll A-B tomorrow to recheck; I don't have time today as we're off to the USC-OSU game soon!!! Fight On!!!!!
The principal advantage of balanced over unbalanced topology is that unwanted interference is presented common-mode and subtracted out. So, if you have noticeable interference, the balanced approach should show significant improvement over the unbalanced. If you do not have any such problems, then you should not see a difference.

I personally use a balanced approach because it is available in my gear and is ideally a better setup. However, I have also tried the unbalanced connections. I heard no difference whatsoever. Of course, I don't have noise problems. If it's available in your gear, I would use it. If it isn't, and you have no problems with noise, I would not change anything.

A lot of options are out there for a wide variety of problems that can be encountered in audio equipment. However, I have never seen a point in preemptively addressing problems that do not exist. Sometimes upgrading is nonsensical. A friend recently asked my opinion on going to an amp that was "quad balanced" over another amp because of a better spec'd SNR. The SNR of his current amp is spec'd at 112 dB and he was looking at going to an amp spec'd at 124 dB (there were other advantages to the second amp), and he was happy with his current amp. My suggestion was that the only difference would be on paper. Both specs are so good that the change would do nothing but confer bragging rights. Same thing here: if you don't have noise problems, I see no reason to change what you have.
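The "only on paper" point about those SNR specs can be made with one line of arithmetic. Assuming (my number, purely for illustration) peaks of 100 dB SPL at the listening seat:

```python
# Where an amp's noise floor lands acoustically: roughly the peak playback
# level minus the SNR spec. The 100 dB SPL listening level is an assumed
# figure for illustration; 0 dB SPL is the nominal threshold of hearing.

def noise_floor_spl(peak_listening_spl, snr_db):
    return peak_listening_spl - snr_db

peak = 100  # dB SPL: very loud listening
for snr in (112, 124):
    print(f"SNR {snr} dB -> noise floor at {noise_floor_spl(peak, snr)} dB SPL")

# Both noise floors land well below 0 dB SPL, so neither is audible;
# the 12 dB gap between the specs exists only on paper.
```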
Shadorne, you had it right: it was a noise problem!!!! There was "glare" with piano in the higher notes; I thought the CDs were poor quality, or that was just how "most" CDs were going to sound. Depressing!! I used "The Ray Brown Trio Live At The LOA Summer Wind" for the A-B'ing. For an unrelated reason I bought a used set (4) of AQ Sorbogel Big Feet and put them under the Calypso just to hear what would happen. The "fuzz" at the peak of the notes disappeared; the notes were clean/clear!! Today I A-B'ed the balanced and the S/E and they were extremely close, only a very slight difference in what appeared to be warmth. Next I placed isolation cones under the CD3; they cut out a tad more "fuzz" and tightened everything up!!! At this point I put a 1 1/2" maple board under the amp to further reduce vibrations. Now the balanced and S/E were even closer, a great lesson in the degradation of sound by vibrations. The balanced cable was left in from the pre to the amp during the evaluation. The Calypso must be extra sensitive to vibrations, the change was so dramatic. UH OH, here I go again!! Vett93, I didn't mean to imply the signal had another 6 dB of gain, just that it became bassier. Thanks for dealing with something you easily could have blown off; the sharing of everyone's reactions/thoughts kept me thinking about the cables and the importance of damping vibrations for clearer sound!!!!
"For an unrelated reason I bought a used set(4) of AQ Sorbogel Big Feet and put them under the Calypso just to hear what would happen. The "fuzz" at the peak of the notes disappeared, the notes were clean/clear!!"
I keep saying this on many threads. Tubes are microphonic! They really ought to be kept in a separate room/area and away from the noise in the listening room. This is what Pink Floyd does and surely we can respect them for knowing what should be done soundwise to get the most from the great sound of tubes.
It is entirely possible (in your previous observations) that the different XLR cables changed the way the components resonated with the sound in the air in your room or transmitted from your speakers to the floor and up your component stand.
A much heavier, flexible XLR cable (compared to RCA) might logically have a damping effect, in much the same way as placing your hand on something. However, now that you have resolved most vibration issues, the difference from RCA to XLR has become far more subtle - as it should when everything is working properly.
Musicnoise - I use balanced ICs because my power amp has only balanced inputs. Balanced cables have the shield grounded on both ends (not a very good thing) and are used by professionals (beyond noise reduction) because of the male/female system and connector locking. It is not possible to touch an input pin on the amp or the cable because it's female (pins hidden). It is also not possible to unplug it by pulling. Imagine a few-kW PA system with a dangling connector that can be touched.
I agree with your statement about "biting the hand," although I think it is more like SLR cameras. Once you have 4 or 5 Canon lenses, you can't switch to Nikon...
I disagree that a system is only as good as its weakest link. A system is the sum of all its components. True, a crappy preamp or a bad set of cheap speakers will ruin the sound of other great components. But a stock power cord on $200k worth of gear will not make it sound like you paid $5 for the entire system.
I think there are two schools: weakest link, and sum of the components. That being said, one set of unbalanced cables will not kill the entire system. If you are a graduate of the "weakest link" school of thought, then replace everything you own and go with all balanced connectors. If you are a graduate of the "sum of all the components" school of thought, then make the switch to balanced when you buy a new piece of gear.
Eliminating common mode interference is the only advantage to balanced connections. If you have no common mode interference, there is no advantage to balanced interconnects, period. I agree that balanced is better than unbalanced on general principle, and given the choice, I will always use balanced. But the reality is that balanced increases cost significantly, it nearly doubles the component cost of a DAC, with dubious benefit in a residential environment.
Pro audio equipment is always balanced because it uses long cable runs (on a performance stage, for example) that can act as antennas and introduce RFI, and there are lots of high-current electronics and power cords everywhere that can introduce EMI. Also, very high volume levels, where even the tiniest noise will be amplified to audible levels.
For small room hi-fi listening, short cable runs, and moderate residential volume levels, it just doesn't make a big difference in sound quality whether you use balanced or unbalanced.