To bi-wire or not to bi-wire?


I have 2 pairs of floorstanders that have bi-wire capability: Dali Ikon 6 as FL & FR in my 7.1 a/v system; Polk M50 in my 2.1 PC system.

The manual for the Ikons shows how to bi-wire but makes no recommendation that it be done. The manual for the M50 doesn't say much about anything. So, no guidance from the manufacturers.

I have read both pros and cons regarding bi-wiring. There appears to be some consensus that success with bi-wiring depends on the particular speakers and the amps they are paired with.

In a previous 5.1 system, I had Wilson Cubs for the front 3. I had the L and R Cubs bi-wired and could not tell any difference in sound compared to the single-wired center Cub. They all sounded equally great.

I would be grateful for any advice.
mmarvin19
Sounds like you already have the proper approach and are a rational consumer, as evidenced by your comment about the Wilson Cubs. Biwiring does not make any sense from an engineering standpoint. The theories used to support biwiring are pretty much junk science, particularly the idea that your amplifier sees a different impedance when driving the same speakers biwired vs. non-biwired. I have not heard a difference between the same pair of speakers biwired or not.

Biamping, on the other hand, makes a good deal of engineering sense and, if you really want to tweak your system for noticeable effect, is something to be considered. This of course is more expensive. More importantly, biamping with off-the-shelf units requires a good deal of research beforehand as to the units, and requires some experimenting once tentative choices are made. The ideal way to biamp is to design the amplifier specifically for the driver.

Speaker wire is not all that expensive (unless you subscribe to the idea that expensive speaker wire is better), so trying biwiring vs. non-biwiring seems reasonable. Even if you do subscribe to the theory that expensive speaker cable is better, experimenting with dollar-per-foot speaker cable for 15-foot runs with decent connectors is inexpensive, and the results of that testing should sufficiently inform you as to which choice to make.
Musicnoise...If the woofer and tweeter share a common ground wire back to the amp, then, due to the wire's impedance, each driver will see, at its ground reference, some of the signal intended for the other driver. I don't know how significant this is, but it's the best scientific reason I can think of for biwiring. It also implies that only the ground wire needs to be duplicated.
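For what it's worth, here is a rough back-of-the-envelope sketch of how big that shared-ground effect could be. All of the component values below are illustrative assumptions on my part (typical 14-gauge wire, a nominal 4-ohm woofer), not measurements from anyone's system:

```python
# Rough estimate of the shared-ground coupling described above.
# All values are illustrative assumptions, not measurements.
import math

R_ground = 0.025   # ohms: ~10 ft of 14 AWG wire at ~2.5 mohm/ft (shared return)
Z_woofer = 4.0     # ohms: nominal woofer impedance in the bass
V_amp    = 20.0    # volts: amplifier output (about 100 W into 4 ohms)

I_woofer = V_amp / (Z_woofer + R_ground)   # bass current in the shared return
V_leak   = I_woofer * R_ground             # bass voltage at the tweeter's ground reference

print(f"Woofer current: {I_woofer:.2f} A")
print(f"Bass signal at tweeter ground: {V_leak * 1000:.0f} mV")
print(f"Relative to the drive signal: {20 * math.log10(V_leak / V_amp):.0f} dB")
```

On these assumptions the crosstalk sits around -44 dB below the drive signal; whether that is audible is exactly the open question.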
I have found biwiring to make an obvious difference in my system. However, I do agree with the reports of many people who say that it is system dependent.

I disagree with comments that it is based on junk science. If it is, then why does it work for many people?

As to whether it makes sense from an engineering standpoint, I have no comment, because biwiring is based on science, not engineering. They're two different disciplines, although obviously related. By the way, from an engineering standpoint, bumblebees can't fly. New engineering students in universities are often presented with this in order to challenge them not to be too dogmatic or closed-minded when studying, analyzing and experimenting.

The speaker designer has provided two sets of binding posts for biwiring possibilities, so they must think it might be effective. Their opinion might be worth something. It may or may not work depending on other variables in your system; however, it's worth an experiment. Even with inexpensive cables, I have found it to be effective. So you might try borrowing some cables to give it a try. If it works, great. It's a cheap upgrade. If it doesn't, no harm done. It's part of the fun of the hobby.

Biamping should certainly produce an improvement too. However, my opinion is that passive biamping is not cost-effective unless it is an intermediate step to active biamping.
Biwiring is explained scientifically on the Vandersteen website. It works, given that you use separate (not jacketed) cables for the lows and highs.
Biwiring is explained scientifically on the Vandersteen website

A theory is proposed... that high-power low-frequency signals induce noise on low-power high-frequency signals. It is hardly science, though.

Given the low impedances of speaker loads and amplifier outputs, it seems unlikely that noise could be induced in speaker wires at a level that would be at all audible.

Of course, there is some ground truth: if you run unshielded line-level signal wires next to AC power cables feeding an air conditioner, then it is quite likely you will pick up some noise. But this is because of the very low signal levels at line level, and the fact that termination impedances are around 10 kOhm, meaning that tiny stray induced currents may actually produce audible noise, even allowing you to pick up interference from a radio station or a ham radio, perhaps.
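A quick sketch of why the same induced hum matters at line level but not at speaker level; the induced EMF and signal levels below are assumptions for illustration, not measurements:

```python
# Same assumed induced hum voltage, compared against typical signal levels.
import math

V_hum     = 1e-3   # volts: assumed EMF induced in a parallel cable run
V_line    = 0.3    # volts: typical line-level signal
V_speaker = 10.0   # volts: typical speaker-level signal

print(f"Hum below line level:    {20 * math.log10(V_hum / V_line):.0f} dB")
print(f"Hum below speaker level: {20 * math.log10(V_hum / V_speaker):.0f} dB")
# And the low source/termination impedances on the speaker side (ohms,
# not 10 kOhm) shunt induced currents far more effectively to begin with.
```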

A more logical explanation for the reported observations may lie in unnecessarily wide-bandwidth amplifiers of the sort that amplify flat up to 200 kHz: instabilities in amps of this type (with large amounts of feedback) might be affected by the slight change in cable inductance that biwiring would bring versus a conventional speaker wire. (Why anyone needs an amp flat to 200 kHz for audio reproduction is rather bewildering; however, specifications like this might make a buyer think the amp is "better" than one which rolls off above 20 kHz, so they sell, the same way that damping factors of 1000 sell...)
I've found the differences to be subtle, even with separate runs. However, if you're after the best sound possible and the $ is inconsequential, it does make an improvement (IMHO). If you're on a budget, you should evaluate whether that $ could make a bigger impact elsewhere in your system.
As to the idea of the amplifier seeing a different impedance from biwiring vs. not biwiring: any way you draw the circuit, the amplifier sees the same impedance. Likewise, biwire or not, both drivers see the same signal from the amplifier. Shardone's transmission line theory offers a plausible explanation for changing the impedance as seen by the amplifier (at high frequencies), but does not provide an explanation for a difference in the individual speaker circuits as seen by the amplifier; in other words, the change is macro, affecting everything. Now, the induced noise theory offered, from coupling low frequency noise (probably 60 Hz), does offer a plausible explanation for different effects on the respective drivers, assuming that the coupling is different for each set of wires, because that theory essentially inserts different sources in each leg. (Nice theories, by the way.) I don't think this is the intended goal of biwiring, though.

I checked out all of the online explanations of the biwire effects that I could find, but could not find one that demonstrated the effects through an analysis. If there is an explanation, why not show it with a step-by-step circuit analysis with reasonable lumped components, i.e. not assuming that the speaker wires are ideal conductors? That is how the rest of the engineering community explains such things. Seems easier and more convincing. Kind of hard to argue with math (actual math, that is, not referrals to math terms).
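In that spirit, here is a toy version of the lumped-component analysis being asked for. The branch impedances and wire resistance are assumed round numbers, not data from any particular speaker:

```python
# Toy lumped-component comparison at a single frequency.
# Each driver-plus-crossover branch is one impedance; the wire is a series
# resistance. All values are assumed for illustration.

R_wire = 0.05   # ohms: one cable run (e.g., ~10 ft of 12 AWG)
Z_low  = 6.0    # ohms: woofer branch at the chosen frequency
Z_high = 30.0   # ohms: tweeter branch (its crossover raises impedance here)

def parallel(a, b):
    return a * b / (a + b)

# Single wire: one shared run feeding both branches in parallel.
Z_single = R_wire + parallel(Z_low, Z_high)

# Biwire: each branch gets its own run of the same wire.
Z_biwire = parallel(R_wire + Z_low, R_wire + Z_high)

print(f"Single-wire load seen by amp: {Z_single:.4f} ohms")
print(f"Biwired load seen by amp:     {Z_biwire:.4f} ohms")
```

With these numbers the two loads differ by well under 1%, which supports the "amplifier sees essentially the same impedance" point. The sketch ignores cable reactance and treats each branch as a single resistance, so take it as a starting point for the analysis, not a proof.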
Thank you all for your responses. I have a 50' spool of inexpensive 14 gauge wire. I'm going to take the Audience Au24s off the Dalis and make up 4 cables for a bi-wire experiment.

Markphd - what is "passive" bi-amping?
I would like to add two other reasons. One is skin effect, which at 20 kHz begins to matter for wire thicker than about gauge 20. Many woofer cables use gauge 7 to preserve low impedance (damping factor). The second reason is eddy currents, generated by the speakers, getting from driver to driver (tweeter to midrange or midrange to tweeter) in spite of the crossover (far from perfect), with the amplifier's output impedance separated by the (inductive, 0.5 uH/ft) impedance of the cable (equal to about 0.5 Ohm at 20 kHz, a 1:10 divider with the speaker's source impedance). The effects are very small, but remember the incredible range of our hearing instrument. Bi-wiring creates a divider of the mentioned 0.5 Ohm cable impedance and the amplifier's very low output impedance. An amp output impedance of 0.05 Ohm at 20 kHz will create a 1:100 divider with the speaker impedance before the signal gets to the other driver: a tenfold improvement.
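The divider arithmetic above checks out if you plug in the stated figures; here is a quick verification, where the 8-foot run length is my own assumption:

```python
# Check of the cable-impedance divider figures above.
# Assumes 0.5 uH/ft inductance per the post, and an 8 ft run (my assumption).
import math

L_per_ft = 0.5e-6   # henries per foot
length   = 8        # feet (assumed)
f        = 20e3     # hertz

X_cable = 2 * math.pi * f * L_per_ft * length
print(f"Cable reactance at 20 kHz: {X_cable:.2f} ohms")   # ~0.5 ohms

Z_amp  = 0.05   # ohms: amplifier output impedance at 20 kHz (per the post)
Z_spkr = 5.0    # ohms: assumed driver source impedance

print(f"Shared cable divider: about {Z_spkr / X_cable:.0f}:1")   # ~10:1
print(f"Biwired, via the amp: about {Z_spkr / Z_amp:.0f}:1")     # ~100:1
```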

My speakers are bi-wired with a shotgun cable and sound slightly more "airy" compared to non-biwired (shorted terminals). Is it worth paying roughly double (for expensive speaker cable) for that? I don't know, probably not, but I treat cables as non-perishable items and over-invest a bit.

The fact that, according to many users, bi-wiring improves some speakers but has no effect on others might be related to the design (slope and configuration) of the crossover.
Actually, our "hearing instrument" does not have an incredible range and is not all that accurate. A 3 dB SPL change is barely perceptible to the average person. While the audio range is defined as 20 Hz to 20 kHz, most people cannot hear frequencies near 20 kHz, and most people over 50 cannot hear 15 kHz.

Many years ago I repaired TVs as a part-time job. I was in my 20s and worked with two guys in their 50s. I could hear the horizontal oscillator vibration on some sets' CRTs (more likely something derivative of it from mechanical vibration, but regardless, a high frequency), which is 15,750 Hz. The other guys in the shop could not. Their hearing otherwise appeared normal, i.e. normal conversation, etc. This is not merely anecdotal: high-frequency hearing loss with age, presbycusis, is very common, and audiology testing does not go beyond 8 kHz; the frequency losses are evident at 4 and 8 kHz, with mild cases showing 30 dB attenuation from normal.

That is my point in a lot of my posts: the instruments we have available to measure everything that has to do with hearing (different from something to measure our tastes in what we hear, or what the sounds mean to us, which is where the art comes in) are orders of magnitude more sensitive, accurate, and resolving than our ears. So the values measured with such instruments should be the base from which to evaluate the effects of many of these quasi-technological solutions.
Musicnoise...Your statements about lack of HF hearing are correct, but the conclusion that the HF response of an audio system is unimportant is not. Hearing tests for frequency response are done using sine waves. I can't hear a 14 kHz sine wave, but I can detect when music is limited to 14 kHz. (14 kHz was a while ago, and it is probably worse now.) In seeking an explanation, I have come to believe that the ear senses not only pressure change but also the rate of change. This would correspond to the steepness of the sound wavefront. A 14 kHz tone that is not a sine wave can have a wavefront steepness corresponding to a 20 kHz sine wave. I haven't tried it lately, but supertweeters with response to 40 kHz and higher are audible to some people.
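The rate-of-change idea can be put in numbers: the peak slope of a sine A·sin(2πft) is 2πfA, so a 14 kHz tone carrying harmonics can out-steepen a 20 kHz pure sine. A small sketch, where the 20% third-harmonic content is an arbitrary assumption:

```python
# Peak-slope comparison supporting the "wavefront steepness" argument.
# Peak slope of A*sin(2*pi*f*t) is 2*pi*f*A.
import math

A  = 1.0
f1 = 14e3   # 14 kHz fundamental

slope_14k = 2 * math.pi * f1 * A     # pure 14 kHz sine
slope_20k = 2 * math.pi * 20e3 * A   # pure 20 kHz sine

# 14 kHz tone with an assumed 20% third harmonic (42 kHz); both components
# reach maximum slope together at t = 0, so the peak slopes add.
slope_h3 = slope_14k + 2 * math.pi * 3 * f1 * 0.2 * A

print(f"14 kHz sine:               {slope_14k:.3e}")
print(f"20 kHz sine:               {slope_20k:.3e}")
print(f"14 kHz + 20% 3rd harmonic: {slope_h3:.3e}")  # exceeds the 20 kHz sine
```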
Musicnoise - with normal hearing, one can hear intensities from 0 dB to 140 dB. This corresponds to a power ratio (defined as the ratio of the highest audible intensity to the lowest audible intensity) of 100,000,000,000,000 - I would call that an incredible range. Structures often cannot sustain the sound levels and vibrations at 140 dB SPL that our ears can.
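The arithmetic is easy to verify, since dB SPL is ten times the base-10 log of an intensity ratio:

```python
# 140 dB of dynamic range expressed as an intensity (power) ratio.
ratio = 10 ** (140 / 10)
print(f"{ratio:.0e}")   # 1e+14, i.e. 100,000,000,000,000
```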
This would correspond to the steepness of the sound wavefront. A 14 kHz tone that is not a sine wave can have a wavefront steepness corresponding to a 20 kHz sine wave. I haven't tried it lately, but supertweeters with response to 40 kHz and higher are audible to some people

Given that the eardrum is a filter that removes high frequencies from the inner ear, it is hard to believe that anything above 20 kHz has any bearing at all in an adult. Tests have never shown people able to hear at 40 kHz. So whatever they are hearing, it is most likely an artifact that is in the band of 20 Hz to 15 kHz, and most likely something around 500 Hz to 5 kHz, where our hearing is most discerning.

Once again, as with amplifiers flat to 200 kHz, it seems plausible that the increased bandwidth of certain designs might cause differences to appear in the audible spectrum (amp instability at HF perhaps causing in-band artifacts).

Alternatively, band-limited devices, such as metal/ceramic transducers that are undamped and therefore have ringing problems within the audible range (requiring a sharp, high-Q filter to prevent ringing), might cause phase distortion in the audible range. Like the brick-wall filters in early CD players: it was known that the severe high-frequency filtering sometimes caused audible artifacts/problems in band. In that sense, a device that has greater bandwidth might have better in-band response, due to fewer artifacts from sharp filters...

In essence, there might be audible differences, but the audible differences are most likely in band. This is the way ATC Hypersound technology works: modulated ultrasonic frequencies react with non-linearities in the air and produce audible sound. Again, some mechanism is required (in this case, the non-linearity of air at 110 dB SPL at ultra-high frequencies) to make the effect audible. Perhaps a blind or something in your room could act as the non-linear converter.
Musicnoise - with normal hearing, one can hear intensities from 0 dB to 140 dB

Agreed. And how most home systems can be called "hi-fidelity" when they are limited to a maximum of 95 dB SPL at the listening position (before compression distortion and other non-linearities appear in spades)... well, let's just say that is a much bigger mystery than bi-wiring!
Shardone: I agree with you on the point of wide-band amplifiers. A high-frequency cutoff at 200 kHz will reduce the phase change and attenuation a decade down. This would allow for less attention to the characteristics of that 200 kHz filter. However, well-engineered band limiting will achieve the same effect and avoid any problems resulting from the additional energy contained in those frequencies outside the audio range. Two ways, perhaps, to achieve similar results, but I would opt for the more band-limited approach.