Interconnect Inductance vs. Capacitance

How do the inductance and capacitance of ICs impact the sound? I have seen some ICs that have low inductance but high capacitance, while others have high inductance but low capacitance. One manufacturer even claims that its higher-end models have higher capacitance.

So can someone explain to me how they impact the sound?

Showing 10 responses by almarg

Shadorne -- Thanks for your observations. I just want to make sure it's clear to everyone that my previous post was written before I saw your last post, and was in response to the prior posts, not to your good observations.

-- Al
Assuming you are referring to analog ICs carrying audio frequencies (as opposed to ICs carrying digital signals or other high-frequency signals such as video), inductance is likely to be insignificant.

High capacitance may cause the highs to be rolled off, particularly if the component driving the cable has a high output impedance. That is because the output impedance of the driving component and the capacitance of the cable form a low-pass filter, whose bandwidth is inversely proportional to the product of the output impedance and the capacitance.
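As a rough illustration (a first-order RC model with hypothetical example values; the 100 pF/m figure and the two source impedances are assumptions, not from any particular product), the -3 dB corner frequency works out like this:

```python
import math

def corner_frequency_hz(r_ohms: float, c_farads: float) -> float:
    """-3 dB corner of the low-pass filter formed by source output
    impedance R and cable capacitance C: f = 1 / (2 * pi * R * C)."""
    return 1.0 / (2.0 * math.pi * r_ohms * c_farads)

# Hypothetical examples: a low-impedance solid-state source vs. a
# high-impedance source, each driving a 2 m cable at an assumed
# 100 pF per meter (200 pF total).
print(corner_frequency_hz(100, 200e-12))     # ~8 MHz: far above the audio band
print(corner_frequency_hz(50_000, 200e-12))  # ~16 kHz: audible treble rolloff
```

The point is that the same cable is benign with one source and audibly rolled off with another, which is why the effect is system-dependent.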

It is true that some very high-end cables have highish capacitance. That is one example of how some high-end cables are designed to be non-neutral. It is also an example of how cable performance can be system-dependent, because the effects of the capacitance will be dependent on component output impedance.

-- Al
One could share with others how to measure inductance, capacitance, and resistance if they have a digital volt meter, and explain the differences when applied to power cords or speaker cables.

Common VOMs (volt-ohm-milliammeters), whether digital or analog, can measure resistance directly. Some digital VOMs also have the ability to measure capacitance, and there are separate instruments specifically designed to measure capacitance. I am not aware of any low-cost instruments that will measure inductance, although there may be some.

I had said that inductance is insignificant for interconnects carrying analog audio signals. But it may be significant in a speaker cable if the inductance is particularly high, as a result of the cable being long and/or the inductance per unit length of the particular cable being high. In that case it would attenuate the treble somewhat, since inductance impedes or blocks high frequencies.
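To put a hedged number on that, here is a sketch that treats the speaker as a purely resistive load (a simplification) and assumes a per-foot inductance of 0.5 uH, a figure chosen only for illustration:

```python
import math

def treble_loss_db(cable_inductance_h: float, freq_hz: float, speaker_ohms: float) -> float:
    """Signal loss from series cable inductance driving a (simplistically)
    resistive speaker load."""
    x_l = 2.0 * math.pi * freq_hz * cable_inductance_h   # inductive reactance, ohms
    return 20.0 * math.log10(speaker_ohms / math.hypot(speaker_ohms, x_l))

# Hypothetical: 10 ft of cable at an assumed 0.5 uH/ft (5 uH total), 8-ohm speaker
print(treble_loss_db(5e-6, 20_000, 8))   # a small fraction of a dB at 20 kHz
```

With typical values the loss is tiny; it only starts to matter for very long runs, unusually inductive cables, or low-impedance speakers.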

Unusually high capacitance in a speaker cable can cause some amplifiers to operate out of their comfort zone, or to become unstable. It will not, however, produce the kind of high frequency roll-off I described for interconnect cables, because the output impedance of a power amplifier is vastly lower than the output impedance of a line-level component.

As for power cords, obviously sufficient gauge (meaning low enough resistance) is required to support the maximum amount of current that may be drawn through them. Beyond that, my opinion is that we enter the realm of metaphysics (definition: "a priori speculation upon questions that are unanswerable to scientific observation, analysis, or experiment"), and anecdotal evidence of differences is about all we can expect.

-- Al
Redkiwi -- Interesting post. But I think it should be pointed out that many, and I would venture to say most, people with relevant technical knowledge (who are not manufacturers of certain high-end cables) would disagree with some of your statements about characteristic impedance.

Characteristic impedance, being part of what are called "transmission line effects," is (at least for typical interconnect lengths) generally considered to be utterly inapplicable to audio frequencies. Note that I limited the statements in my first post above to cables carrying analog audio, not digital signals, video, or RF.

And I am at a loss to see how, even if there were some significance at audio frequencies, phase errors in the bass would result from impedance mismatch.

I do agree that pickup of high frequency noise might, in the hypothetical case of a cable that is both unshielded and unbalanced, be influenced by impedance mismatch between cable and source component. However, noise rejection is best addressed, and is usually addressed, by quality shielding and, in the case of balanced interconnections, by common mode rejection.

-- Al
Good responses by all. Yes, I too was wondering about the 500K -- that seems unusually high, and I would imagine that parasitic impedances in the circuit could become significant relative to that value, at least at high frequencies. But more significantly, I suspect that Vett's capacitor experiment is most likely a good example of what Redkiwi was referring to when he said:

One of the problems in science is that when experimenting you need to assume certain variables are not relevant in order to observe the impacts of an experiment on what you believe to be relevant. You cannot screen out all other variables all of the time.

My guess would be that the differing results with the two capacitors were not the result of the different capacitance values (the variable being tested), but were the result of differences in the departure of each device from being an ideal capacitor. Dielectric absorption, ESR (equivalent series resistance), leakage, stray inductance, and other known and unknown parameters make every capacitor something other than a pure capacitor, which is why it is fairly widely recognized that different makes of capacitor of the same value can sound different, especially when they are in the signal path.

Redkiwi, I certainly agree that digital cables can sound different in an audio system. It is fairly widely documented, both in this paper and in threads at this and other audio forums, that 1.5 meters is an optimal length for CD transport to DAC connections, and that significantly shorter lengths will increase jitter by causing the round-trip timing of reflections from the DAC input, and re-reflections from the transport output, to be such that the re-reflection arrives coincident with edges of the original waveform. Of course, the degree of that effect depends on the degree of impedance mismatch at both ends, as well as on the rise and fall times of the transport output and the jitter-reduction capability of the DAC (if any).
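The timing argument can be sketched numerically. The velocity factor of 0.66 is an assumption typical of coaxial cable dielectrics, not a measured value for any particular cable:

```python
# Round-trip time of a reflection on a digital interconnect.
# Assumption (illustrative only): signal propagates at ~0.66c in the cable.
C = 299_792_458.0        # speed of light, m/s
VELOCITY_FACTOR = 0.66   # assumed for a typical coax dielectric

def round_trip_ns(cable_length_m: float) -> float:
    """Time for a reflection to travel from DAC input back to the
    transport output and return, in nanoseconds."""
    return 2.0 * cable_length_m / (VELOCITY_FACTOR * C) * 1e9

print(round_trip_ns(1.5))   # ~15 ns for a 1.5 m cable
print(round_trip_ns(0.5))   # ~5 ns: more likely to land on a still-slewing edge
```

The idea is that with a short cable and slow transport edges, the re-reflection can arrive while the original edge is still in transition, perturbing the timing the DAC recovers; a longer cable pushes the re-reflection past the edge.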

Re skin effect, I haven't performed or studied in detail any analysis of its relevance to audio frequencies, but based on what I have read I would not disagree that it could be marginally relevant in some cable configurations.

I do still feel, though, that inductance and characteristic impedance are not relevant to analog audio interconnect cables of reasonable length, particularly in the bass region. When you say
If the characteristic impedance of the cable is below the output impedance of the upstream component then phase errors can get audible, particularly in the bass, and is a major cause of the belief that interconnects can be system dependent.

my feeling is that something else must have been going on to account for the differences your testing revealed. Of course, as I noted previously, inductance certainly can be expected to be a significant factor in a speaker cable (as opposed to an interconnect, where source and load impedances are much higher than for a speaker interface). And since characteristic impedance is a function of inductance (and capacitance), there may be an indirect correlation between speaker cable characteristic impedance and performance, but not in the usual sense of impedance mismatch resulting in VSWR effects.

-- Al
Just out of curiosity, do you think that anyone selling cables has IC designs that would exhibit enough capacitance to grossly affect high-end frequency response? Isn't this more an issue for the designer than the consumer?

No, as far as I am aware no one sells interconnects with capacitance high enough to "grossly" affect high-end frequency response. But under extreme circumstances (high component output impedance, long cable length, high cable capacitance per unit length), it could become marginally significant. So in that sense it is potentially a system-level issue that the consumer should be aware of.

Thanks for your very comprehensive and well done earlier post, btw.

-- Al
I agree with Vett's calculations, and the 12.5 kHz answer. When I said that interconnect capacitance could become "marginally significant" under extreme circumstances, I was thinking of source components with active output stages. For passive preamps, or preamps with unbuffered resistive attenuators at their output, the effect can obviously be more than "marginal."

Audioquest4life, not sure where you are going wrong with your math, but for 100 ohm output impedance and 196 pF capacitance, the -3 dB bandwidth would be:

1/(2 x 3.14 x 100 x 196 x 10^-12) = 8,124,269 Hz (i.e., 8.1 MHz)

For Vett's example, it would be:

1/(2 x 3.14 x 50,000 x 255 x 10^-12) = 12,489 Hz
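As a sanity check, the same two bandwidths computed with pi rather than the rounded 3.14 (hence slightly different last digits):

```python
import math

def bandwidth_hz(r_ohms: float, c_farads: float) -> float:
    """-3 dB bandwidth of the RC low-pass formed by source output
    impedance and cable capacitance."""
    return 1.0 / (2.0 * math.pi * r_ohms * c_farads)

print(round(bandwidth_hz(100, 196e-12)))     # ~8.1 MHz
print(round(bandwidth_hz(50_000, 255e-12)))  # ~12.5 kHz
```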

Although of course the 50K assumption is something of an oversimplification, and in practice I think the answer might not be quite that bad. The 50K output impedance assumes the control is set for 6 dB attenuation, and is the total impedance looking back into the output. But, first, I would think the control typically would be set for greater than 6 dB attenuation. Let's call it 12 dB, which would mean 25K between the output terminal and ground, and 75K between the output terminal and the preamp's internal voltage source which drives the attenuator. The high-frequency rolloff would be determined, in this example, by the voltage divider ratio formed by the parallel combination of the 255 pF and the 25K, and the 75K. I'm not going to bother trying to figure that out, but my suspicion is that the net result would be a somewhat wider bandwidth than what would be provided by the 50K assumption. In any event, the 50K assumption does seem like a reasonable rough ballpark, which makes the point that the treble rolloff can be significant.

-- Al
Mathematics R us! lol...

LOL here too! As someone who also has multiple EE degrees, albeit one fewer than you do, this thread is definitely fun!

Agreed on the 6 dB, given your power amp's relatively low sensitivity. But I also chose 12 dB for my example in order to simplify my other comments (the references to 25K and 75K), which would have been harder to present if the impedances looking into the preamp output would have been 50K in both directions (to ground and to the signal source).

If you think 6 dB attenuation is not enough, a higher level of attenuation will actually lower the 3 dB frequency. A higher level of attenuation means that you will have a larger R value in series with the output. So at a lower sound volume, the highs will be rolled off even more, yielding a narrower bandwidth!

This would be true if the attenuator were simply a variable resistor in series with the output.

But I've been assuming (correct me if I'm wrong) that the end terminals of the attenuator are connected, respectively, to some signal source within the preamp (which itself is assumed to have negligible output impedance), and the preamp's ground. And that the preamp output is the wiper of the attenuator, with the output being referenced to preamp ground.

Given that, and using my 12 dB example, the presence of the 25K in parallel with the cable capacitance makes for a very different situation than simply having some fraction of the 100K in series with the output. Without the capacitance, you get 12 dB of attenuation at all frequencies. With the capacitance, you get a frequency-dependent voltage divider ratio equal to the impedance of the parallel combination of the 25K and 255 pF (combined vectorially), divided by that figure plus 75K.

I'm not sure without doing some further analysis if that would result in greater bandwidth or less bandwidth than at a 6 dB attenuation setting (a 50K/50K setting on the attenuator, instead of 75K/25K). Note that in both cases, the capacitance is not being charged toward the source voltage. It is being charged toward some lower voltage, through an overall impedance which is not simply the resistance between the output terminal and the "top" end of the attenuator. In the 50K/50K case, the overall output impedance is 25K. In the 75K/25K case, the overall output impedance is only 18.75K. But of course the 25K is to ground, while the 75K is to the voltage source.

To use a wonderful expression I read in a completely different context a while back, my mind is becoming a bit too "pretzeled" by all of this to readily see the answer :)

-- Al
Vett93 -- It's been too long since I studied Thevenin's theorem. But it looks like I was right: bandwidth will be greater when the attenuator is set for 12 dB attenuation than when it is set for 6 dB.

The Thevenin equivalent circuit for the preamp output with the attenuator set for 6 dB, at the mid-point of its resistance range (what I've referred to as "50K/50K"), is a voltage source equal to one-half of the voltage being applied to the attenuator, in series with 25K.

The Thevenin equivalent circuit for the preamp output with the attenuator set for 12 dB attenuation (what I've referred to as "75K/25K"), is a voltage source equal to one-quarter of the voltage being applied to the attenuator, in series with 18.75K.

Therefore the higher attenuation setting will result in a lower source impedance, resulting in a smaller RC time constant and a wider bandwidth.
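The whole comparison can be worked through numerically. This sketch assumes the idealized model discussed above: a zero-impedance source driving the top of a 100K pot, with the 255 pF cable capacitance hanging on the wiper:

```python
import math

def thevenin(pot_total_ohms: float, lower_leg_ohms: float):
    """Thevenin equivalent of a potentiometer attenuator driven by an
    ideal (zero-impedance) source. Returns (voltage gain, source impedance)."""
    upper = pot_total_ohms - lower_leg_ohms
    gain = lower_leg_ohms / pot_total_ohms
    r_th = upper * lower_leg_ohms / pot_total_ohms  # parallel combination
    return gain, r_th

def bandwidth_hz(r_ohms: float, c_farads: float) -> float:
    return 1.0 / (2.0 * math.pi * r_ohms * c_farads)

C_CABLE = 255e-12  # cable capacitance from the example above

# 6 dB setting (50K/50K) vs. 12 dB setting (75K/25K)
for lower in (50_000, 25_000):
    gain, r_th = thevenin(100_000, lower)
    # 12 dB setting yields the lower R_th (18.75K vs. 25K), hence wider bandwidth
    print(f"gain={gain:.2f}  R_th={r_th/1000:.2f}K  f3dB={bandwidth_hz(r_th, C_CABLE):,.0f} Hz")
```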

-- Al

I can't really think of any good reference that brings the kind of scholarship the Ramos book apparently contains to the subject of audio cables. My perception is that, unfortunately, most of the writing on the subject falls into one of two opposing camps, neither of which is helpful: one that is well schooled in EE theory but ignorant of high-end audio, and another that believes in (or creates or promotes) the nonsense and quack science which pervades much of the cable marketing literature and other writings about high-end cables. Even the appeal of cables which undoubtedly (based on anecdotal indications) are really excellent performers, and worth their high cost, is spoiled for me by distaste for the associated white papers and other writings, which I am sufficiently schooled to know are nonsense.

If you've never seen it, you'll want to read this paper by Bill Whitlock of Jensen Transformers:

You won't agree with all of it, and I don't completely agree with everything he says, but he is a noted authority in the field, his products serve both the pro audio and high-end consumer audio markets, and this and some of the other papers on the Jensen site are the closest thing I've seen to writing about cables that is both knowledgeable and balanced (no pun intended).

Re speaker placement/room treatments, etc., Shadorne is very knowledgeable. I suggest that you research his posts in the Speaker category of the forum.

The debate about long/short interconnects/speaker cables is an old one, of course, with many previous threads here presenting differing opinions. My own feeling is that it is probably dependent on the particular components and cables, and on what is most synergistic with the overall sonic character of the system. My initial bias, in most cases, would be to err in the direction of having the speaker cables short, because of the higher currents that are involved and the low impedances that are needed. In my own system, physical placement considerations dictate that both preamp to power amp and power amp to speaker connections be about 6 to 8 feet.

I have no particular thoughts to offer about your cable experiments. Have fun!

And thanks for your good contributions to this thread.

-- Al