Different manual tube biasing methods


Different manufacturers of tube amps use different manual biasing methods. Broadly generalizing for the sake of examples, VAC and Conrad-Johnson use LEDs, whereas Cary and Audio Research use a voltmeter on test points. Are there any objective reasons why one method might be better than the other (e.g., because LEDs are inherently less accurate than voltmeters)? I've always been curious about this and finally decided to inquire.
mark
What you are looking for is a certain amount of current through the tube. To actually measure that current you would have to break the circuit and insert the meter in series with the tube, which is impractical unless the meter is a permanent part of the amp. So what is typically done is to put a small resistor (say 10 ohms) in series with the tube and measure the voltage across that resistor.

The test points are across this resistor. If the manufacturer feels 75 mA is ideal, then you adjust until you read 0.75 volts (0.075 A x 10 ohms = 0.75 V).
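Just to make the arithmetic concrete, here's a quick sketch in Python. The 10-ohm sense resistor and 75 mA target are the example values above; the helper names are mine, not anything from a particular amp's manual.

```python
# Ohm's-law arithmetic behind the test-point method (sketch).
# Sense resistor value and bias target are the example numbers from the post.

def target_voltage(bias_current_a: float, sense_resistor_ohms: float = 10.0) -> float:
    """Voltage you should read across the test points for a given bias current."""
    return bias_current_a * sense_resistor_ohms

def measured_current(voltage_v: float, sense_resistor_ohms: float = 10.0) -> float:
    """Bias current implied by a voltage reading across the sense resistor."""
    return voltage_v / sense_resistor_ohms

print(target_voltage(0.075))         # 0.75 V for 75 mA through a 10-ohm resistor
print(measured_current(0.80) * 1e3)  # 80.0 mA if you happen to read 0.80 V
```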

The LED scheme uses a comparator circuit with a fixed reference voltage (in this case 0.75 V); when the voltage across the resistor exceeds 0.75 V, the LED lights up. You turn the bias up until the LED lights, then back it off until the LED just goes out.
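If it helps, the comparator logic amounts to this (a rough model, assuming the 0.75 V reference from the example and ignoring any hysteresis a real comparator circuit might have):

```python
# Rough model of the LED/comparator scheme: the LED is lit whenever the
# sense-resistor voltage exceeds a fixed internal reference.

REFERENCE_V = 0.75  # fixed reference voltage inside the amp (example value)

def led_is_lit(sense_voltage_v: float, reference_v: float = REFERENCE_V) -> bool:
    return sense_voltage_v > reference_v

# Turning the bias pot up: the LED comes on once you cross the reference,
# then you back the pot off until it just goes out.
for v in (0.70, 0.74, 0.76, 0.78, 0.75):
    print(f"{v:.2f} V -> LED {'on' if led_is_lit(v) else 'off'}")
```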

Neither is inherently more accurate, but each has its strong points. The test-point scheme is better because you can try different bias points and you don't have the extra cost of the comparator, even though that cost is probably pretty low. The LED scheme is better because it is easier to use, doesn't require a voltmeter, and it's hard to make a mistake. If you don't set up the meter properly with the test-point scheme, you could be way off with the bias.