What Makes a Good RIAA or Line Stage?


Hi Doug,

In a currently running thread on a certain RIAA / Line stage beginning with the letter "E", some very provocative comments were made that are of a general nature.

I fear that this conversation will be lost on the many individuals who have soured on the direction which that particular thread has taken. For the purpose of future searches of this archive, those interested in the "E" thread can click this link.

For the rest of us who are interested in some of the meta concepts involved in RIAA and Line Level circuits, I've kicked this thread off - rather than hijack that other one. In that thread, you (Doug) mused about the differences between your Alaap and Dan's Rhea/Calypso:

... the Alaap has the best power supplies I've heard in any tube preamp. This is (in my admittedly unqualified opinion) a major reason why it outplayed Dan's Rhea/Calypso, which sounded starved at dynamic peaks by comparison.

Knowing only a bit more than you, Doug, I too would bet the farm on Nick's p-s design being "better", but know that "better" is a very open-ended term. I'd love to hear Nick's comments (or Jim Hagerman's - who surfs this forum) on this topic, so I'll instigate a bit with some thoughts of my own. Perhaps we can gain some insight.

----

Power supplies are a lot like automobile engines - you have two basic categories:

1. The low revving, high torque variety, characteristic of the American muscle car and espoused by many s-s designers in the world of audio.

2. The high revving, low torque variety characteristic of double overhead cam, 4 valves per cylinder - typically espoused by the single-ended / horn crowd.

Now, just as in autos, each architecture has its own particular advantage, and we truly have a continuum from one extreme to the other.

Large, high-capacitance supplies (category 1) tend to go on forever, but when they run out of gas, it's a sorry sight. Smaller capacitance supplies (category 2) recharge more quickly - being more responsive to musical transients, but will run out of steam during extended, peak demands.

In my humble opinion, your Alaap convinced Dan to get out his checkbook in part because of the balance that Nick struck between these two competing goals (an elegant balance), but also because of a design philosophy that actually took music into account.

Too many engineers lose sight of music.

Take this as one man's opinion and nothing more, but when I opened the lid on the dual mono p-s chassis of my friend's Aesthetix Io, my eyes popped out. I could scarcely believe the sight of all of those 12AX7 tubes serving as voltage regulators - each of them with its own 3-pin regulator (an LM317 or similar) to run its filament.

Please understand that my mention of the Aesthetix is anecdotal, as there are quite a few highly regarded designs which embody this approach. It's not my intent to single them out; it is rather a data point in the matrix of my experience.

I was pretty much an electronics design newbie at the time, still piecing my understanding together - specifically, that design challenges become exponentially more difficult when you introduce too many variables (parts). Another thing I was in the process of learning is that you can over-filter a power supply.

Too much "muscle" in a power supply (as with people), means too little grace, speed, and flexibility.

If I had the skill that Jim Hagerman, Nick Doshi, or John Atwood have, then my design goal would be the athletic equivalent of a Bruce Lee - nimble, lightning quick and unfazed by any musical passage you could throw at it.

In contrast, many of the designs from the big boys remind me of offensive linemen in the National Football League. They do fine with heavy loads, and that's about it.

One has to wonder why someone would complicate matters to such an extent. Surely they consider the results to be worth it, and many people whom I like and respect feel that designs espousing this philosophy of complexity do achieve musical goals.

I would be the last person to dictate tastes in hi-fi - other than to ask that people focus on the following two considerations:

1. Does this component give me insight into the musical intent of the performer? Does it help me make more "sense" out of things?

2. Will this component help me to enjoy EVERY SINGLE ONE of my recordings, and not just my audiophile recordings?

All other considerations are about sound effects and not music.

Cheers,
Thom @ Galibier

Showing 7 responses by jmaldonado

A Line or RIAA stage could be characterized as "good" if it complies with its primary objective: to amplify the music signal with enough gain that it can be listened to comfortably at the proper volume level, and to do so without introducing obvious anomalies not present in the original recording. I think many of today's products fit that description.

Do you want more than merely "good"? First, the device should be accurate, transmitting all aspects of the reproduction with completeness and neutrality. By accuracy I'm not implying a clinical, analytical, or sterile sound - that is not accuracy; it is just, well, sterility. It also should be able to process even the most dynamic signals without any trace of compression or congestion. Every nuance, every detail, every "emotion" should come out in the right proportions. Nothing should be added or removed. The device should be "transparent" in the sense that listening to it gives you the feeling of being closer to the original event, not of hearing it processed through electronic circuits.

As for the circuit details, there are probably as many different opinions as there are designers out there, but some general desirable characteristics can be extracted. It should have as extended a bandwidth as possible, both ABOVE and BELOW the audio band (officially 20 Hz to 20 kHz). Low noise is absolutely fundamental. Low distortion is highly desirable, because a high amount of it is easily discernible as an added "gloss" on the instruments. Feedback can improve the measured specs, but if not properly done it will also rob life from the music (causing that sterile sound). In the RIAA department, it should provide clean gain for your cartridge (some of them needing up to 70 dB), and be able to handle input signals of at least 10 times the nominal level without overloading. It should decode the RIAA curve with the minimum error possible, or the error will show up as a permanent coloration of your sound. Finally, the unit should be reliable, stable, and have a fast warm-up time.
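To put some numbers on the RIAA decoding requirement, the playback curve is fully defined by three time constants (3180 µs, 318 µs, and 75 µs), and its magnitude at any frequency follows directly from them. Here is a minimal Python sketch of that math (my own illustration, not code from any product mentioned in this thread):

```python
import math

def riaa_deemphasis_db(f_hz, t1=3180e-6, t2=318e-6, t3=75e-6):
    """Ideal RIAA playback (de-emphasis) magnitude in dB:
    poles near 50 Hz and 2122 Hz, zero near 500 Hz."""
    w = 2 * math.pi * f_hz
    num = abs(complex(1, w * t2))
    den = abs(complex(1, w * t1)) * abs(complex(1, w * t3))
    return 20 * math.log10(num / den)

ref = riaa_deemphasis_db(1000)  # normalize so 1 kHz reads 0 dB
for f in (20, 100, 1000, 10000, 20000):
    print(f"{f:6d} Hz: {riaa_deemphasis_db(f) - ref:+7.2f} dB")
```

Relative to 1 kHz, the curve boosts 20 Hz by roughly +19.3 dB and cuts 20 kHz by roughly -19.6 dB - a swing of almost 40 dB across the audio band, which is why even small equalization errors color the sound over several octaves.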

We now have the technology to satisfy all of these requirements simultaneously. If a unit aptly does that, there is a good chance that you will experience all that "emotion" trapped in your recordings.
"RIAA eq deviation no more than 0.05 dB... Why?"

Five reasons:

1. Because the technology exists
2. We can use this technology without any detrimental effect on the audio quality
3. It adds a negligible cost to the product
4. It automatically results in basically perfect channel balance throughout the full audio band
5. It'd be a tribute to Dr. S. Lipshitz (of RIAA eq. fame)

Of course, RIAA eq. is only one of many parameters affecting the reproduction of LP records. I support that. As for channel matching, please see point 4 above.

"What's next? +/- 0.00005 dB equalization?"

That's impossible!
Jose, or anyone else, have you correlated RIAA error to colorations? As in, error of x amount or of y type leads to a z coloration.

Dan, in my experience the term "coloration" involves 2 facets:

On one hand you have a measurable deviation from flat frequency response of X dB, sustained over Y Hertz of bandwidth, which will cause an identifiable sound similar to that produced by the bands of an equalizer. ABX tests have been conducted showing that the audibility threshold gets lower as the bandwidth of the deviation is increased. This simply means that we are more sensitive to this error when it spans several octaves. RIAA stages are particularly vulnerable to this kind of coloration, simply because the RIAA curve is made up of 3 turnover frequencies affecting large portions of the audible band.

On the other hand, coloration is also a characteristic sound caused by a circuit's more intrinsic factors that can't readily be measured with conventional frequency-domain analysis. Nevertheless it manifests itself as a "fingerprint" in the sound (punchy bass, for instance). This type of coloration, at times enjoyable, reveals itself more with the passing of time. This is the main reason why ABX-style tests commonly reveal the first type of coloration, but fail with the second.

Enjoyable or not, decreasing coloration is a good thing in order to preclude our ear from extrapolating the musical signal into a predictable sound.
...fully corrected and stabilised curve, channel identicity. The last two are more than just difficult -- they're horrendously painstaking, boring (think of "trimming" to get the "right" R -- and once you get there, you realise that your next pole is off...), and expensive: anyone ever try to really "match" components?

Greg, this is exactly the same reason that caused Dr. Stanley Lipshitz to express these words 28 years ago: "To begin with, trimming is a difficult procedure, for each component affects at least two of the finally realized time constants of the network. Furthermore, to be able to trim accurately one must have either a precision RIAA circuit for reference or else be able to measure over a dynamic range of >40 dB and over a frequency range of >3 decades to an accuracy of tenths of a decibel. This is not an easy task".

Fortunately enough, nowadays we have DSP technology, which can be used to address precisely this task. Part of my research in the last few years has been to create an effective trimming procedure that allows me to calibrate the RIAA with a resolution of thousandths of a decibel (not kidding). I believe an accurate RIAA is audibly superior, for the same reasons that Mr. Carr mentioned, as have many engineers and enthusiasts who have investigated the matter.
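To see why trimming is so painstaking, it helps to quantify how far a small component error reaches. The following Python sketch (my own illustration, not Jose's actual calibration procedure) perturbs just one of the three RIAA time constants by 2% and shows that the resulting error is spread over several octaves, exactly the kind of wide-bandwidth deviation identified above as most audible:

```python
import math

def riaa_db(f_hz, t1, t2, t3):
    """Ideal RIAA playback magnitude in dB for given time constants."""
    w = 2 * math.pi * f_hz
    num = abs(complex(1, w * t2))
    den = abs(complex(1, w * t1)) * abs(complex(1, w * t3))
    return 20 * math.log10(num / den)

ideal    = (3180e-6, 318e-6, 75e-6)
mistuned = (3180e-6, 318e-6 * 1.02, 75e-6)  # 500 Hz zero off by only 2%

for f in (100, 500, 2000, 10000):
    err = riaa_db(f, *mistuned) - riaa_db(f, *ideal)
    print(f"{f:6d} Hz: {err:+.3f} dB deviation")
```

A single 2% error in one time constant deviates from the ideal curve from the midrange up through the top octave, approaching about 0.17 dB at high frequencies - and in a real passive network each resistor or capacitor shifts at least two time constants at once, so the trim is never this clean.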
The above is true only for the standard Lipshitz method of EQ. If EQ is split across two stages (the way I do it, for example), then the switch is possible.

Quite easily in fact (with the right topology). The 3.18 us turnover point is an extension to the RIAA de-emphasis curve, whose purpose is to compensate for a practical consideration that was neglected when the standard was created: the cutting heads at the mastering facilities were unable to pre-emphasize up to infinity, but instead were only guaranteed up to a certain frequency. The Neumann heads used this turnover point, and they have always been very popular. Many LP masters were cut with these heads.

The 3.18 us turnover causes a phono stage to stop rolling off the treble at a frequency of roughly 50 kHz, which in turn causes an added sensation of "air" in the reproduction. However, this tool should be used carefully because some MC cartridges generate too much ultrasonic energy (ringing) when excited by the clicks and pops in the record.

On a scope display, the ringing looks like decaying high-frequency tones superimposed on the audio signal. Although these tones are beyond hearing, they could potentially cause trouble for the amplifier or speakers.

In my opinion, the solution is to have the option available with a switch, and leave the choice to the listener. This, provided that the manufacturer actually understands that the 3.18 us point is a zero in the transfer function, not a pole - as in some designs Raul and I have seen!
Stephen, exactly. Having the 3.18 us zero in the phono stage simply leaves the response flat after 50 kHz (instead of keeping the roll-off up to infinity). That way it cancels the 50 kHz filter that was not part of the RIAA eq. With all things being equal, the sound would theoretically be closer to that of the master tape (or the live event in case of a direct-to-disc recording).
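The effect of that extra zero is easy to sketch numerically. Extending the same ideal RIAA math (an illustration of the transfer function being discussed, not any specific product's implementation), adding a 3.18 µs zero stops the treble roll-off near 50 kHz:

```python
import math

def riaa_db(f_hz, neumann=False):
    """Ideal RIAA playback magnitude in dB, optionally with
    the 3.18 us "Neumann" zero (corner near 50 kHz)."""
    w = 2 * math.pi * f_hz
    num = abs(complex(1, w * 318e-6))
    if neumann:
        num *= abs(complex(1, w * 3.18e-6))  # extra ZERO, not a pole
    den = abs(complex(1, w * 3180e-6)) * abs(complex(1, w * 75e-6))
    return 20 * math.log10(num / den)

for f in (20000, 50000, 100000):
    diff = riaa_db(f, neumann=True) - riaa_db(f)
    print(f"{f:7d} Hz: {diff:+.2f} dB vs. standard RIAA")
```

At 50 kHz the zero lifts the response by about 3 dB relative to the standard curve, and beyond that the response flattens instead of continuing to fall. Implementing the 3.18 µs point as a pole, as in the mistaken designs mentioned above, would do the opposite and roll the treble off even faster.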
Cartridges work the same way. It may not have been the intention of a cartridge manufacturer to make a balanced source out of it, but that is in fact how they behave, since neither side of the cartridge is 'grounded', i.e. tied to its metal body. In fact many cartridges don't have a metal body! So really the question is more like: "How in the hell can this thing be single-ended?" When looked at that way, you suddenly see why there have to be special grounding considerations (e.g. the third grounding wire) that you would not normally expect to see on a typical single-ended output (like from a tuner).
That's correct. I just wish every audio designer got an obligatory course on noise theory and balanced systems. There would be much less misinterpretation and mythology about this important advancement. Balanced is one of the greatest ideas in audio history.

Although the *output* of the cartridge is going to be the same regardless of balanced or single-ended, there is in fact a noise advantage to the input amplifier, simply because it is differential and makes less noise than a single-ended input amplifier.
Correct again, provided you are talking about RFI, hum and other sorts of external EMI entering through the input cable as a common-mode signal. In MC cartridges, hum rarely disappears completely, but a balanced system will dramatically reduce it compared to a single-ended system. If you mean thermal noise (hiss), the right answer is: it depends on the design of the preamplifier. You can design balanced circuitry with much less noise than a single-ended one. It just depends on your skill and the technology you are using.

Being inductive should have nothing to do with balance. The transducer could be capacitive (touch sensor) or resistive (thermistor). It's just a two-terminal device.
Correct. Strain gauges, which are resistive elements, also work by using the differential principle.

>>there is in fact a noise advantage to the input amplifier, simply because it is differential and makes less noise than a single-ended input amplifier<<

I don't believe this is true. You get double the gain, but same SNR.
Incorrect. Strictly speaking, a balanced circuit will have two input gain cells operating in differential mode. This circuit would produce 3 dB [20log(sqrt(2))] more noise than a single gain cell operating in identical conditions. But with double the gain (6 dB), the result is a net improvement of 3 dB in SNR. However, as said above, it depends on the designer's skill and the technology used. There's no limit on how noiseless a circuit can be (balanced or not), except that imposed by nature.
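That arithmetic can be checked with a quick Monte Carlo sketch in Python (an idealized illustration assuming two identical gain cells with uncorrelated, unit-variance noise - not a model of any particular preamp):

```python
import math
import random

random.seed(0)
N = 200_000
s = 1.0  # per-leg signal amplitude

# Single-ended: one gain cell, one unit-variance noise source.
se = [s + random.gauss(0, 1) for _ in range(N)]

# Balanced: difference of two legs carrying +s and -s, each leg
# contributing its own uncorrelated unit-variance noise.
bal = [(s + random.gauss(0, 1)) - (-s + random.gauss(0, 1)) for _ in range(N)]

def snr_db(samples, signal):
    """SNR in dB: signal power over mean squared deviation from it."""
    noise_power = sum((x - signal) ** 2 for x in samples) / len(samples)
    return 10 * math.log10(signal ** 2 / noise_power)

adv = snr_db(bal, 2 * s) - snr_db(se, s)
print(f"Balanced SNR advantage: {adv:+.2f} dB")
```

The signal doubles (+6 dB) while the two uncorrelated noise sources add in power (+3 dB), leaving the balanced version roughly 3 dB ahead in SNR under these idealized conditions.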

Regards,