Cartridge Loading- Low output M/C


I have a Plinius Koru- Here are ADJUSTABLE LOADS-
47k ohms, 22k ohms, 1k ohms, 470 ohms, 220 ohms, 100 ohms, 47 ohms, 22 ohms

I'm about to buy an Ortofon Cadenza Bronze that recommends loading at 50-200 ohms

Will 47 ohms work? Or should I start out at 100 ohms?

I'm obviously not well versed in this...and would love all the help I can get.

Also is there any advantage to buying a phono cartridge that loads exactly where the manufacturer recommends?

Any and all help would be greatly appreciated.

Thanks in advance.
krelldog

Showing 28 responses by wynpalmer4

As the Ortofon is a fairly low output MC cartridge there are generally two components to the frequency response which interact.
1. The electrical LCR response.
Contrary to what has been said, for LOMC the capacitance, unless it is quite large (on the order of 0.1uF), is essentially irrelevant, and the objective is to set the response to look like a single pole LR system.
See the MC cartridge section in this article.
http://www.hagtech.com/loading.html
In most cases a resistance close to 100 ohms is fine- largely because the coil DC resistance is often a proxy for the inductance (most cartridges have a similar internal magnetic structure) and turns out to be close to 5 ohms and c. 0.5mH (there are exceptions- Miyajima cartridges are a good example). As a result, a 100 ohm load gives you a well damped electrical system (no peaking) with a bandwidth of c. 35kHz. This results in c. 0.3dB attenuation at 10kHz and c. 1.1dB at 20kHz. You could try to flatten the response out a bit by greatly increasing the cap, but it's tricky and usually impossible because of the tip/cantilever/suspension resonance.
To illustrate this point, use the Hagtech calculator and set the capacitance to 200pF and the inductance to 0.5mH- the resonance is at 500kHz and into a 47k load the Q is c.30!  To reduce the resonance to 50kHz the cap must be increased to 20nF and the Q (R/inductive impedance) becomes 300! If the load R is reduced to 100 then the Q is 0.06 for the 200pF cap and 0.6 for the 20nF cap.
So, clearly a low load R reduces the electrical system to a simple, single pole RL form.
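To put numbers on this without the Hagtech calculator, here's a short Python sketch of the two calculations above- the resonance and Q of the coil inductance against the load capacitance, and the corner and attenuation of the damped single-pole RL system. It assumes the generic 0.5mH/5 ohm cartridge model described above; the simple single-pole estimate gives slightly different fractions of a dB than a full simulation, but the same picture.

```python
import math

def resonance(L, C, R_load):
    """Resonant frequency (Hz) and Q of the cartridge inductance L (H)
    against the total shunt capacitance C (F), damped by the load R (ohms)."""
    f0 = 1.0 / (2.0 * math.pi * math.sqrt(L * C))
    q = R_load * math.sqrt(C / L)  # Q of the parallel-damped LCR seen by the coil
    return f0, q

def rl_corner(L, R_load, R_coil):
    """-3dB corner (Hz) of the single pole RL system once the cap is negligible."""
    return (R_load + R_coil) / (2.0 * math.pi * L)

def rl_loss_db(f, f_corner):
    """Attenuation in dB of a single pole at frequency f."""
    return 10.0 * math.log10(1.0 + (f / f_corner) ** 2)

L, Rcoil = 0.5e-3, 5.0                            # generic LOMC: 0.5mH, 5 ohms
f0, q = resonance(L, 200e-12, 47e3)
print(f"200pF into 47k: f0 = {f0/1e3:.0f} kHz, Q = {q:.1f}")   # ~503 kHz, Q ~30
fc = rl_corner(L, 100.0, Rcoil)
print(f"100 ohm load: corner = {fc/1e3:.1f} kHz")              # ~33 kHz
print(f"loss at 10 kHz = {rl_loss_db(10e3, fc):.2f} dB")       # ~0.4 dB
print(f"loss at 20 kHz = {rl_loss_db(20e3, fc):.2f} dB")       # ~1.3 dB
```
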
2. Tip/cantilever/suspension resonance.
Typically this looks like an electrical equivalent LCR resonance plus an additional HF pole. I won't go into the math, but it's not unusual for this to have a resonant peak in the high sonic/supersonic range- 18-40kHz- with an amplitude of 2-6dB at the peak. This is NOT the electrical resonance and cannot be corrected for by changing the loading of the cartridge. It can be modelled as an LCR plus an RC followed by a unity gain buffer, or as a mismatched transmission line.
Essentially the cantilever flexes and resonates when stimulated by the movement of the stylus tip and the response is damped by the suspension/coil dampers.
So, the bottom line is- you’ll never get it perfect. You can either listen and decide what you like, or use a test record to check the sonic response. A couple of other things, the RIAA deemphasis of your amp comes into play, and it’s not unusual for that to be off c. 0.5dB or so over some frequency range, and most amps have restricted frequency responses to reduce the infrasonic and ultrasonic signals.
Also, your room/speaker response is probably poor with errors at least as large as any from the above sources, so unless you’ve characterized and corrected that then listening is probably your best bet.
I may not be a renowned Audio Designer, but I am a somewhat renowned IC designer with credits that include cell phone transceivers and high performance opamps. 
In truth, the issue with phono stage RF has little to do with the capacitance loading- rather it's that many RIAA stages are designed to be non-inverting and lack the additional pole necessary to provide attenuation at ultrasonic frequencies and above. For example:
http://audiokarma.org/forums/index.php?threads/ad797-phono-stage-build-and-help-desk-thread.501186/p...
Where I discuss this very problem as an aside to optimizing an opamp based phono stage.
The non-inverting amplifiers used in an RIAA stage never have a gain below unity unless an additional pole is added. It's hard to see why adding a capacitance of significant value to the input of a phono stage helps when the self resonant frequency of most larger value caps is well below the RF region of interest. Indeed, if that is your concern, then adding several caps of scaled value 1-2 orders of magnitude apart, say 0.1uF//3300pF//100pF as the cartridge load would be the way to go, and who does that- except as an extra pole in a non-inverting RIAA stage.
I'm a believer in fixing the problem where it exists and not by adding an additional parameter to an already over-constrained problem.


Unfortunately accentuated dynamics and resolution all too often mean a really nasty peak at the HF. In my experience, getting a good test record and testing the RIAA response can be a real eye opener.
Most of the differences in response that occur due to changes in load are in the 10k-20kHz range.
Just for interest's sake, I ran some simulations with an "ideal" MC cartridge with a 5mH/5 ohm coil in conjunction with a near ideal active RIAA design with non-inverting amps and the extra pole.

The ideal load- the one that results in the closest compliance to RIAA- is 22nF||110 ohms (+/-0.06dB 20Hz-20kHz). It also has 50dB of attenuation (relative to the ideal RIAA stage) at 1MHz and 95dB at 10MHz. Dropping the load R to 100 ohms reduces the 20kHz output by 0.2dB.
Increasing the capacitance to 0.1uF and reducing the load to 68 ohms is almost as good.
None of these simulations include the mechanical response.
Anyway- as can be seen there is no perfect answer. There are many combinations of load R/C that are pretty well equivalent and you can't even simulate or calculate it to find a decent answer as no MC cartridge maker that I am aware of provides even simple models for their device, even when asked.
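For anyone who wants to reproduce the electrical part of this at home without LTspice, the input network is just a complex voltage divider. A minimal Python sketch, assuming the idealized cartridge (0.5mH/5 ohms- the value I actually simulated with, as corrected later in the thread) and the 110 ohm||22nF load; no RIAA stage or mechanical resonance is modeled, so the numbers won't match the full sim exactly:

```python
import math

def response_db(f, L=0.5e-3, Rs=5.0, Rload=110.0, C=22e-9):
    """Gain in dB of a cartridge model (series Rs + L) driving a
    parallel Rload || C load - a simple complex voltage divider."""
    w = 2 * math.pi * f
    z_load = Rload / (1 + 1j * w * Rload * C)       # R parallel C
    h = z_load / (z_load + Rs + 1j * w * L)
    return 20 * math.log10(abs(h))

# Normalize to the low frequency divider loss, Rload/(Rload+Rs)
ref = response_db(20.0)
for f in (1e3, 10e3, 20e3, 1e6, 10e6):
    print(f"{f:>10.0f} Hz: {response_db(f) - ref:+7.2f} dB")
```

The audio band deviation comes out within a few hundredths of a dB, consistent with the +/-0.06dB figure above, with steep attenuation at RF.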
Whether to use the extra pole is an interesting question.
So, here is some more information, and a repeat of some old information so you can judge for yourself.
1. The additional pole serves not only to reduce the RF but also to reduce the deviation in the RIAA stage noise transfer function at high frequencies. Effectively the noise from the circuit peaks higher than it should at supersonic/low RF frequencies. This might not be a problem, but then again any nonlinearities in the amplification system might serve to mix the noise down to audio bands, so why have it?
However, it does come at a small cost: the extra input pole rolls the RIAA stage off. It makes the stage flat to extended frequencies, but it does not compensate for the roll off that may be introduced by a sub optimal loading on the input stage- basically you are tweaking the Q of the input stage load to flatten the response at 20kHz, and that is hard to do.
2. Don't forget the MC mechanical characteristic may cause a significant peak. Reducing the input stage Q to essentially an LR system adds a single pole low pass characteristic which, combined with the resonance, may produce a response that is, for example, +/-1.5dB 20Hz-20kHz with a dip in the 4-10kHz region, rather than -1.5dB (due to a 20Hz infrasonic roll off) to +3dB (due to the resonance peak).
3. The recording process (particularly analog) imposes restrictions in the frequency response- limiting the HF and LF responses. These restrictions are not set in any standard and are usually due to limitations in the equipment used (Tape recorder and lathe frequency responses and dynamic ranges for examples). Good recording engineers try to minimize the effects, but they still exist.

If you are like me, with HF sensitivity reduced by age so that I really can't hear above 13kHz, while my response below that is still excellent, including down to 20Hz, then making sure that things remain flat to 10kHz or so is what really counts. Thus, over compensating the response to minimize the measured deviation from nominal over the "full" audio band is probably not the best approach.
Again, listening is best, but be careful not to delude yourself. 
Audiophiles (myself included) tend to get seduced by what are essentially deviations from what the real listening experience provides- such as excessive detail, ability to resolve supposed room artifacts etc. etc. 
These effects, in my substantial experience of live performances, just do not exist in a live listening environment, but what really matters are things like instrumental timbre and dynamics (both micro and macro) and that often gets lost in the shuffle, and in the recording.
Yes, others have emphasized this last point in this thread, but it bears repeating.

Hi Al,
so, let’s look at some simulations of the RIAA stage input.
Let’s assume that a minimum capacitance of 100pF has been achieved. As this includes the cartridge winding capacitance, the internal wiring of the pickup arm, the interconnect to the preamp, the wiring to the preamp input and the preamp input capacitance, it is probably optimistically low, and beyond absurd for anything that involves a SUT or a tube amp. Let’s also assume that mechanical resonances do not exist and the only thing that matters is the electrical response.
Let’s also use our ideal MC cartridge with 5mH and 5 ohms series resistance, and let’s start with a 47k load.
The peak is at 663kHz and the magnitude is 28dB, the 10MHz rolloff (relative to 1kHz) is 48dB, the boost at 20kHz relative to 1kHz is 70mdB.
For RFI- conducted RFI is generally c. a few kHz to 30MHz. Conducted RFI can be converted to radiated RFI in the power cords, power supplies etc. Radiated RFI is generally considered to be 30 MHz and above, except where conversion occurs. What domain are we concerned about?
I’ll choose the loss at 10MHz as a metric.
OK, let's increase the capacitance to 1000pF, which is a realistic cap based on the values originally presented as available, and see what happens.
The resonant frequency decreases to 223kHz, the magnitude drops to 26dB, but the 10MHz roll off is now 65dB! The gain at 20kHz is 68mdB.

Which of these would be more benign to RFI while keeping an acceptable audio response? I would argue the 1000pF case.

Now let's change the load R to 1k.
In the 1000pF case the peak is c. 3.5dB, and, of course, the loss at 10MHz remains the same at 65dB. The 20kHz boost has decreased to 53mdB.
In the 100pF case the peak has gone, the gain is now 4mdB, and the -3dB bandwidth is about 400kHz. The 10MHz loss is 48dB.
Again, the 1000pF case is better from an RFI perspective, provided you don’t care about a 49mdB increase in RIAA error at 20kHz.
Now change the R to 100 ohms. For the 100pF case the gain at 20kHz is -1.3dB and the 10MHz loss is 51dB.
For the 1000pF case the 10MHz loss is now 66dB, but the 20kHz loss is slightly less than in the 100pF case, so it appears that a bigger cap might be better- so let's try that.

Increase the cap to 10000pF. The 20kHz loss is now 0.8dB, and the 10MHz loss is 77dB!
Now increase it to 28600pF- the error at 20kHz is now 0mdB, the attenuation at 10MHz is -77dB, and the -3dB point is c. 40kHz which is generally at about the limit that the cartridge manufacturers specify.

So which of these scenarios gives the flattest 20-20kHz ELECTRICAL response AND the highest rejection at 10MHz?
And who knows, a real cartridge might actually have a slightly more ideal audio frequency response with a small loss at 10kHz and a larger one at 20kHz, depending on where the mechanical resonance lies.
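If anyone wants to sanity check the resonance figures in these scenarios, the peak of the input network can be found with a brute force scan over frequency. A Python sketch- again using only the electrical divider and the 0.5mH/5 ohm model actually used in my sims (per the correction later in the thread), so it finds the 47k/100pF peak near 700kHz at around 26dB rather than the exact 663kHz/28dB from the full preamp simulation:

```python
import math

def peak(Rload, C, L=0.5e-3, Rs=5.0):
    """Brute force scan for the worst-case resonant peak (dB re 1kHz) of a
    cartridge (series Rs + L) loaded by Rload || C. Returns (freq, peak_db)."""
    def gain(f):
        w = 2 * math.pi * f
        zl = Rload / (1 + 1j * w * Rload * C)
        return abs(zl / (zl + Rs + 1j * w * L))
    ref = gain(1e3)
    freqs = (10 ** (k / 200) for k in range(600, 1600))  # 1kHz..~100MHz log sweep
    return max(((f, 20 * math.log10(gain(f) / ref)) for f in freqs),
               key=lambda t: t[1])

f0, pk = peak(47e3, 100e-12)
print(f"47k / 100pF: peak of {pk:.1f} dB near {f0/1e3:.0f} kHz")
f0, pk = peak(100.0, 100e-12)
print(f"100 ohm / 100pF: peak of {pk:.2f} dB (essentially none)")
```

Swapping in the other R/C combinations above shows the same trend as the quoted numbers: big load R gives a screaming supersonic peak, small load R kills it.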
By the way, I am the owner of two Miyajima Madake cartridges, one that’s approaching end of life and can’t be retipped and a second one with 12 hours of play, and I’ve been going through this exercise with them both with a SUT/tube amp combo and the AD797 based preamp and it’s proving very hard to reach a conclusion as to what is best...

Wyn




Oh, and I just wanted to say- the Miyajima Madake cartridge designer effectively loads the cartridge at 60 ohms (I know, as we have exchanged emails on the subject) using his in house SUT and amps, and doesn't add any cap in parallel- he prefers the sound. This is not the same as the recommended load, which if I remember correctly is c. 200 ohms//0.68uF, which I believe gives the "best" measured frequency response.
Some people (reviewers) claim that the Madake is best into a 1k or 10k or even a 47k load and I disagree totally- even though we all love the cartridge- so what is truth?
The units that I have are, if I remember correctly, #106 and #261 and they sound a bit different- the new one has a little less bass and a more strident HF and doesn’t measure quite as well as the older one which has c. 350hours of play, so hopefully it will break in and perform like its venerable ancestor does :-)
Why is it nonsense? I know that I started out by seeing what others were loading the Madake with, and it was the huge range of answers posted that, to some extent, triggered the analysis process that I have gone through.
If you view the cartridge as essentially a musical transformer taking what has been transferred to disk and producing a sound that you like then that is one thing, but if the idea is to, somehow, transfer the information from the disk to your ears in as perfect, unmodified, a way as possible then that is something entirely different.
At the very least, you'd think that getting something close to a decent conformance to the de-emphasis characteristic that corresponds to the original recording pre-emphasis would be a decent start. But as should be obvious, that is not so easy to achieve, and some understanding of the limitations and trade offs in the choices that need to be made seems valuable to me- I commend the original requestor for initiating this exchange.
No, I did not design the AD797. That was Scott Wurcer- a colleague at ADI and, incidentally, for whatever it's worth, also an ADI design fellow. However, I know the design quite well.
He and I were colleagues in the opamp group in the 80s. He focused on high performance relatively low frequency opamps such as the AD712 and then the AD797, amongst others.
I focused on high performance high speed amps like the AD843, 845 (at one point an audio darling), 846 (also a transimpedance design with some very interesting design aspects that I gave an ISSCC paper on) etc. etc. mostly using a complementary bipolar process that I helped develop that I believe was also used in the AD797. I also did things like designing the FET based AD736/737 RMS-DC converter and others.
I moved on to more RF, disk drive read/write, GSM, CDMA etc. transceivers, signal processing, PLL and DSP designs. 
Anyway, what the heck does "cartridge energy" actually mean?
It's an electromagnetic transducer that I grant you has a fair degree of non linearity (which is one of the reasons I like the Madake- it's quite low distortion for a cartridge and why I have ML Monti loudspeakers as most loudspeakers have far too much distortion for my taste) so it generates a voltage and has an output impedance.
I can vouch that the Miyajima cartridges respond to ticks and pops just about as you would expect based on their frequency response and the amplitude of the signal generated, so no alternative explanations are necessary. Are you perhaps stating that the increased current generated by the lower resistive load increases distortion? If so, I can say that I believe that it's not true for the Madake, as I've measured IM and harmonic distortion under varying load conditions (ah, the joys of test records). They are really sensitive to cartridge alignment but not, as far as I can tell, to load.
And yes, I'm well aware of the SUT impedance transformations- and I also model them, although imperfectly, in LTspice, which uses calls to set up the parameters.  This can be quite insightful, for example are you aware of the LF response sensitivity to winding inductance? 
One of the "joys" of being an IC designer is the compulsion to measure/model everything! However, once the skills are developed it's relatively easy to do as long as someone else has done the hard work of producing suitable models to use.
Constructing an electrical model for the Madake was fraught with concern as using my own meters to measure the capacitance and inductance was anxiety producing.
Then when I plugged the parameters into the simulation and compared against my measured output I realized that the actual response had precious little to do with the electrical characteristics and everything to do with the mechanical resonances.
And so, the journey began...

Actually, adding an additional pole can be as simple as finding a bias/protection resistor in series with a high impedance node in the signal path then adding a shunt cap to ground. For example, in the AD797 opamp based MC phono stage in AudioKarma there is a second opamp that actually performs the de-emphasis and it has a 390 ohms resistor in series with the non-inverting input that also is in the signal path from the first gain stage. 
The resistor is there mostly to minimize offset in the second stage. Adding a 3300pF cap to ground from the "output" end of the resistor improves the RIAA compliance and also improves the RFI immunity of the second stage. Many LOMC preamps have a non-inverting gain stage followed by a non-inverting RIAA deemphasis stage and the changes can be readily made. 
The second stage has much lower overload margins than the first stage due to the fact that it has boosted signal levels, especially in the bass.
Also, If it is non-inverting, the gain at HF of the second stage is asymptotic to unity so that, without the additional pole, the level of RFI well above the audio band is the same as that at the output of the first amplifier stage.
However, the overload margin is lower and the desired signals are much larger, so the chance of RFI causing a significant amount of intermodulation which can become audible is higher than in the first stage.
Incidentally, passive de-emphasis stages, placed between the amps which are operated in fixed gain, usually are better in this regard as they don't have the implicit HF zero that forces you to add the extra pole.

The idea of driving a cartridge directly into the virtual ground of an amp either just using the amp input impedance (such as a grounded base transistor) or via a resistor is hardly a new one. Some of the earliest solid state phono stages did exactly that, including one that I sold in the UK in the 1970s. I also used a transimpedance op amp that I designed (the AD846) in that mode- using the device as a current conveyor and operating it both closed and open loop as the extraordinarily high impedance "compensation node" could be loaded by a resistor to provide a fixed, and low, transimpedance for the stage. 
I can't say that either approach seemed to be particularly successful.
A good way to view this is to simulate the response of the current from a cartridge model loaded in exactly this way, which basically means reducing the load R to whatever the amp input impedance is and measuring the current through that R- the assumption being that the current through the load R is what enters the ideal current conveyor.
It should be immediately obvious that the signal current is just whatever the voltage is across the resistor divided by the value of the resistor so it's just a scaled version of whatever voltage the original voltage amp saw and there is no difference in the output!
So, all we need do in the original design is to reduce the cartridge load R further from the 100 ohms and see what happens.
Let's return to our initial case- the one with 100pF, not the one that is "optimized" with a much larger cap- clearly as the resistor falls in value the effect of the cap is reduced so it seems like a good place to start.
Remember at 47k load the response is extremely flat in the audio band but has a screaming peak at c. 700kHz.
To get a decent noise performance a bipolar input stage needs to run at a current of at least 1mA. Let's also assume that the input is complementary- NPN and PNP transistors with the emitters connected, both in a common base configuration- then to a first order the input resistance is about 10 ohms.
Under these circumstances the frequency response of the input current or voltage for our 5ohm 5mH cartridge is down c. 13dB at 20kHz! That doesn't seem so sensible to me.
The reason for this should be obvious. The generator output impedance is dominated at HF by the winding inductance, so it increases roughly linearly with frequency beyond the corner where the inductive impedance equals the total series resistance- f = R/(2*pi*L), which for our 15 ohms total (5 ohm coil plus 10 ohm input) and 0.5mH is c. 4.8kHz!
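That roll off is easy to check with the single pole formula. A quick Python sketch, assuming the 0.5mH/5 ohm cartridge and the c. 10 ohm common base input discussed above:

```python
import math

def current_loss_db(f, Rin, L=0.5e-3, Rs=5.0):
    """Rolloff (dB, relative to LF) of the signal current when a cartridge
    (series Rs + L) drives a low impedance input Rin. Single pole at
    f_c = (Rs + Rin) / (2*pi*L), where the inductance takes over."""
    fc = (Rs + Rin) / (2 * math.pi * L)
    return 10 * math.log10(1 + (f / fc) ** 2)

fc = (5.0 + 10.0) / (2 * math.pi * 0.5e-3)
print(f"corner: {fc:.0f} Hz")                                            # ~4.8 kHz
print(f"20kHz loss into 10 ohms: {current_loss_db(20e3, 10.0):.1f} dB")  # ~12.7 dB
```
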
As far as the cartridge is concerned it can't tell whether the load it sees is into a common base configuration with zero dc offset, or the same load into ground. By the way, the DC offset needs to be zero. Running DC current into a cartridge is just not a good idea...
You could reduce the input bias current or add an extra R to make the load resistance go back to 100 ohms- but why is it different from the case with the voltage amp?
Yes, common base stages are different insofar as the stage is "broadbanded" compared to a common emitter stage, and the collector-base capacitance is not multiplied by the collector-base voltage gain (Miller capacitance), but I don't really see why that is a big deal- indeed, if you really care, then just cascode the input stage and reduce the Miller capacitance that way- something that is often done anyway, as it improves the bandwidth/linearity of the input stage.


Direct driving your ESLs? 
I'm sorry, but I don't know what you mean.
I use Rogue M180s to drive my Montis, which is interesting as the Montis are, of course, capacitive above the woofer crossover with an impedance of, if I remember correctly, 0.9 ohms real at 20kHz (the minimum) with the input looking inductive after that, and the Rogues are inductive with an output impedance, again if I remember correctly, of about 0.33 ohms resistive at mid frequencies rising to 0.9ohms inductive for the 4 ohm tap at 20kHz. It was interesting to simulate the response and add models for the interconnect wires which I measured for resistance, capacitance and inductance. The best turned out to be short (less than 2m), multi (3 or more) parallel 12 gauge wires. Fancy cables didn't add anything that I could determine.
Extra inductance or resistance always was worse. Extra capacitance didn't seem to matter- which is hardly a surprise.
Sorry, a typo. It was meant to be 0.5mH. The simulations were all done with 0.5mH- note that the roll off calc for the RL was done with .0005H. I can perform the sims for any set of values you choose, and if you care to do so I would be obliged to you.
I just measured a cartridge I have: it's 10.7uH, 16 ohms DC.
I'll resimulate with that and see what happens.
I measured the inductance- cartridge plus interconnect to phono input on preamp.
The total was 11.8uH. The capacitance was 51pF; including the preamp input it was 205pF. Excluding the input load cap it's 85pF.
I'll use those numbers and see what I get.
By the way, I still have no idea what you mean by energy of the cartridge itself etc.
With the measured cartridge/minimum input cap (85pF) the response with a 47K R has a 29dB resonant peak at 4.3MHz and is -12dB at 10MHz.
With a 1k load it’s 4.2MHz and 9.5dB.
With 250 ohms it’s basically flat to 5MHz, with -14dB at 10MHz.
with 100 ohms it’s 1mdB down at 20kHz, with -17dB at 10MHz.
Let’s change the cap to 205pF, the original total input cap.
-1mdB at 20kHz, -20dB at 10MHz.

Now 1000 pF. 0mdB at 20kHz, -32dB at 10MHz, 0.5dB peak at 1MHz.

Now 10nF. 10mdB at 20kHz, -52dB at 10MHz, 3dB peak at 400kHz.
Now 22nF. 20mdB at 20kHz, -59dB at 10MHz, 2.2dB peak at 270kHz.
Now 47nF. 27mdB at 20kHz, -65dB at 10MHz, 0.8dB peak at 147kHz.
Now 0.1uF. -12mdB at 20kHz, -72dB at 10MHz. No peaking, -3dB at 150kHz.
And the actual load used by the designer, with an estimated cap based on the SUT ratio and a tube input stage that I'm familiar with.
+7mdB at 20kHz, -50dB at 10MHz, 1.7dB peak at 400kHz.
The same, with the recommended load cap of 0.68uF added.
-3dB at 20kHz, -87dB at 10MHz.
Incidentally, by my measurements the cartridge peaks by c. 5dB at 20kHz due to the cantilever resonance, so the extra cap makes the response more symmetric about 0dB, but at the cost of a dip close to 4kHz, which is not a great tradeoff in my opinion, so it’s no wonder the designer prefers to have no cap.

Personally, I’d go for the 100 ohms, 0.1uF load if I was using a non SUT input and 60 ohms no cap if using the SUT, which sounds about right.
Let’s now look at the case of driving into a low impedance current conveyor node.

The lower inductance of course changes the R/L ratio by about a factor of 45, so driving it this way is more plausible.
However, a shunt cap between the input node and ground still would be beneficial as far as RFI is concerned.
For example, with the 10 ohms mentioned before, the loss is 14mdB at 20kHz and -29dB at 10MHz with 85pF, but with 1000pF it is still only 14mdB at 20kHz and -30dB at 10MHz.
With 0.1uF the loss is 17mdB at 20kHz, and at 10MHz it’s 65dB.
Personally I’d go with the 0.1uF cap in this case.





OK. Thanks for defining what you mean. So let's look at power.
I ran a simulation and calculated the power dissipated in the cartridge series R and the load R and plotted what happened to the total power as the load cap is varied.
The voltage and current must be in phase for a resistor so power remains V^2/R. We know that the voltage across the load R is reduced as the cap is increased- after all, that’s the objective- so the load power must fall- but what about the series R? I calculated this and added it to the load power to get the total power.
So, back to the "real" case with a 11.8uH winding inductance, 16 ohm Rcart, 85pF load cap and 100 ohm load. I set the input to 1V rms and calculated the total power in the two resistors as 10*log10((Vrcart^2/16) + (Vrload^2/100)).
The power plot starts at -20.6dB at LF then falls by 3dB at 1.7MHz and by 18dB at 10MHz. No peak is present.
I then changed the cap to 0.1uF.
The power at 1kHz was -20.6dB, it peaks at -13dB at 150kHz , is 3dB off the peak at 87kHz and 320kHz, then falls monotonically by 25dB at 10MHz.
So we’re measuring 1/7 the bandwidth and a bit less than 6x the power in that bandwidth- which is, again, hardly surprising, so the power is more or less constant, but the total power at 10MHz is reduced and the load power at RF is hugely reduced, so isn’t that better?
Is the increase in power dissipation in the cartridge at supersonic but not RF frequencies problematic?
Darn! I wish I had some way of showing plots.
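In lieu of plots, here's the power calculation in a few lines of Python so anyone can re-run it. Same assumed values as above: 1Vrms source behind the 11.8uH/16 ohm cartridge, into a 100 ohm load with either 85pF or 0.1uF:

```python
import math

def total_power_db(f, C, Rcart=16.0, L=11.8e-6, Rload=100.0, Vsrc=1.0):
    """Total power (dB re 1W) dissipated in the coil resistance and the load R
    for a Vsrc rms source behind Rcart + L, driving Rload || C."""
    w = 2 * math.pi * f
    zl = Rload / (1 + 1j * w * Rload * C)
    i_coil = Vsrc / (Rcart + 1j * w * L + zl)       # current through the coil
    v_load = i_coil * zl                            # voltage across Rload || C
    p = abs(i_coil) ** 2 * Rcart + abs(v_load) ** 2 / Rload
    return 10 * math.log10(p)

print(total_power_db(1e3, 85e-12))     # ~ -20.6 dB at LF
print(total_power_db(10e6, 85e-12))    # falls off at RF
print(total_power_db(150e3, 0.1e-6))   # peak region with the 0.1uF cap, ~ -12 dB
```
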

Well, I once was the proud owner of a couple of quads- the original ESL-57 and a pair of ESL-63s that I also pulled apart- their delay line/filter design to drive the annular segments was quite neat. However, I had serious reliability problems and I switched to Martin Logans and I've stayed with them ever since. I have Montis in the main audio room and a pair of venerable Prodigies in the home theater room and I've never had a problem with either of them.
Yes, I am aware of the fundamentals of their operation, but alas not the details, so I really cannot help you out on this. Frankly, I'd be pretty leery of shipping the needed HV to the speaker from an external amp. Visions of lethally shocked dogs, maids, and kids spring to mind. Having said that, I believe that at one time Acoustat built ESLs without the transformer, instead using built-in output-transformer-free tube amps.
What I don’t understand is why any of the purported effects of heavy resistive loading you state could be definitively true- certainly not the effect on tracking, which is demonstrably false based on IM tracking tests that I have incidentally performed as a function of load. Some mechanical impact does occur as a result of electrical load- there is some back EMF necessarily generated by the signal current that affects the mechanical motion- but a quick back of the envelope calculation using Lenz's law and the 10uH cartridge suggests two orders of magnitude difference between the generated signal and the back EMF for a 100 ohm load at 20kHz- certainly not enough to cause tracking issues, I would think. As for the rest, well, take the Madake for instance- the resistive load that people (reviewers) claim is best literally varies by nearly four orders of magnitude! I load mine with 60 ohms (as do many users) and I find that the resolution and dynamics are excellent while maintaining a natural timbre, tonal balance and micro/macro dynamics, without creating the unnatural etched image that many "high resolution" MC cartridges produce.
In any case, I’ll have to research this to see what technical white papers or similar exist on the subject.
Yes, it has been quite interesting and informative for neophytes like myself.
By the way, I constructed a model for the cartridge back EMF using Lenz's law and incorporated it into my simulations.
For those who are interested, the simplest version of the law is V(t)= -LdI/dt.
In this case the parameters can be measured (the LC100A meter from Ebay is a great way to do it) and the back EMF acts to oppose the voltage developed in the coil. The fractional change (attenuation) in the signal voltage is easy to calculate, as it is approximately equal to 2*pi*f*L/Rload at the frequency f of interest. So, it's inversely proportional to the load R and proportional to the frequency.
For example, for a 11.8uH cartridge, with a 100 ohm load the error at 20kHz is c. 1.5%.
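That fractional error formula is trivial to evaluate; a few lines of Python, using the measured 11.8uH coil:

```python
import math

def back_emf_error(f, L, Rload):
    """Approximate fractional signal attenuation from coil back EMF:
    delta ~= 2*pi*f*L / Rload (small-signal estimate via Lenz's law)."""
    return 2 * math.pi * f * L / Rload

err = back_emf_error(20e3, 11.8e-6, 100.0)
print(f"{err * 100:.2f}%")   # ~1.5% at 20 kHz for 11.8uH into 100 ohms
```
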
The model measures the current through the coil and adds a correction of the form -k*s to the source voltage.
The effect can be seen both on the frequency response and on the transient response of the Phono preamp that I'm simulating.
Is anyone interested in this, or the simulation results?
Yes, it really is back EMF- it's calculated using Lenz's law, is a consequence of Faraday's law of induction, and occurs as a result of the change in current through the coil- that's where the frequency dependent term (the derivative) comes from. The term is subtracted from the voltage generated by the cartridge, and in that way it acts to reduce the output voltage and hence the current, so there's a degree of negative feedback. I chose to use the full inductance rather than the MC inductance alone as a way to add a bit of correction for the physical displacement of the stylus/cantilever/coil that occurs as a result of the generated force. I did it that way as I don't believe that true reciprocity occurs and I have no idea what the losses are. The "gain" can be scaled to increase the mechanical feedback- for example, the value of the multiplier for the s term in the feedback could be increased to Icart*1.5. What I actually calculate is
FBvoltage = k*Lcart*Icart*s, where k is the scale factor mentioned above (a default of 1), s=jw as usual, Lcart is the extended inductance, and Icart is the actual cartridge current in the coil, which I measure using a very small R as sucky LTspice doesn't include the right components to let me do it easily.
In any case, yes, the error is small for the Madake, and the effect on the 1kHz square wave versus an ideal RIAA is miniscule. I'm currently running sims with varying load Rs to see what significant effects I see. My initial look suggests that 100 ohms has a faster rise time than 47K, for example- but it's early days.
By the way, higher inductance carts will need proportionally higher load Rs to achieve the same level of non-interaction.
Thanks, that's useful information. I have a good, old, friend that's a hi-fi reviewer in the UK and also is part of the team that produces the "Chasing the Dragon" series of recordings. We have arguments about this all the time as he feels so sure that his A810 with NE5532 opamps galore in the playback path is so superior to LPs and he often cites the limitations described to him by the mastering engineers. 
I assume that the 42kHz cutoff isn't a single pole- and I also assume that it uses some kind of relatively benign analog filter like a Butterworth or Gaussian.
I too have a tape deck- an Otari MX50 that also uses opamps galore, some better than NE5532s, some worse. To my ears it sounds equally good but slightly different from his A810, and it measures imperceptibly differently from his, apart from infinitesimally worse W&F and lower distortion.

I have a long history of listening to tape playback- starting from the time when I was with Decca at their production plant in Malden, England- and to my ears a well recorded, well mastered, and well pressed LP can be just as good.
Well, if the simulations are to be believed the results are quite interesting, if hardly entirely unexpected.
I'm simulating an opamp based phono stage with near perfect 20-20kHz RIAA compliance, with both active FB and passive RIAA implementations. The feedback design has the extra HF pole.
If you load the cartridge with 47k, the input rings at the resonant frequency of the input network (>4MHz) when you hit the RIAA preemphasis network with a 2kHz square wave, and the ringing lasts for tens of microseconds.
When the load is reduced to 1k the ringing is damped and it ends in a few us. 
Into 100 ohms the response is well damped, with a small overshoot, and after 1us it tracks the input perfectly. At 400 ohms there is just a small amount of ringing.
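The trend in those sims matches a simple hand estimate of the input network's Q. Using assumed values of roughly 20uH cartridge inductance and 85pF load capacitance (chosen so the resonance lands near 4MHz, as in the sims), Q is approximately Rload*sqrt(C/L):

```python
import math

L = 20e-6   # assumed cartridge inductance (H)
C = 85e-12  # assumed load capacitance (F)

# Resonant frequency of the L-C input network, ~3.9MHz with these values
f0 = 1 / (2 * math.pi * math.sqrt(L * C))

def q_factor(Rload):
    """Approximate Q of the L-C input network damped by the load R
    (coil resistance neglected)."""
    return Rload * math.sqrt(C / L)

for R in (47e3, 1e3, 400, 100):
    print(f"{R:>7.0f} ohms: Q = {q_factor(R):.2f}")
```

A Q near 100 (the 47k case) rings for many cycles; a Q around 2 (1k) dies out in a few; and a Q well below 1 (100 ohms) is effectively non-ringing- the same ordering the simulations show.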
Adding 0.1uF to the 100 ohm load noticeably slows the edge of the feedback RIAA preamp's output square wave compared to 85pF. Increasing the R to 400 ohms while keeping the 85pF shows the slight ringing on the output response but no reduction in rise time; increasing the R to 47k shows significant near-oscillation at the output of the preamp.
The passive design shows none of these pathologies with the change in load R, and always has a significantly slower, and essentially constant, rise time- so clearly the RIAA de-emphasis producing opamps are reacting to the HF signal.
So, perhaps the answer to the loading question is, no matter how unlikely it seems, that it depends on the architecture of your phono amplifier!
No, you're interpreting what I said about the simulations quite correctly.
It definitely seems that if you want to terminate the cartridge in a high impedance- and it hardly matters whether it's 47kohm or 1Gohm, you'd better be using a passive RIAA stage, or possibly an inverting RIAA stage- I haven't checked that out yet. 
The non-inverting active RIAA can be 50-90dB worse than the passive design in the several-MHz region where the resonances and RFI reside if the extra pole is not included, and 20-60dB worse if it is. Loading the cartridge with 100 ohms gives you most of the difference back, but the passive is still c. 20dB better.
The non-inverting active, however, has superior square wave response to the passive- due to retaining correct-amplitude harmonic content at higher frequencies- under low load R conditions, but who knows if that matters.
I wonder if this correlates with the reports from various reviewers concerning their preferred load impedances.
Back to the cutter- could you possibly please tell me a few more pieces of information- like the inductance you use and the C/R of the head input?
One other thing: setting the input resistance >>100 ohms can have unfortunate effects on the input stage amp if the bandwidth is high. For example, with an AD797 opamp the gain bandwidth product is 110MHz, so at a c. 2.5MHz electrical cartridge resonance the amp still has a gain of 44. If the input stage has a high gain, the total gain at resonance can be 35+32dB=67dB, or a gain of about 2k, so if, for example, there's a transient click which generates a 1mv rms output at resonance, the input amp can produce 2v rms at its output.
This may not be an issue, but it would seem to me that when using a high resistive load an input amp with either a very high overload margin or a deliberately limited bandwidth (<<2.5MHz) would be essential. Once again, preamp architecture seems to be the deciding factor.
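The arithmetic behind that worst case can be sketched as follows (GBW, resonance frequency, and the 35dB input-stage gain as stated above; the 1mV rms click level is illustrative):

```python
import math

def db(x):
    """Voltage gain ratio to decibels."""
    return 20 * math.log10(x)

gbw = 110e6          # AD797 gain-bandwidth product (Hz)
f_res = 2.5e6        # electrical cartridge resonance (Hz)
input_gain_db = 35   # assumed input-stage gain (dB)

opamp_gain = gbw / f_res              # ~44 still available at resonance
total_db = input_gain_db + db(opamp_gain)
v_out = 1e-3 * 10 ** (total_db / 20)  # response to a 1mV rms click at resonance

print(f"gain at resonance: {db(opamp_gain):.0f}dB, total {total_db:.0f}dB")
print(f"output for 1mV rms at resonance: {v_out:.1f}V rms")
```

Two volts rms or so from a millivolt-level transient makes the overload-margin concern concrete.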
Well, the preamp design that I've been referencing has an unweighted 20-20kHz S/N ratio of 66dB at 1v rms output, 0.25mv rms input. Assuming the 0.25mv output from the cartridge is at 5cm/sec, and the max velocity before mistracking is 20cm/sec, the effective dynamic range of the preamp is 78dB, unweighted.
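That dynamic-range figure is just the S/N ratio plus the tracking headroom, e.g.:

```python
import math

snr_db = 66.0  # unweighted 20-20kHz S/N at 0.25mV rms input (dB)
v_ref = 5.0    # nominal velocity for the 0.25mV output (cm/sec)
v_max = 20.0   # max velocity before mistracking (cm/sec)

# 20cm/sec is 4x the reference velocity, i.e. ~12dB of headroom
headroom_db = 20 * math.log10(v_max / v_ref)
dynamic_range = snr_db + headroom_db

print(f"effective dynamic range: {dynamic_range:.0f}dB")  # ~78dB
```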
I don't have any LPs which do not noticeably increase the noise floor once the stylus is dropped, but that could mean that the LP dynamic range could still be >70dB weighted. For the passive design the gain is split c. 200x for each of the two gain stages- which produces c. 50mv rms at the output of the first stage and c.1v rms at the final output due to the attenuation through the passive equalization network. 
The amplifier GBW product provides the necessary roll off to prevent overload, and the output noise is imperceptibly different between the passive and active designs. The active design measures slightly better as far as harmonic distortion is concerned, but we're talking about the difference between 0.001% @ 1kHz, 1v rms, versus 0.001% @ 10kHz, 1v rms, as at lower frequencies noise dominates and I can't measure it with my primitive test equipment.
If I add a 42kHz cutter -3dB point then, not surprisingly, the 20kHz response is down by c. 1dB.
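Assuming a single-pole model for the cutter's 42kHz point (as in the sim), that roughly 1dB figure checks out:

```python
import math

def single_pole_loss_db(f, f_c):
    """Attenuation of a single-pole low-pass at frequency f, corner f_c."""
    return -10 * math.log10(1 + (f / f_c) ** 2)

print(single_pole_loss_db(20e3, 42e3))  # ~-0.9dB at 20kHz
```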
I'm using the opamp preamp for three reasons: I can simulate it quite well; it sounds, actually, pretty good; and it's easy to make changes in topology to examine various aspects of the design- such as the mystery of loading.
Well, IC designers would generally consider an amplifier circuit that has a section with a 35dB resonance to be pathological- that is, basically a problem waiting to happen- unless, of course, the goal really is to produce an oscillator, or at least a marginally stable system. All it takes is a small amount of unwanted feedback due to parasitics- resistance in the ground, inductive or capacitive coupling, anything could do it- from a point where there's enough gain and phase shift back to the resonance, and all sorts of nasty things can happen.

So, even though we can model all of these effects to a degree that the non-practitioner would consider to be near magical (yes, we can do EM simulations for complete circuits that are much more complex than an opamp, and capacitive/parasitic resistance runs are entirely routine), we generally choose, just for good practice, to eliminate any such effects as a matter of priority. Just try getting something like that through a design review.
Seemingly, the practice in audio design is somewhat different.