Is performance dependent on grunge resistance or circuit design?
Years of tinkering have convinced me that good audio design is more a function of grunge resistance than of clever circuit design alone, be it analogue or digital, while fully acknowledging the interrelationships between the two.
In particular, there still seems to be a trial-and-error approach in hi-fi design to rejecting mains-borne, RFI/EMI, vibration-borne and airborne grunge.
This is surprising, and it has given rise to a number of specialists addressing these issues. In my experience, the impact of devices by Acoustic Revive (power conditioning and grounding, RFI/EMI, vibration of components and connections, resonance control) as well as Bybee (grunge removal from interconnects, power and speaker cables) gives testimony to OEMs' lack of understanding in addressing these issues. The resultant effects on SQ are far more pronounced than those from replacing individual components in the chain.
As an aside, it is commonly assumed that in digital audio checksumming eliminates any and all of these issues through bit-perfect transmission. Again, my experience shows that nothing could be further from the truth, and that galvanic isolation as well as tight tolerances on cable impedance specifications are required for grunge busting.
If the above is true, then minimising points of ingress, by limiting both the number of connections and open inputs as well as grunge originators in the system (in particular large power supplies), becomes central to putting resistant systems together. Incidentally, it also calls into question the validity of individual component reviews, representing a nasty challenge to the reviewing trade.
Is anyone out there aware of any literature on this subject?