Thanks, all, for not letting this devolve! The consensus seems to point toward "desirable" noise and/or distortions (even harmonics, crosstalk, phasing, etc.) creating a difference, or the illusion of one, preferred or not. I'm thinking of all the ways a sound propagates in nature: reverb, numerous random reflections (creating distortions and affecting phase in a frequency-dependent way), density variations in the air, and so on. Indeed, maybe our brains "expect" less-than-ideal waveforms. Perhaps "ideal" is best approximated by something like DDD studio recordings on highly resolving systems (dither being its own issue). While potentially similar but not proof, in digital signal processing some computations converge best with, or even require, some additive digital noise (variations in the digitized stream). Maybe our ear/brain pair expects something similar to be happy. So I'm asking: do fully digitally recorded live performances suffer as much as studio recordings to folks here? Or is the point of divergence at playback? Perhaps a recording made live in a venue (think natural distortions) mitigates the "perfect waveform" idea, even if it is a DDD recording.
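For anyone curious, the "computations that benefit from added noise" point is easy to demonstrate with dither itself. Below is a minimal sketch (my own toy example, not anything from the thread): a quiet 1 kHz sine is quantized to a coarse hypothetical 8-bit grid, once plainly and once with TPDF dither added first. Without dither the quantization error is correlated with the signal, which is what shows up as harmonic distortion; with dither the error decorrelates and behaves like benign hiss.

```python
import numpy as np

fs = 48_000                                   # sample rate (Hz)
t = np.arange(fs) / fs
signal = 0.01 * np.sin(2 * np.pi * 1000 * t)  # quiet 1 kHz tone

step = 2 / 2**8                               # LSB size for 8-bit full scale (+/-1)

def quantize(x, q):
    """Round samples to the nearest multiple of the quantization step q."""
    return np.round(x / q) * q

# Plain (undithered) quantization
plain = quantize(signal, step)

# TPDF dither: sum of two uniform randoms, triangular PDF spanning +/- 1 LSB
rng = np.random.default_rng(0)
dither = (rng.random(len(signal)) + rng.random(len(signal)) - 1) * step
dithered = quantize(signal + dither, step)

# How correlated is each error sequence with the original signal?
err_plain = plain - signal
err_dith = dithered - signal
corr_plain = abs(np.corrcoef(signal, err_plain)[0, 1])
corr_dith = abs(np.corrcoef(signal, err_dith)[0, 1])
print(f"error/signal correlation, undithered: {corr_plain:.3f}")
print(f"error/signal correlation, dithered:   {corr_dith:.3f}")
```

Running this, the undithered error tracks the signal noticeably while the dithered error correlation sits near zero. The trade is a slightly higher noise floor for the removal of signal-dependent distortion, which is exactly the "a little noise makes the result better" flavor the post is gesturing at.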
Strike a tuning fork at any frequency. Play the same pure frequency on a modern electronic device. Conduct a poll as to which sounds subjectively better. It seems absurd, but extrapolate to a guitar, or even a virtuoso performer on a hypothetical "pure note" guitar.
Enjoy what you find most pleasing, but I, too, am curious like the OP. I'm guessing this topic never goes away.