Depending on many system-dependent variables, using an analog interconnect to carry a digital signal, which intentionally introduces what is likely to be a significant impedance mismatch, can by happenstance produce good results in some systems. In past threads here, several members have in fact reported good results doing exactly that.
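To put a rough number on the mismatch: standard transmission-line theory says that at an impedance discontinuity, a fraction of the incident signal is reflected back down the cable, and it is these reflections arriving at the receiver's logic threshold that can degrade edge timing. A minimal sketch, where the 50-ohm figure for a typical analog interconnect is purely an illustrative assumption (analog interconnects generally have no controlled characteristic impedance at all):

```python
# Sketch: reflection coefficient at an impedance discontinuity.
# S/PDIF coax is specified as a 75-ohm system; the interconnect
# impedance used below is an illustrative assumption, not measured data.

def reflection_coefficient(z_load: float, z_source: float) -> float:
    """Fraction of the incident voltage wave reflected at the junction."""
    return (z_load - z_source) / (z_load + z_source)

# 75-ohm S/PDIF source driving a hypothetical ~50-ohm analog interconnect:
gamma = reflection_coefficient(50.0, 75.0)
print(f"Reflection coefficient: {gamma:+.2f}")
# A magnitude of 0.2 means 20% of the wave's amplitude bounces back,
# and the re-reflections can land on the data edges the receiver slices.
```

The sign only indicates reflection polarity; it is the magnitude, and where the reflection lands relative to the signal's transitions (a function of cable length), that determines how much the edges get smeared.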
However, IMO it is poor practice. If the results are sonically pleasing, chances are that the mismatch is introducing jitter that happens to be euphonic in the particular system (at least where the DAC does not provide near-perfect jitter rejection, such as by means of ASRC technology). From this paper
by Steve Nugent of Empirical Audio:
Another interesting thing about audibility of jitter is its ability to mask other sibilance in a system. Sometimes, when the jitter is reduced in a system, other component sibilance becomes obvious and even more objectionable than the original jitter was. Removing the jitter is the right thing to do, however, and then replacing the objectionable component. The end result will be much more enjoyable.
Jitter can even be euphonic in nature if it has the right frequency content. Some audiophiles like the effect of even-order harmonics in tubes, and like tubes, jitter distortion can in some systems "smooth" vocals. Again, the right thing to do is reduce the jitter and replace the objectionable components. It is fairly easy to become convinced that reducing jitter is not necessarily a positive step; however, this is definitely going down the garden path and will ultimately limit your achievement of audio nirvana.
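For a sense of scale on why jitter matters at all at the DAC: the textbook bound on the SNR of a full-scale sine sampled with RMS clock jitter t_j is SNR = -20*log10(2*pi*f*t_j). A minimal sketch, with illustrative jitter values and a 10 kHz test tone chosen purely as assumptions for the example:

```python
import math

# Sketch: SNR ceiling imposed by sampling-clock jitter alone,
# SNR(dB) = -20*log10(2*pi*f*t_j), for a full-scale sine at f Hz.
# The tone frequency and jitter amounts are illustrative assumptions.

def jitter_snr_db(freq_hz: float, jitter_rms_s: float) -> float:
    """Best-case SNR (dB) limited only by RMS sampling jitter."""
    return -20.0 * math.log10(2.0 * math.pi * freq_hz * jitter_rms_s)

for t_j in (100e-12, 1e-9, 10e-9):  # 100 ps, 1 ns, 10 ns RMS
    snr = jitter_snr_db(10_000.0, t_j)
    print(f"{t_j * 1e9:6.1f} ns jitter -> {snr:5.1f} dB SNR at 10 kHz")
```

Note that the bound worsens with signal frequency, which is consistent with the quote's point that jitter's audible character depends on its spectral content and on the program material, not just its raw magnitude.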