What do we hear when we change the direction of a wire?


Douglas Self wrote a devastating article about audio anomalies back in 1988. With all the necessary knowledge and measuring tools, he did not detect any supposedly audible changes in the electrical signal. Self and his colleagues were sure they had proved the absence of anomalies in audio, yet over the past 30 years audio anomalies have not gone anywhere; at the same time, the authority of science in the field of audio has been questioned more and more. It is hard to believe, but science still cannot clearly answer the question of what electricity is and what sound is! (See the article by A.J.Essien.)

For your information: to make sure that no potentially audible changes in the electrical signal occur when we apply any "audio magic" to our gear, no super equipment is needed. "The smallest step-change in amplitude that can be detected by ear is about 0.3dB for a pure tone. In more realistic situations it is 0.5 to 1.0dB. This is about a 10% change." (Harris J.D.) At medium volume, the voltage amplitude at the output of the amplifier is approximately 10 volts, which means that the smallest audible difference in sound becomes noticeable when the output voltage changes by about 1 volt. Such an error is impossible not to notice even with a conventional voltmeter, but Self and his colleagues performed much more accurate measurements, including ones made directly on the music signal using the Baxandall subtraction technique - they found no error even at this highest level.
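To make the Harris arithmetic explicit, here is a minimal sketch in Python (the 10-volt level is just the example figure from above):

    def voltage_ratio(db):
        # Amplitude steps use the 20*log10 convention: ratio = 10**(dB/20)
        return 10 ** (db / 20)

    level = 10.0  # example amplifier output amplitude, volts
    for db in (0.3, 0.5, 1.0):
        r = voltage_ratio(db)
        print(f"{db} dB -> ratio {r:.3f}, a change of {(r - 1) * level:.2f} V on a {level:.0f} V signal")

A 1.0 dB step works out to a ratio of about 1.122, so a 10 V output changes by roughly 1.1 V, which is where the "about 10%" figure comes from.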

As a result, we are faced with an apparently unsolvable problem: those of us who do not hear the sound of wires, relying on the authority of scientists, claim that audio anomalies are BS. However, people who confidently perceive this component of the sound are forced to draw the only other conclusion possible in this situation: the electrical and acoustic signals contain some additional signal(s) that are still unknown to science, and which we perceive with a kind of sixth sense.

If there are no electrical changes in the signal, then there are no acoustic changes either; accordingly, hearing does not participate in the perception of anomalies. What other options can there be?

Regards.
anton_stepichev
@herman
2 files with different checksums are different, 2 files with identical checksums could be different
https://en.wikipedia.org/wiki/Hash_collision


Thank you for the interesting information, I take my hat off! Yes, I have to admit that a checksum is not proof that files are identical, although the probability of a collision is extremely low and this is definitely not our case.

In our case it is easy to exclude any possible mistake: it is enough to open the files in a hex editor and compare their binary code. See the screenshot https://www.backtomusic.ru/wp-content/uploads/2021/07/hex-compare.png - the files are identical (note the empty "Comparison results" window on the right).

This simple experiment can be carried out by anyone. I hope there are no more doubts about the identity of the files?
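For anyone who prefers scripting to a hex editor, the same byte-by-byte check takes a few lines of Python (a sketch; the file names are placeholders):

    import filecmp

    # shallow=False forces a full content comparison rather than
    # a mere size/timestamp check.
    same = filecmp.cmp("original.flac", "optimized.flac", shallow=False)
    print("identical" if same else "different")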
The smallest step-change in amplitude that can be detected by ear is about 0.3dB for a pure tone. In more realistic situations it is 0.5 to 1.0dB. This is about a 10% change. (Harris J.D.)

If you start with a flawed premise, you often end up with a flawed conclusion.


It doesn’t matter what seems different to you, it only matters what the computer treats as the same.
I think your understanding of what is "the same" and what is "different" is too simplistic. Your showing off of your "computer skills" seems a little too obvious.


they have the same checksum. This is an absolutely indisputable proof of their similarity no matter what you call these files and no matter how you created them.

I am assuming you mean identical rather than similar, but in any case a checksum does not prove 2 files are identical

2 files with different checksums are different, 2 files with identical checksums could be different

https://en.wikipedia.org/wiki/Hash_collision
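The distinction is easy to demonstrate in code (a minimal sketch in Python; the file names are placeholders):

    import hashlib

    def sha256sum(path):
        # Stream the file through SHA-256 and return the hex digest.
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    # Different digests prove the files differ. Equal digests make identity
    # overwhelmingly likely (an accidental SHA-256 collision is astronomically
    # improbable), but strictly speaking only a byte-by-byte comparison is proof.
    print(sha256sum("fileA.flac") == sha256sum("fileB.flac"))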

@andy2
4+7 = 3+8? Don't look the same to me.

Your answer only says that you do not understand the basics of computer technology. It doesn't matter what seems different to you, it only matters what the computer treats as the same.

Computers consider files to be the same if they have the same checksum. This rule underpins all digital technology - the operation of software, the Internet, and so on, including the picture on the screen that you are looking at now. If this rule did not work, you would be looking at a black screen right now.

I'm waiting for something smarter from you.
Wire directionality in an alternating current (AC) application? How interesting.

This is a very common misconception. This is not about current, which is also commonly misunderstood as a "flow of electrons." Electrons do not flow down a wire like water through a hose, and they do not flow back and forth in AC. "Alternating current" was a horrible choice of words to describe the phenomenon, and the water-hose analogy is just plain wrong, which leads to many of these common misconceptions.

What's important is the transfer of energy. Energy flows from the source to the preamp, from the preamp to the amp, from the amp to the speaker. This doesn't prove wires sound different when you change the direction, but the transfer of energy is definitely in one direction.


There is nothing "alternating" about which way the energy flows. 


B = A because they have the same checksum
4+7 = 3+8?  Don't look the same to me.

Could you explain how B is identical to A, if B is an optimized version of A? Seems like a contradictory statement.

B = A because they have the same checksum. This is an absolutely indisputable proof of their similarity no matter what you call these files and no matter how you created them.
Sorry anton, you deserve better. The problem is that you are working at a high level.

manueljenkin, click on the user's name and select Message User from the drop-down menu.
First of all you need to calm down.

OK. Imagine that we have an original file (A), an optimized file (B) that is PHYSICALLY identical to A, and an ordinary digital copy of the file (C) that is PHYSICALLY identical to A, all located on the same disk.

Could you explain how B is identical to A, if B is an optimized version of A? Seems like a contradictory statement.

Andy2, there are no contradictions, I’m trying to explain to you a simple thing that is understandable even to a schoolboy. This is getting ridiculous, by God.

You wrote:
Of course if optimized then it may sound different. During playback, the audio file has to be "decompressed" or if you will "processed" by the CPU. Therefore it will have its own digital interference signature which will affect the DAC clock.

OK. Imagine that we have an original file (A), an optimized file (B) that is PHYSICALLY identical to A, and an ordinary digital copy of the file (C) that is PHYSICALLY identical to A, all located on the same disk. Then:

By sound: A ≈ C ≠ B.
By data: A = C = B

At the same time, you claim that A sounds different from B because A is not physically equal to B, since each file has "its own digital interference signature". This in itself is nonsense, but let's assume that it is so.

Then answer the question:

Why then is A ≈ C by sound? In other words, how can normal copying differ from optimization if in both cases we always get files with the same checksum at the output?

Keep in mind that if we do re-optimization (B1, B2, B3...) and re-copying (C1, C2, C3...), we have:

By sound: A ≈ C ≈ C1 ≈ C2 ≈ C3 ≈ Cn ≠ B ≠ B1 ≠ B2 ≠ B3 ≠ Bn

By data: A = C = C1 = C2 = C3 = Cn = B = B1 = B2 = B3 = Bn

So answer the question please, then we’ll see who is schizophrenic and who is just stupid.
^^^ I don't think you know what you're talking about. You seem to be contradicting what you wrote before. I thought you were making sense in your previous post, but then you're saying something different, so it's hard to argue with someone who seems to be schizophrenic.
@andy2, you're talking nonsense. If everything were as you write, each digital copy would sound different in the same way that optimized and non-optimized files differ.
Of course if it's optimized then it may sound different. During playback, the audio file has to be "decompressed" or, if you will, "processed" by the CPU. Therefore it will have its own digital interference signature which will affect the DAC clock.

File formats such as .wav or .flac are all in compressed format. They have to be decompressed by the CPU before the DAC can take the digital data and turn it into analog.
If you're talking about different file optimization, then sure it would have effects on the sound
I'm talking about exact digital copies of files that sound different after optimization with the Junilabs player http://www.junilabs.com/fr/products/audioplayer.html.

The preliminary analysis tells us that there is no physical or material cause-and-effect relationship in the situation with optimizing the sound of audio files
If you're talking about different file optimization, then sure it would have effects on the sound.  


@andy2
BUT if everything is the same: same file, same source, same equipment, then I guess it's hard to see how the sound could be different.

Yes, if you play copies of files from the same HDD, everything is PHYSICALLY the same, all other things being equal.

If you really want to go down that path ... to the nth degree, then anything is possible.

A preliminary logical analysis of a task is not at all "the nth degree"; it is the first thing anyone should do before starting to delve into technical details.

The preliminary analysis tells us that there is no physical or material cause-and-effect relationship in the situation with optimizing the sound of audio files (as well as with changing the direction of wires and some other things). Therefore the cause, the effect or both are beyond the scope of conventional physics - isn't it obvious?
I guess it depends on what you mean by "IDENTICAL".

If two files are identical, using the same compression format (i.e. wav or flac ...), then for the most part they should sound the same.

That is UNLESS each file comes from a different source - for example, one comes from a USB drive and one comes from an HDD - in which case the playback could sound different. Each source may have a different interference signature on the DAC clock, and then the sound could be different.

BUT if everything is the same: same file, same source, same equipment, then I guess it's hard to see how the sound could be different.

If you really want to go down that path ... to the nth degree, then anything is possible.
@Andy2, we have two IDENTICAL digital audio files that sound different no matter in what order or how many times you play them. Do you really mean that one copy of the file by some miracle always knocks down a DAC's clock, and the second does not?
Not sure why the OP made it more complicated than it has to be. Here is the bottom line:

1. The DAC is mixed-signal: it has to operate in the digital and analog domains.
2. The DAC needs a clock to clock out the data. The clock ideally has to be clean, but no real-world clock is clean. It is corrupted by noise from its own analog and digital signals.
3. If the clock is corrupted, then the signal being clocked out is also corrupted in the "time domain". I don't mean the 1's and 0's are corrupted; I mean the timing edges of the output data are corrupted.

So it's as simple as that.
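To put a number on point 3, here is a toy simulation (a sketch with an assumed 1 ns RMS jitter figure, not a model of any particular DAC): sampling a sine at slightly jittered clock instants leaves the bits intact but perturbs the reconstructed waveform.

    import numpy as np

    rng = np.random.default_rng(0)
    fs = 48_000        # sample rate, Hz
    f = 1_000          # test tone, Hz
    n = np.arange(1 << 16)
    jitter_rms = 1e-9  # assumed clock jitter, seconds RMS (illustrative)

    t_ideal = n / fs
    t_jittered = t_ideal + rng.normal(0.0, jitter_rms, n.size)

    # The same sample values "clocked out" at slightly wrong instants:
    err = np.sin(2 * np.pi * f * t_jittered) - np.sin(2 * np.pi * f * t_ideal)
    rms = np.sqrt(np.mean(err ** 2))
    print(f"RMS error: {rms:.2e} of full scale ({20 * np.log10(rms):.0f} dB)")

With these numbers the error lands around -107 dB relative to full scale; whether effects at such levels can be audible is exactly what this thread disputes.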
@manueljenkins-  

      We can hope the OP won't mind a bit of further communication.

      Given your obvious interest in computing: did you know they're working on a system in which spinning electrons could - rather than the typical bit, which only contains a possible 0/1 (binary) - by having their spin stopped at various orientations, enable that quantum bit (or qubit) to exist in more than one state, simultaneously, until detected?

       https://www.newscientist.com/article/dn9768-electron-spin-trick-boosts-quantum-computing/

        and: https://www.sciencedaily.com/releases/2020/03/200302113310.htm 

        FUNNY, that one of my favorite Physicists, Feynman (the subject of the dispute above), postulated precisely that proposition, in 1982:

        https://refubium.fu-berlin.de/bitstream/handle/fub188/1646/04_DissCarolaMeyerChapter1.pdf?sequence=5...

         One of his Rules of Life, mentioned often in his lectures, was, "Never Stop Learning".     That always resonated within me.

         https://harrywhite.org/2020/06/29/richard-feynmans-rules-of-life/

          I've made an attempt, best as possible, to abide by 3 and 4, as well.

          Sadly: I've encountered more MENTAL OHMS on the 'GoN (the levels of which have proven enormous, at times), than anywhere else, in my experience.     Hence: potential, pronounced provocations are often presented, to number 6, on these pages.

           For quite a while there: the only/best place for keeping current with developments in Physics and Quantum Mechanics, was the Library, perusing the newest editions of Encyclopedia Britannica and the science journals, published every month.   ie: the now gone and sorely missed*, 'Science Digest'.       *Well: by me, anyway.

            Now, of course, we've got the Net's profusion of sites and yet even more monthly magazines, through which to keep up, ie: the two above-cited links and:

             https://www.sciencenews.org/article/physics-ligo-mirrors-lasers-quantum-mechanics-limit

                                             Enjoy the journey/never stop learning!

Thank you. I misunderstood the context (millercarbon didn't mention it, and the same group of scientists was into quantum theories when the Bohr atom model was being published).

I have been exploring these areas for a while (there are plenty of fantastic resources online and on YouTube). I am not sure if there is a personal-message function on Audiogon; I haven't been able to spot the option. I am interested in learning further and would like your recommendations for books/articles/blogs in any of these areas: modern sampling and signal processing (wavelet and time-frequency analysis based), bio-instrumentation (I know it's a vast area, any recommendation from any part you are familiar with would be fine), and quantum physics. I have done a fair amount of homework on all of these - for wavelet transforms I took guidance from online resources, likewise on epigenetics - and I am somewhat familiar with Maxwell's four electromagnetism laws, the foundation of the Schrodinger wave equation and the particle-in-a-1D-box problem, and am looking to learn further. Hopefully this isn't too off topic!
@manueljenkins-  

     Refer to the third post of this thread, which I typed before learning the secret to paragraphs:  

https://forum.audiogon.com/discussions/ok-but-does-your-audio-gear-have-rotons-metamaterials

      Perusing some of the following posts, probably wouldn't hurt either.

     Hopefully: a few facts will help with your bewilderment.

     ie: Feynman and Bohr both worked on The Manhattan Project and Feynman was still lecturing on his (subsequent) QED theories, in the mid 60's, some of which I enjoyed.

     https://www.google.com/search?q=manhattan+project+scientists&oq=man&aqs=chrome.0.69i59j69i57...

                                            Higher Education is your friend!
And then within a decade this theory was made obsolete by something even better, which explains more phenomena that the Bohr atom model failed to explain/predict!

A lesson for us all!

Also, I am not sure where you got this folklore from. Dr. Richard Feynman was born in 1918, while the Bohr atom model was proposed in 1913.
Feynman tells the story of all these great physicists sitting around the table at Los Alamos - Bohr, Fermi, et al. - trying to figure out what is going on. The first one, let's say it was Bohr, although I don't remember. Not important. What matters is he nails it. Absolutely nails it. Accounts for everything. Feynman is thinking, wow, that was fast, we are done!

But then they continue around the table, and one after another proposes alternative explanations. Feynman is all, "WTF?!" Not literally, but he really is wondering what is going on. Can they all not see the first answer is the one?

This continues until they all have spoken. At which point the leader says, "Well it is settled then, Bohr’s theory is the one." They all agree. And just like that they are done.

Marvelous story. A lesson for us all.


@manueljenkin
"The influence of noise is just your assumption." Yes, I arrived at this guess after all the other guesses - files being different, defragging, and the other causes you listed - were found to be untrue... ...Regarding the explanations, I have mentioned it multiple times.

Can you just answer my previous question:

@anton_stepichev Please explain, hypothetically, what could be found in the contents of the HDD that would indicate the influence of noise (or whatever) on the sound of the file processed by the program?

Keep in mind that:

- Nothing physically happens to the file.
- There is no code in the program that would analyze the disk surface for noise or anything else, or put data in a certain place on the HDD.
- The program, when repeatedly overwriting a file, makes similar changes to the sound.


You tend to complicate things,

The understatement of the year.
and with this approach it is almost impossible to understand the problem,

Indeed.

Why is it I can't help feeling I've seen this all before?  https://youtu.be/EZSx3zNZOaU?t=46

"The influence of noise is just your assumption". Yes, I arrived at this guess after all other guesses of - files being different, defragging, and other causes you listed were found to be not true. The other system noises causing changes to sq gives me more confidence in this approach. If you do have other suggestions, do let me know, I’ll check that too! I don't think neglecting this case scenario would be a good idea. I donot have the necessary equipment to measure the access noise, else I would have tried that too.

Regarding the explanations, I have mentioned it multiple times. I don't know much about modern HDDs (the head alignment etc. involves a lot of modern control theory, and almost every batch has its own specific firmware), but with flash storage the data is stored as a set of charges within a specific threshold value inside a floating-gate NAND transistor cell. The charge pattern can change the noise profile when accessing. Modern devices are far more complex than the basic structures I have seen in academic resources, and I am searching for leads to check the actual pathway/design (I need to check everything from the firmware up, and I am not sure how successful this will be, considering that most of these details are kept secret).
@manueljenkin Yes, and I noticed a lot of improvements. I have tried a lot of system-level tweaks, even went to the extent of taking a custom minimal command-line Linux distro for audio (again, it sounded better than the familiar GUI systems), and have also explored custom tools in Windows that optimize a lot of the internal system processes.... Even changing the buffer size changes the sound. (This includes all types of buffers, whether in the DAC or in the system.)

I agree; simple structures, schemes and programs sound better to me as well.

In my experience, the lower noise the system gets (lower system activity), the easier any further improvement is to hear compared with a stock system configuration... ...My friend has successfully done AB-X tests on the other computer audio tweaks I mentioned in an earlier paragraph, his network streamers and signal regenerators. I don't think there's anything wrong with my approach.

Nothing wrong, of course, if you are just looking for a better sound. The problem is that it will not help us understand the reasons for what is happening, and I would like to understand them. The influence of noise is just your assumption; so far you have not given a single description of an experiment confirming the specific influence of noise rather than any other cause involved in the tweaks. At the same time, in my previous message I gave proof that the work of the program is in no way related to changes in noise or other analog properties of the media. You have not refuted or confirmed the correctness of this proof; instead you wrote:

I am looking deeper into the intrinsics of the drives (the physical manifestation) since it has an impact that is uncorrelated with the rest of your guesses (it doesn't do defragging, and it cannot pick an optimal area on the disk, if such ever existed, since that is out of the control of even the OS; the controller handles it).

Please explain, hypothetically, what could be found in the contents of the HDD that would indicate the influence of noise (or whatever) on the sound of the file processed by the program?
"If desired, you can come up with a lot of actions that will simulate serious changes in the level of noise and interference of the computer and conduct a simple but effective study of how your DAC reacts to heavy changes in the level and spectrum of interference emitted by it. Have you conducted such experiments?"

Answer: Yes, and I noticed a lot of improvements. I have tried a lot of system-level tweaks, even went to the extent of taking a custom minimal command-line Linux distro for audio (again, it sounded better than the familiar GUI systems), and have also explored custom tools in Windows that optimize a lot of the internal system processes. In my experience, the lower noise the system gets (lower system activity), the easier any further improvement is to hear compared with a stock system configuration. Even changing the buffer size changes the sound. (This includes all types of buffers, whether in the DAC or in the system.)

"In my opinion, it is quite enough to stop clogging your head with noises and charges and start thinking of something else."

As mentioned, I am also trying to set up a proper AB-X test to confirm the changes from this player. My friend has successfully done AB-X tests on the other computer audio tweaks I mentioned in an earlier paragraph, his network streamers and signal regenerators. I don't think there's anything wrong with my approach. I am looking deeper into the intrinsics of the drives (the physical manifestation) since it has an impact that is uncorrelated with the rest of your guesses (it doesn't do defragging, and it cannot pick an optimal area on the disk, if such ever existed, since that is out of the control of even the OS; the controller handles it).
@manueljenkin

The transistors and other analog components in the amplifier and the DAC (the DAC also has lots of gates) can be easily influenced by noise in the data, clock and especially ground lines, and here we are looking at the output as a continuous spectrum and not merely threshold conditions. There is no simple way to correct these errors; actually, there is no simple way even to fully characterize and analyse them. Arbitrary signal generation and fidelity is a very complex area.

You tend to complicate things, and with this approach it is almost impossible to understand the problem because, as you correctly noted, we have a lot of areas in the computer where we can suspect problems. However, you forget that we can logically exclude the vast majority of these suspicions.

The DAC can be easily influenced by noise in the data, clock and especially ground lines

When we compare the sound of two identical files recorded on the same medium, none of the above interference in the computer changes as the player switches from one file to another; that is, this interference does not prevent us from drawing the correct conclusion about the sound of the compared files, all other things being equal.

On the other hand, we can easily make sure that the supposed noises do not affect the sound the way we hear it in our audio examples. To do this, it is enough, during listening, to insert and remove a USB flash drive, connect/disconnect the laptop's PSU, start background hi-res video playback over the Internet or run any power-hungry software, etc. If desired, you can come up with a lot of actions that will simulate serious changes in the level of noise and interference of the computer and conduct a simple but effective study of how your DAC reacts to heavy changes in the level and spectrum of interference emitted by it. Have you conducted such experiments?

To go into exact details of how the write environment affects the access-noise profile would require a deeper understanding of the actual physical characterization of the floating-gate cells used (and most of this is proprietary and not visible to general consumers, myself included)

Here again, it is enough to simply analyze the situation rather than go into deep technical details. What do we have? The program loads a file into memory, presumably does something with it (in fact, nothing physically happens to the data) and writes the file back to disk. At the same time:
- The program does not work "at a low level"; that is, it cannot select a certain "low-noise" place on the disk and write the improved copy of the file there, since the physical location of files and folders on the disk is completely determined by the OS.
- There is no code in the program that would analyze the disk surface for noise or anything else.
- Suppose different cells of the disk "sound" different; it is still unclear how the program, when repeatedly overwriting a file, makes similar changes to the sound (improvements or deteriorations, it does not matter) for each copy. In theory, we should get an unpredictable result with each rewrite - either an improvement, or a deterioration, or none at all - and, as a result, something indefinite in the end.

In my opinion, it is quite enough to stop clogging your head with noises and charges and start thinking of something else.
@millercarbon - Your quote: "Simple and basic reasoning is how we understand something as complicated as human DNA. Simple and basic reasoning is how we understand something as complicated as how we came to have DNA at all."

Either you have no clue about what you have just said (and are assuming the only top-level abstraction you are aware of to be the entirety of the domain), or you somehow find the structural and chemical (and thermal/energy-related) complexities in the process of replication and repair of DNA - something that took ages to be understood to a decent level, that still has areas we don't fully understand, and that students study for years - to be trivial.
@djones (et al)-

         ps:  Would you scoff, were I to tell you that certain birds actually navigate their migratory routes from Scandinavia to Africa and back, every year, via receptors in their eyes, that detect the Earth's (very weak) magnetic field?

        How about: your nose doesn't, "smell" anything, but: actually LISTENS to frequencies generated by molecular bonds?

         ie: the frequencies generated by the chemical bonds of Almonds and Cyanide are identical and that's why they smell the same, though their molecules/chemistry are vastly different.

         btw: Both of those scientific/biological facts were established through the studies of Quantum Mechanics.

                                    https://phys.org/news/2011-01-quantum-robins.html

                                    https://aip.scitation.org/doi/10.1063/1.5084270                                       


                                                    Higher Education is your friend!
@djones-

     "If these new models don't agree with your preconceptions and biases you'll dismiss them as well."

     Do you mean: the way the scientifically uninformed, "dismiss" and scoff at such established truths as Quantum Entanglement; as they incessantly have, though the facts have been in for the past five decades?

      If the world's best inventors, throughout human history, hadn't ignored, "scientists", naysayers and scoffers (such as you): we'd still be living in a relative Stone Age, with respect to technology.

       ie: When the steam locomotive was invented: the day's best, "scientists" claimed man couldn't survive speeds in excess of 20 MPH!

        Interesting, that most of the electrical theories your ilk espouses, came from the same century (the 1800's).
@millercarbon my apologies. English isn't my first language; I'll correct this next time. The context is: his assertions come from a surface-level abstraction of a more complex thing. Every physical manifestation of what we call a bit is a set of charges occupying a specific location, within a certain "threshold".

"We have no errors while copying and playing back files, therefore the noises, the charges etc. are within normal limits." - normal limits for a digital read/write circuit. There is no bit error anywhere here, if that's what you're wondering, even at the dac input interface. But that's not the only way to create sound change. The transistors and other analog components in the amplifier and the dac (dac also has lots of gates) can be easily influenced by noise in the data, clock and especially ground lines and here we are looking at the output in a continuous spectrum and not merely threshold conditions. No simple way to correct these errors, actually no simple way to fully characterize and analyse these errors even. Arbitrary signal generation and fidelity is a very complex area.

"Another question - can you explain how the file sound optimizer can affect the noise, charge or any other ANALOG properties of a HDD for the better?" - it's hard to say how this tool exactly does that since the developer doesn't want to leak much details. To go into exact details of the modes in which the write environment affects the access noise profile, it'll require a deeper understanding into the actual physical characterization of the floating gate cells used (and most of this is proprietary and not visible to general consumers, includes me). I had a few links regarding just how much noise an sd card can pump out into the ground lines, and the swap of an sd card changing reducing it by a large margin. This is because the phy layer design, choice of fabrication methods, read/write circuit design, power design and also the firmware for controlling all these (including throttling and power saving profiles) are all different. Unfortunately the links don't seem to be available now: https://forums.terraonion.com/viewtopic.php?t=1217 . I'll check if I can find the video.
Resort? Simple and basic reasoning is how we understand something as complicated as human DNA. Simple and basic reasoning is how we understand something as complicated as how we came to have DNA at all.

The origin of species by means of natural selection, otherwise known as Darwin’s theory, accounts for all life on Earth, yet is extraordinarily simple: Species produce more offspring than are viable in the environment. Offspring are not identical, they vary in their characteristics. Nature selects for the most successful variants. These pass on their successful variant genes to the next generation.

Simplicity is not a "resort" to be taken when all else fails. Simple and basic reasoning is a virtue.  Indeed, it is the very foundation of the scientific method.
It always is interesting to see how people resort to simple and basic reasoning in trying to understand something as complicated as how humans perceive music.
you seem to be making some assertions which may not really be completely true.


"Seem" to be. "May" not "really" be. "Completely". Awful lot of qualifiers for just one sentence. Would it not be more clear and direct to say, "You ripped my whole story to shreds, and I don't like it one bit"?  

There is nothing wrong, when confronted with a genuine mystery, in admitting it really is a mystery. 

@manueljenkin

Wait. I know about thresholds, error correction and the various difficulties of reading from damaged storage-media cells. All this should not concern us, since when copying and reading a file on a working computer, after all the background correction operations we get either an identical copy or a read/write error. We have no errors while copying and playing back files, therefore the noises, the charges etc. are within normal limits. Moreover, even if we hypothetically assume that some latent digital error occurs, it is not clear where exactly it could occur such that it cannot be detected. Do you have any ideas on this?

Another question - can you explain how the file sound optimizer can affect the noise, charge or any other ANALOG properties of a HDD for the better?
@anton_stepichev you seem to be making some assertions which may not really be completely true. Digital data works on "thresholds" and has some tolerance to the specific distribution levels. Multiple charge distributions can correspond to the same digital data if they fall on the right side of the threshold. So you can have a different charge distribution corresponding to the same data but a different noise pattern when accessing. And this is considering just a very basic memory/storage unit structure. Modern memories are a little more complicated, with self-error-correcting schemes and other machinery - even the compact disc format uses Reed-Solomon codes to recover from certain bit errors - and hence reads can correspond to the same final data even in scenarios where internal noise levels are quite large.

Analog and mixed-signal circuits, however, are sensitive to all these changes, since they don't work on thresholds. I recommend you kindly read the full write-up I made in the previous posts; I have explained the same there.
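A toy illustration of the threshold idea (hypothetical numbers; real NAND cells are multi-level and far more complex):

    THRESHOLD = 0.5  # hypothetical read threshold, normalized charge

    def read_bit(charge):
        # Everything on the same side of the threshold decodes identically.
        return 1 if charge >= THRESHOLD else 0

    # Three different physical charge states, one and the same digital content:
    for charge in (0.55, 0.80, 0.99):
        print(f"charge={charge:.2f} -> bit={read_bit(charge)}")

The sketch only shows why identical bits do not imply identical physical states; whether such sub-threshold differences can audibly reach a DAC is what the thread is arguing about.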
@manueljenkin
06-24-2021 3:25am
The data content is 100% identical if you’re considering digital bits (threshold levels), including the hash values. The sound change is very likely to be from the intrinsic noise / charge distribution patterns inside each storage cell (which can be influenced by the conditions in which the write action happened).
I can't quite understand it. IMO, if the noise and charge distribution somehow affect the integrity of the file, then this should affect the checksum. Conversely, no matter what noise and interference may occur on our hard disk or anywhere else, if the checksums of the files are equal in the end, then any previous interference does not matter, since it does not affect the final result.

To find out that there is no negative impact of noise and charge, you can simply copy the file about ten times, and if the checksum of the last copy does not change, you can be sure that the hard disk, software or anything else on this computer does not cause digital errors. Therefore, copies of files on this computer should sound exactly the same.
But this is not the case. For example, the file will sound different if you copy it to a second hard drive or USB flash drive and play it from there. At the same time, if we copy the file back to the hard disk and check its integrity, we will not find any errors. All this looks more than strange for a theoretically perfect digital sound, doesn't it?
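The ten-copies check described above is easy to script (a sketch; "music.flac" stands in for any real file):

    import hashlib
    import shutil

    def digest(path):
        with open(path, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()

    src = "music.flac"  # placeholder source file
    ref = digest(src)
    for i in range(10):
        dst = f"copy_{i}.flac"
        shutil.copyfile(src, dst)           # each generation copied from the last
        assert digest(dst) == ref, f"digital error at generation {i}"
        src = dst
    print("ten generations of copies, all checksums identical")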
The data content is 100% identical if you’re considering digital bits (threshold levels), including the hash values. The sound change is very likely to be from the intrinsic noise / charge distribution patterns inside each storage cell (which can be influenced by the conditions in which the write action happened).

And I would like to add that this doesn't seem to have anything to do with defragmentation, if anyone is thinking along those lines - I am testing the tool with an SSD and I can hear the improvements, and defragmentation doesn't take a consistent 2 minutes each time either.
@manueljenkin, I see you are a pro in digital - a lot of specialized information, thanks. I wonder how it can help us. You know, no matter how complicated the situation in hardware and software is, if we play two files with the same checksum on the same computer, they should sound identical. But in our case they don't.

So maybe they are somehow not identical? Can you check whether the files really are the same? Your opinion is quite interesting.
@mahjister,
Have you ever tried blessing or praying over wires or components and seeing what happens after?
Science isn't science after all.
Listening to wires is far more sophisticated than listening to music, so why make your life harder?
The developer of this player did answer the questions of another person who asked similar things. I guess he is getting repeated questions and hence not finding time to respond. It loads the file into RAM, does an "optimization" (specifics not described, but it looks like it's there in the code), waits for a couple of minutes and then stores it back to the drive.

Regarding user preference, I actually am not fond of the results of the first optimization - it sounds a bit distant and veiled, even though it is clearer than the original file. But run the same file through the optimization process 3-4x and all the veil is gone, the clarity remains (and actually gets better), and now it is definitely much better than the stock file in all respects. If the preference for original files among the users was in comparison to the first optimization, I recommend giving 3-4x optimization a try. At present it doesn't seem possible to optimize multiple files at once, so it's a cumbersome task. I can hear the differences for sure, and I am working on getting a true double-blind test done (it's not an easy task to do one that doesn't have loopholes).

Regarding why it works, I think it is well within conventional physics; we just need to analyze it deeper than our current FFT analysis methods allow (we are mostly analyzing only a very small subset of test tones at present, and I don't think much in the way of conclusive results can be obtained from this). In a normal storage disk, every bit is stored as a set of charges in a cell (typically a floating-gate NAND cell), and the scenario in which the write action happens can plausibly manifest as differences in the structure of the charges and magnetic fields stored in the cell, such that the next access after optimization may have either less noise or less correlated noise. Also note that RAM and normal storage work in different ways: PC RAM is Dynamic Random Access Memory with constant refreshes (volatile memory), while normal storage is non-volatile and retains data once stored.

Digital circuits work just with thresholds. Above a certain threshold it is 1, below it it is 0 (or vice versa in some implementations), and there are boundary conditions which designers have to work hard around to ensure data integrity is maintained. This is the reason why you don't magically get infinite clock speeds. There's more to it in modern devices (multi-level and triple-level cells, etc.), and a lot of algorithmic machinery goes into it.

There's a lot of hard work in making a reliable working digital system, but it's even harder when you get into analog systems. The problem with analog/mixed-signal systems is that they are not merely working on thresholds. A fair amount of noise may be mostly harmless in a digital system but will cause significant issues in an analog/mixed-signal system, as every single flaw/deviation will cause deviations in the analog circuit (the DAC) and later get amplified in the buffer and amplification stages. So any activity you do has a potential manifestation in the analog circuit, and any task that reduces noise at the source can be beneficial. Grounds act as common points that transfer noise from one place to another. You can claim optical isolation, but it is more fairytale than reality: it has its own jitter and noise footprint, and any attempt to correct that will have its own jitter and noise footprint. If you're thinking of transformer-coupled isolation, transformers have non-linearities (real-world magnetics don't magically follow ideal abstractions) and other leakage phenomena (AC noise leakage over copper Ethernet has been measured and demonstrated). And I would like to add that the improvement to SQ from this player is audible even through the iFi micro iDSD BL, which does have some form of galvanic isolation afaik.

Any circuit can always be tweaked to fake numbers in specific scenarios while not being truly capable in other scenarios, and hence measurement charts can be unreliable. It is impossible to get full test coverage for any analog design at present. I think of the audio measurements generally shown as being like some vague synthetic CPU benchmark tweaked to suggest a cell-phone CPU beats a supercomputer (maybe it does at that specific calculation in that specific optimized software, but not likely for a real-world task that the cell-phone CPU cannot handle - or an emulation layer on the supercomputer running the same code might be faster!).

Yes, there are a massive number of layers, buffers and PHYs present through the chain. And of course software abstractions: each abstraction layer generally means longer, less optimal code, which means more processor and component activity, which means more switching noise - and there's more when you consider speculative execution, etc. These are accounted for by many of these audio programs. Many of them try to work in a lower-level language with fewer abstractions (some are written even in assembly-level code), and hence produce less noise (one general example is using kernel streaming). So the whole thing actually reinforces the benefits of a customized software system.

It is indeed phenomenal that the data-storage access noise seems to pass through all these layers, but if you consider the path, none of them has anything to compensate for the fluctuations, and as long as it is within the thresholds of digital circuit operation it'll be passed through (while analog and mixed-signal systems are picky). It is indeed profound that this distinct improvement is not buried within the noise generated by the rest of the link.

Now, if you were considering issues from other CPU activity during idle tasks, like, say, displaying a wallpaper: it would be a gross approximation to think the CPU generates all pixels at every instant of time and loads them into GPU memory for display - then there would be no purpose for a GPU. The GPU has a parallel pipeline to generate these, has its own architecture with its own noise patterns (which need not be as high as the CPU's for the same task), and sends them via the HDMI port, but it could very well be almost completely decoupled from the CPU data lines going to USB. Do they influence each other? Very likely. Can one completely mask the differences of the other? May or may not be! It's about reducing issues in any area where that is feasible. There's also something known as correlation: certain types of noise correlate more with audio issues (the 8 kHz tizz from 125 us polling if the system priority is too high, or other issues which cause sudden spikes during polling) than others. So it's not quite as direct as things may seem, and of course this area is too profound for us to have any well-established, conclusive correlation metrics yet (and that is unlikely anytime soon; we haven't even figured out human hearing beyond a certain basic abstraction). Also, a lot of the computer tweaks do have modes to remove the image-displaying load from the CPU, or even go fully headless/command-line.

What about the abundance of switching components throughout the motherboard? PC PCB design is generally very high-level stuff (very large multi-layer PCBs), and the power-supply design (regulators etc.) is extremely sophisticated, especially the stages feeding the CPU. A 12 V supply is regulated in multiple stages to ensure there is enough buffering in place to absorb any disruption that changes in power consumption would bring, and it is generally very low noise because it has to run through multiple layers in the CPU. Can they be improved by a better power-supply input? Surely yes, and a better power-supply input can also help the rest of the PCB, but I will have to say they are generally extremely well designed. There have been massive developments on this front in the low-power area, and they have also been successfully expanded to certain areas in audio - the new Burson Audio amps use an SMPS design that sounds very good. You can afford to do this much buffering and filtering because it is power (a specific fixed voltage and current with some transient deviation). But you can't do this at multiple levels with data, which is a switching sequence of pulses, or else you'll lose speed. There are not many ways to fully control the noise on the data line other than controlling your software.

OK, why not a Raspberry Pi instead? Well, just because something is lower power doesn't necessarily mean it is lower noise. The consideration in most budget SBCs is mass production at a very affordable price, and the components used are unlikely to be of a quality comparable to, say, a high-end motherboard, let alone a server motherboard. In fact, you'll likely get worse aberrations even in data integrity (unlikely to be an issue at audio data rates, though) and will need just as many software changes/usability compromises anyway. As mentioned above, the research on components for desktop motherboards is extremely high-level. One can try to customize everything from the ground up, as many companies making digital transports do, but it gets crazy expensive pretty quickly; or one can leverage all the development on desktop PCs and just try to control the few aspects they didn't optimize for with respect to audio and noise (you will have to give up speed and ease of use in that scenario, but with just a reboot into another OS you're back to a fully functional PC that can be used for any other task).