Electrical/mechanical representation of instruments and space


Help, I'm stuck at the juncture of physics, mechanics, electricity, psycho-acoustics, and the magic of music.

I understand that the distinctive sound of a note played by an instrument consists of a fundamental frequency plus a particular combination of overtones in varying amplitudes, and that the combination can be graphed as a particular, nuanced two-dimensional waveform shape. Then you add a second instrument playing, say, a third above the note of the other instrument, and its unique waveform shape represents that instrument's sound. When I'm in the room with both instruments, I hear two instruments because my ear (rather, two ears, separated by the width of my head) can discern that there are two sound sources.

But let's think about recording those sounds with a single microphone. The microphone's diaphragm moves and converts changes in air pressure to an electrical signal. The microphone is hearing a single set of air pressure changes, consisting of a single, combined wave from both instruments. And the air pressure changes occur in two domains, frequency and amplitude (sure, it's a very complicated interaction, but still capable of being graphed in two dimensions). Now we record the sound, converting it to electrical energy, stored in some analog or digital format. Next, we play it back, converting the stored information to electrical and then mechanical energy, manipulating the air pressure in my listening room (let's play it in mono from a single full-range speaker for simplicity).

How can a single waveform, emanating from a single point source, convey the sound of two instruments, maybe even in a convincing 3D space? The speaker conveys amplitude and frequency only, right? So, what is it about amplitude or frequency that carries spatial information for two instruments/sound sources? And of course, that is the simplest example I can design. How does a single mechanical system, transmitting only variations in amplitude and frequency, convey an entire orchestra and choir as separate sound sources, each with its unique tonal character? And then add to that the waveforms of reflected sounds that create a sense of space and position for each of the many sound sources?

77jovian
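
To make 77jovian's single-microphone picture concrete, here is a minimal Python/NumPy sketch. The two "instruments" are idealized as a fundamental plus a few harmonics, and the harmonic amplitudes and frequencies (400 Hz, and roughly a third above at 500 Hz) are invented for illustration; the point is only that the diaphragm ever sees just one summed pressure waveform.

    import numpy as np

    fs = 44100                      # sample rate in Hz
    t = np.arange(0, 1.0, 1 / fs)   # one second of samples

    def instrument(f0, harmonic_amps):
        """Idealized instrument tone: a fundamental at f0 plus its harmonics."""
        return sum(a * np.sin(2 * np.pi * f0 * (k + 1) * t)
                   for k, a in enumerate(harmonic_amps))

    # Invented harmonic recipes -- real instruments are far more complex.
    instrument_a = instrument(400.0, [1.0, 0.5, 0.3, 0.1])   # fundamental at 400 Hz
    instrument_b = instrument(500.0, [0.8, 0.6, 0.2, 0.05])  # roughly a third above

    # A single diaphragm responds to the sum of the two pressure waves: one
    # combined waveform, yet both harmonic series are still present within it.
    mic_signal = instrument_a + instrument_b
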
Um, first, some instruments don't have a lot of energy in the fundamental.

But otherwise, you may be very interested in Head Related Transfer Functions.

Best,
E
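
To unpack that pointer a little: a Head Related Transfer Function describes how your head, ears, and torso filter a sound differently at each eardrum depending on where the source is. The sketch below is only a crude stand-in for the two simplest cues an HRTF encodes, interaural time difference and interaural level difference; a real HRTF is a pair of measured, frequency-dependent filters per source direction, not two constants.

    import numpy as np

    fs = 44100
    t = np.arange(0, 1.0, 1 / fs)
    mono = np.sin(2 * np.pi * 400.0 * t)   # a mono source to be "placed" off to the left

    # Crude stand-ins for two HRTF cues (values invented for illustration):
    itd_samples = int(0.0006 * fs)         # ~0.6 ms interaural time difference
    ild_gain = 0.7                         # far ear hears the source a bit quieter

    left = mono                                        # near ear: unchanged
    right = ild_gain * np.concatenate(                 # far ear: delayed and attenuated
        [np.zeros(itd_samples), mono[:-itd_samples]])

    # Over headphones this two-channel signal images toward the left, even though
    # each channel on its own is still just amplitude varying over time.
    stereo = np.stack([left, right], axis=1)
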
@77jovian You may find the following writeup to be instructive. (Coincidentally, btw, as you had also done, it uses the example of a flute for illustrative purposes):

http://newt.phys.unsw.edu.au/jw/sound.spectrum.html

Note particularly the figure in the section entitled "Spectra and Harmonics," which depicts the spectrum of a note being played by a flute.

To provide context, a continuous pure sine wave at a single frequency (which is something that cannot be generated by a musical instrument) would appear on this graph as a single very thin vertical line, at a point on the horizontal axis corresponding to the frequency of the sine wave.

The left-most vertical line in the graph (at 400 Hz) represents the "fundamental frequency" of the note being played by the flute. The vertical lines to its right represent the harmonics. The raggedy stuff at lower levels represents the broadband components I referred to earlier. Note this statement in the writeup:

... the spectrum is a continuous, non-zero line, so there is acoustic power at virtually all frequencies. In the case of the flute, this is the breathy or windy sound that is an important part of the characteristic sound of the instrument. In these examples, this broad band component in the spectrum is much weaker than the harmonic components. We shall concentrate below on the harmonic components, but the broad band components are important, too.
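
For anyone who wants to poke at this numerically, here is a rough sketch of the kind of spectrum that figure shows. The harmonic amplitudes and the level of the "breathy" broadband component below are invented, not taken from the measurement in the writeup; the point is just the shape: thin peaks at 400, 800, 1200 Hz and so on, standing above a low continuous floor.

    import numpy as np

    fs = 44100
    t = np.arange(0, 1.0, 1 / fs)

    # Flute-like tone: 400 Hz fundamental, a few harmonics, weak broadband "breath".
    # All levels here are invented for illustration.
    f0 = 400.0
    harmonic_amps = [1.0, 0.5, 0.3, 0.1]
    tone = sum(a * np.sin(2 * np.pi * f0 * (k + 1) * t)
               for k, a in enumerate(harmonic_amps))
    signal = tone + 0.02 * np.random.randn(len(t))

    # Magnitude spectrum: peaks at the fundamental and harmonics over a noise floor.
    spectrum = np.abs(np.fft.rfft(signal)) / len(signal)
    freqs = np.fft.rfftfreq(len(signal), 1 / fs)
    for k in range(len(harmonic_amps)):
        idx = np.argmin(np.abs(freqs - f0 * (k + 1)))
        print(f"{freqs[idx]:6.0f} Hz  level {spectrum[idx]:.3f}")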

Now if a second instrument were playing at the same time, the combined spectrum of the two sounds at a given instant would look like what is shown in the figure for the flute, plus a number of additional vertical lines corresponding to the fundamental and harmonics of the second instrument, with an additional broadband component that is generated by the second instrument summed in. ("Summed" in this case refers to something more complex than simple addition, since timing and phase angles are involved; perhaps "combined" would be a better choice of words). And since, when we hear those two instruments in person, our hearing mechanisms can interpret that complex spectrum as coming from two different instruments, to the extent that information is captured, preserved, and reproduced accurately in the recording and playback processes, our hearing mechanisms will do the same when we hear it in our listening room.
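
Continuing the same kind of sketch with a second, invented instrument at 500 Hz: combine the two tones in the time domain (the single waveform the microphone and the mono speaker deal with), and the spectrum of that one waveform still contains both harmonic series as separate peaks. That is the information a hearing mechanism can use, provided the recording and playback chain preserves it.

    import numpy as np

    fs = 44100
    t = np.arange(0, 1.0, 1 / fs)

    def tone(f0, amps):
        # Idealized instrument: fundamental plus harmonics (amplitudes invented).
        return sum(a * np.sin(2 * np.pi * f0 * (k + 1) * t)
                   for k, a in enumerate(amps))

    # One combined waveform, as seen by a single microphone or mono speaker.
    combined = tone(400.0, [1.0, 0.5, 0.3]) + tone(500.0, [0.8, 0.4, 0.2])

    # Yet its spectrum still shows BOTH harmonic series as distinct peaks:
    # near 400/800/1200 Hz and near 500/1000/1500 Hz.
    spectrum = np.abs(np.fft.rfft(combined)) / len(combined)
    freqs = np.fft.rfftfreq(len(combined), 1 / fs)
    print(freqs[spectrum > 0.05])   # -> [ 400.  500.  800. 1000. 1200. 1500.]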

Best regards,
-- Al

So, I’ll ask again, how is the audio signal in cables and electronics affected by external forces such as RF and vibration as well as by better cables? And what IS the audio signal? Anybody! Is it electrons? Photons? Current? Voltage? An electromagnetic wave? Something else? That’s really what the OP is talking about. Don’t be shy!

As per Geoff’s questions:

Sometimes I think of a given system's hardware as a ‘doorway’ that the signal tries to go through unscathed, but is compromised along the way by at least two things: noise and distortion. Sometimes I wonder what it might be like if we could, say, just flip a switch and reduce All noise that could actually affect the signal, regardless of source, by an infinite amount and listen to the result. I imagine everybody, if they could do it, would be blown away not only by the quality of reproduction, but also struck, I think, by how Everything sounds the Same (no more obvious differences anymore between brands, price ranges, tubes and ss, digital and analog, wiring, fuses, directionality of same, etc, etc)... I’m thinking it might all sound amazing, and all of it sound overwhelmingly similar in doing so... far more like the real thing and all that.

I just know I can’t prove it, lol. But, the thought does keep coming back to me on occasion.

Q: Can the successful implementation of mega-expensive speaker wires into a system, for example, be thought of as simply a happy ‘random accident’ of the interplay between system, wires and (most importantly here) *noise*? IOW, could eliminating all noise mean the elimination of any need by anyone for pricey wires at all? AND, can these same cables then be thought of as justified only in systems **that are dominated by noise** (currently all systems)?

AFAIC, you’re asking the right questions, I just don’t have the right answers.