I really do not understand this mindset; as has been mentioned countless times, a loudspeaker does not know what it is playing, a series of test tones or a symphony. A well designed full range loudspeaker 20Hz up in a properly treated room will play everything, why would it not?
Keith.
853guy said:
Do you really want to know the answer, Keith?
Do tell.
853guy said:
Sound is not music. Music is not always sound. Does that answer it for you?
(Cue the sound of crickets...)
Hi Keith,
Unfortunately, I’m only going to repeat myself from previous threads and risk exceeding my experience-to-opinion ratio, but I agree with your statement above: “a loudspeaker does not know what it is playing…”. No, it does not, and it cannot. No component can. Sonata or sine wave, no component can or ever will possess the sentience to “know” the difference. Only we do.
Sound is simply the rapid compression and rarefaction in the average density or pressure of air molecules above and below the current atmospheric pressure. As such, it can be produced by any vibrating body, whether a set of vocal cords or a volcano. The ear mechanism (and the body as a whole) discerns these vibrations in the air, which upon arrival at the eardrum and middle ear cause sympathetic vibrations. Of course, sound does not need us to hear it for it to exist; whether we hear it or not depends on its frequency, its amplitude and our proximity to it. It can also be converted into an electrical signal (at which point it ceases to be sound), and that electrical signal then converted back into sound via a transducer. I’m sure you know all this already, right?
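If it helps to make that concrete, here’s a minimal Python sketch - all values assumed purely for illustration (a 440 Hz tone at roughly 77 dB SPL) - that models sound as pressure variation about the atmospheric baseline and “transduces” it into a sampled signal:

import math

P_ATM = 101325.0     # static atmospheric pressure, Pa
FREQ = 440.0         # tone frequency, Hz (concert A)
P_PEAK = 0.2         # peak pressure deviation, Pa (roughly 77 dB SPL)
SAMPLE_RATE = 48000  # samples per second

def pressure_at(t):
    # Instantaneous air pressure: compression and rarefaction
    # above and below the static atmospheric pressure.
    return P_ATM + P_PEAK * math.sin(2 * math.pi * FREQ * t)

def transduce(t):
    # An idealised microphone: output proportional to the deviation
    # from atmospheric pressure. From here on it is no longer sound,
    # just a varying signal - here, a list of numbers.
    return pressure_at(t) - P_ATM

samples = [transduce(n / SAMPLE_RATE) for n in range(48)]
print(samples[:5])

Nothing in those numbers knows whether they encode a sine wave or a sonata - which is rather the point.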
Music, however, is a perceptual phenomenon that is culturally and socially prescribed. Music differs from sound in that in order to be “music” it must be intentional. A volcano may make an impressive sound, but it is not the result of a human being’s intentional gestures. Neither, for that matter, is a whale call, as “musical” as that sound may be (sorry, whale music lovers, take the issue up with musicologists).
These intentional gestures express themselves as pitch and amplitude over time. However, sound need not be present in order for music to exist.
For instance, right now Deftones’ “Hexagram” is playing from their self-titled album. As the first verse ends, mariachi horns begin playing the riff. As the chorus kicks in, Mavis Staples sings a harmony vocal alongside Chino Moreno’s guttural screams.
‘Weird’, you say. ‘I didn’t know that remix existed.’
It doesn’t. All that music has taken place inside my brain just now. If you wanted, we could perform an fMRI scan on my brain and observe the neurobiological process taking place. No sound, but very definitely music.
To take it further, if you had learned to read music, we could go into space, put on space suits, and using only a pencil and paper (or a digital tablet - let’s be 21st Century about this), I could transcribe musical notation for a melody, hand it to you, and you would “hear” the exact melody I had written, in the vast near-vacuum that is outer space, without a single sound being made, even if the melody itself had never before been heard by a single human being.* If you were a conductor, I could have transcribed an entire symphony for orchestra, pipe organ, sitar, fog horn, mixed choir and eight soloists, and you could have “heard” every note, change in tempo and dynamic shift.
That’s the power of perception, and why music exists as a socio-cultural phenomenon: its existence is not necessarily reliant on the presence of sound. Music can be, and most often is, represented by sound, as it is whenever someone sings, plays an instrument, or indeed, plays a musical storage medium via the reproduction mechanism, but it is always a phenomenon taking place within one’s perception.
Back to your point above. No component knows what’s music and what’s not. But we do. We have a portion of the brain containing neural populations specifically dedicated to the processing of music, wholly distinct from other intentional, meaningful sounds, including speech (Norman-Haignere, Kanwisher & McDermott, 2015).
Not only that, we bring to each musical event (whether live or pre-recorded) a unique set of experiences, in which the listener’s perception, ability to discern changes in pitch, amplitude and time,** and general mood all influence the perception of music in ways that have only begun to be explored in the last twenty years. This research is still in a nascent state of development, and is mining data about the human hearing mechanism and its neurobiological processes that testing components on benches with steady-state signals never will.
“A well designed full range loudspeaker 20Hz up in a properly treated room will play everything” is true inasmuch as “everything” will be an electrical signal converted to sound within the context of the transducer’s inherent sensitivity, efficiency, phase coherency and frequency response deviations, and the amplifier’s electrically co-dependent ability to drive it (room-related time, phase, amplitude and frequency anomalies notwithstanding - there is no universally agreed-upon standard for “properly treated”). Whether that component’s resultant sound is music, and matches the listener’s expectations and experience of what is music and what is not, has nothing to do with its independent, objectively measured electrical parameters, no matter how precisely those measurements may be performed.
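For what it’s worth, the electrical half of that sentence really is just arithmetic. Here’s a back-of-envelope Python sketch - free-field approximation only, with assumed example values (an 87 dB/W/m speaker, 50 watts, 3 metres), not a claim about any real loudspeaker or room:

import math

def spl_at_listener(sensitivity_db, watts, distance_m):
    # Idealised free-field estimate: rated sensitivity (dB SPL @ 1 W / 1 m)
    # plus power gain, minus inverse-square distance loss. Real rooms add
    # gain, reflections and the anomalies noted above, all ignored here.
    power_gain = 10 * math.log10(watts)
    distance_loss = 20 * math.log10(distance_m / 1.0)
    return sensitivity_db + power_gain - distance_loss

print(round(spl_at_listener(87.0, 50.0, 3.0), 1))  # ~94.4 dB SPL

Useful numbers, and easy to verify on a bench - but nothing in them tells you whether what arrives at the listener will be perceived as music.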
As always, I reserve the right to be completely and utterly wrong in regards to my current understanding of the available research, and of my limitless capacity for delusion and deception. This is, after all, a rock drummer typing.
Love, hugs, etc,
853guy
*Obviously, at some point, one needs to have perceived the tones in order for them to be consciously attached to the notation and be understood as music. If one were born profoundly deaf, this would not be possible (at least, I’m not aware of any research that suggests otherwise).
**It should be noted that pitch and rhythm have been found to be neurally separable; see Levitin & Tirovolas (2009). Musicians also tend to be more sensitive than non-musicians to co-varied timing and amplitude variations; see Bhatara, Tirovolas, Duan, Levy & Levitin (2011).