PeterA said:
853guy, Thanks for posting this. Have no regrets, I enjoy your writing and always learn something.
Hi Peter,
Thanks, I’m learning too, so it’s a pleasure. Hope life is treating you well.
Detlof said:
The purpose of this post, written quite some time after you wrote yours, well past the 20 minutes, is, I hope, to help you not regret it!
If I am not mistaken, it was the great conductor Sergiu Celibidache who, in his writings about music, stressed the very points you mention as essential, where of course the whole of a performance was more important than the very parts which constitute it. I must read up on it again. I think he used the term tempo for the time factor. Once I have refreshed my memory I will get back to you on this.
In the meantime: Thanks!
Hi Detlof,
Great conductor indeed. He and Günter Wand are my favourite Bruckner conductors. Celibidache did say this: “Most of these ignorant people think I take a gradual pace or a fast tempo because I just happen to want it that way. The tempo is the condition that reduces and unifies the physical vastness that is otherwise present. That is the tempo!” I need way more coffee before I even attempt to think about that.
jkeny said:
Excellent post as usual, 853guy
It's highly likely that the neural pathways that process music developed before the pathways that process speech - after all, nature is full of music & we were exposed to those sounds for a long, long time before speech emerged.
You are also correct in pointing out that the patterned structure of music makes it a far different perceptual task from listening for isolated, specific aspects in the sound stream. The focus on such elemental aspects is IMO misplaced but not surprising in those whose bible is measurements. This is simply illustrated by looking at CMR (comodulation masking release), where a modulated sound which is imperceptibly buried in noise is revealed when another sound is modulated at the same rate. Anybody can hear this here:
https://auditoryneuroscience.com/top...asking-release
The processing aspect of auditory perception is the 90% of the iceberg - it lies underneath the obvious stuff but represents the majority of what's going on in this perception, and, like any parallel processing that has the ability to cross-correlate aspects of the signal through time, it can extract & perceive more from the signal than a simple view of the auditory signals would suggest.
I'm off to read the references you hinted at
Hi jkeny,
Nice to hear from you! Hope you’re warm and dry. Measurements tell us more about the measurer than anything else, right? If we’re going to move forward to a greater understanding of music reproduction, then, like you, I agree we need to look at the only thing in the universe that knows what’s music and what’s not. We need more fMRI. More, much more.
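For anyone who wants to play with your CMR example rather than just listen to the demo at the link above, here’s a rough sketch in Python of how such a stimulus could be generated. To be clear, every parameter in it (the 1 kHz target, the 100 Hz-wide bands, the 10 Hz modulation, the levels) is my own illustrative guess, not taken from the linked demo or any published experiment; it’s only meant to make the comodulated-versus-uncorrelated contrast audible.

```python
# Rough, hypothetical CMR (comodulation masking release) demo.
# All parameters are illustrative assumptions, not from a published study.
import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, sosfiltfilt

FS = 44100                              # sample rate (Hz)
DUR = 2.0                               # duration (s)
T = np.arange(int(FS * DUR)) / FS

def band_noise(center_hz, bandwidth_hz):
    """Narrowband Gaussian noise centred on center_hz."""
    noise = np.random.randn(len(T))
    lo, hi = center_hz - bandwidth_hz / 2, center_hz + bandwidth_hz / 2
    sos = butter(4, [lo, hi], btype="bandpass", fs=FS, output="sos")
    return sosfiltfilt(sos, noise)

def envelope(rate_hz, phase=0.0):
    """Slow raised-cosine amplitude envelope (the 'modulation')."""
    return 0.5 * (1 + np.cos(2 * np.pi * rate_hz * T + phase))

def cmr_stimulus(comodulated=True, tone_level=0.05):
    """1 kHz target tone + modulated on-frequency masker + flanking bands."""
    mod = envelope(10.0)                              # 10 Hz modulation
    tone = tone_level * np.sin(2 * np.pi * 1000 * T)  # target tone
    masker = band_noise(1000, 100) * mod              # on-frequency band
    flankers = np.zeros_like(T)
    for fc in (500, 700, 1400, 1800):                 # flanking bands
        # Comodulated: flankers share the masker's envelope.
        # Uncorrelated: same rate, random phase (a simplification of
        # the independent-modulation condition used in real experiments).
        env = mod if comodulated else envelope(10.0, np.random.uniform(0, 2 * np.pi))
        flankers += band_noise(fc, 100) * env
    mix = tone + masker + flankers
    return (0.9 * mix / np.max(np.abs(mix)) * 32767).astype(np.int16)

if __name__ == "__main__":
    wavfile.write("cmr_comodulated.wav", FS, cmr_stimulus(comodulated=True))
    wavfile.write("cmr_uncorrelated.wav", FS, cmr_stimulus(comodulated=False))
```

If the sketch is right, the tone should pop out noticeably in the comodulated file even though the tone-to-masker ratio is identical in both, which is exactly the point: the brain exploits pattern across frequency and time, not just level.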
Amirm said:
I can tell you that there is not one system I have heard that sounds like that guitar did in the crowded airport terminal. And hearing it in no way or shape allowed me to determine fidelity of any system better.
Hi Amir,
Why would anyone expect a pair of microphones with a predetermined polar pattern and response to ever be able to “hear” the same things as a human being with a highly complex auditory mechanism and neuro-physiological processor walking through an airport? Why would anyone expect a complex inter-dependent electro-acoustical system with a unique phase/frequency and direct/reverberant relationship to ever be able to recreate what a single human being heard?
I don’t think anyone’s arguing that what a human being’s electro-acoustic and neuro-physiological hearing mechanism experiences listening to an instrument played live can either A) be captured in the same way by a single pair of microphones, or B) be recreated in the same way by an audio system. Human beings “listen” - we are always processing the data and are never simply “hearing”. “Listening” is a sentient property of being human. No audio device on Earth will ever possess that ability.
So although hearing is not listening, they are dependent on one another. Hearing is the first part of the process, determined electro-acoustically by the ear. Listening is the second part of the process, determined by the brain. The ear doesn’t know whether it’s listening to music or a train crash. Only the brain does. And as the research continues to make clear, it’s the brain that looks for and responds uniquely to sounds that contain the signature elements of pitch/dynamics/rhythm constantly modulating over time. Listening to music and understanding its aesthetics does impact brain response, and what’s more, continued practice of it does allow discrimination of certain musically-significant elements at a higher level than non-musicians can manage. The research is all there and doesn’t require membership in a self-congratulatory “professional association” to view it.
Our brains have incredible capacity for learning. A conductor is fundamentally a practical aesthetics disciplinarian. His or her practice is to constantly distinguish musically-meaningful events from one another and actively shape them toward a coherent whole through judgement. We, whether active practicing musicians or active music-listening attendees, share the exact same ability to learn to distinguish musically-significant and musically-meaningful variables from one another and apply judgement to them, whether they be a guy and a guitar in an airport, miked and amplified, or a minimalist two-channel DSD track played back over the least distortion-compromised system ever assembled.
Why? Because I say so? Because that’s the way I want it to be? Because my self-esteem is dependent on it? No, because that’s what the research says.
Because my brain, and your brain, and everyone’s brain is hardwired to process music in a way that is completely independent of processing ALL other sounds, whether they be speech, pink noise, brown noise or your child’s breathing in the middle of the night when they’re asleep (man, that’s a great sound). That’s what the research says and it’s been put forward by myself and others on this forum (too*) many times. What’s more, it’s demonstrably possible for all of us, musicians or not, to learn the practical aesthetics of music and become better judges of musically-meaningful variables.
But it doesn’t come from training to discern distortions in codecs for lossy compression. All that will ever do is allow you to better discern distortions in codecs for lossy compression, leading to domain-specific myopia, leading you to believe the same findings must therefore apply to music, which is processed by an entirely different part of the brain, specifically and intentionally looking for pitch, dynamics and rhythm constantly modulating over time. If you want to understand music better, and I believe many of us do, then it's crucial that any individual element always be understood and evaluated in the context of its inter-dependent relationship to the other two, for as soon as we begin to remove and isolate any one of the elements, we destroy that which makes music "music".
My hope is that you'll take the time to look at the research - there's lots of it out there, and it's robustly reviewed. You may choose not to, of course. In any case, there's just as much joy to be found in opening one's mind to a new way of approaching the data, as there is defensiveness to be found in remaining ignorant. At least, that's what my wife's teaching me.
*This is why I try not to post very often. Inevitably, I run out of things to say, so I say the same thing in a different way, which is like banging your head against a brick wall and then deciding to bang your torso against it, but ending up with a lesser result.