I have read many explanations of how people use live music as a reference, but since a significant number of people don't use it, I think it would be great to hear from them: what is their reference, and how do they select components and assemble systems?
Hi Micro,
I’ve not voted in the poll because its either/or premise made it difficult for me to answer.
My “reference” is ultimately historical and continually experiential, and therefore still in development. Fundamentally, I’m repeating myself here, having already discussed these issues with members who have since departed for forums more likely to reinforce their preconceived notion that music is simply sound, as determined by low-distortion, “linear” components, a pair of mics and some software at the listening position, with the results evaluated by forced-choice listening tests. I sincerely wish them all the best in their pursuit of whatever those systems end up sounding like.
But if I had to break it down into three things I consider a “reference”, it’d be these:
1) Exposure to and participation in the aesthetic practice of music during the period of most intensive brain development:
I began singing in the cathedral choir at age eight. For the next twenty years I practiced and played in ensembles including musical theatre, pop/rock, metal/post-rock, electronica, classical orchestra, brass band and jazz band. I went on to do session work and eventually produced and engineered indie pop/rock bands. In a 2003 study, Harvard neurologist Gottfried Schlaug found that the brains of adult professional musicians had a larger volume of gray matter than the brains of non-musicians. Schlaug and colleagues also found that after 15 months of musical training in early childhood, structural brain changes associated with motor and auditory improvements begin to appear. In a more recent 2015 Northwestern study published in the Proceedings of the National Academy of Sciences, Adam T. Tierney, Jennifer Krizman and Nina Kraus took electrode recordings from participants at the start of the study and again three years later.
The results showed that participants in the music group (versus participants in the physical activity group) had more rapid maturation of the brain's response to sound, and demonstrated prolonged heightened brain sensitivity to sound details. The authors note that during adolescence, N1 amplitude (a negative deflection at around 100 ms, generated within primary and secondary auditory cortices) increases, whereas P1 amplitude (a positive deflection at around 50 ms, generated within lateral Heschl’s gyrus) declines. This process is not complete until young adulthood, by which time N1 has become the largest component in the cortical response to sound. In adults, music training amplifies the N1 response, and in this study the authors found an increase in N1 amplitude relative to P1 amplitude only in the music group.
Thus, music training may have accelerated cortical development. The change in response consistency from year 1 to year 4 did not correlate with cortical maturation across all participants, suggesting that different mechanisms underlie the development of subcortical response consistency and the maturation of the cortical onset response across adolescence. Although synaptic pruning is a likely candidate for driving response consistency, recruitment of a larger pool of neurons involved in the generation of the cortical onset response may underlie the emergence of N1 in adolescence. (http://eprints.bbk.ac.uk/12456/6/Adolescence_training_revised.pdf)
Does that make me a “Golden Ear”-type listener? No. It’s simply likely that by immersing myself in music at a crucial time in my brain’s development, I altered its plasticity and hardwired it toward certain neurobiological responses when perceiving music. And it’s looking to repeat those responses now that I’m an adult.
2) Producing, recording and mixing music:
I spent much of my twenties and early thirties in recording studios, firstly as a session musician, and then as a producer and engineer. This process led me to a gradual hierarchy of importance in which room selection, placement within the room, instrument selection, instrument tuning, mic placement, mic selection, mic-pre selection and converter selection were organised from most to least important. It’s always been true, and likely will remain so, that all mics capture sound in a manner predetermined by their diaphragm type (large condenser, small condenser, ribbon, dynamic), polar pattern (omni, cardioid, figure eight, etc.), frequency response, sensitivity and equivalent self-noise, and many are exceptional relative to other mics of the same type. But no mic hears the way a human listens. Having been very fortunate to work with a few of the better examples, I can say that while no mic/pre/converter has ever captured the totality of the sound that I, the subject, capture via the electro-mechanical mechanism of my ears and the neurobiological device that is my brain, the recording chain is certainly capable of capturing the artist’s intent.

The former - the sound - is and will always be a preference the producer/engineer shapes via the hierarchy mentioned above, while the latter - the artist’s intentions - are and will always be directly related to the artist’s ability to convey them, wholly separate from the mechanism the producer/engineer employs to capture them. That is, what I capture with the mic (and how it’s processed from there) will ultimately define the sound; what the artist does will ultimately define the performative aesthetics of the music. No mic, pre, converter, compressor, EQ, reverb, software package, or mixing console can “fix” the performative aesthetics the artist alone is responsible for. The sound can certainly be manipulated to be whatever you want it to be (and to a certain extent, the pitch and timing - but not without sacrifice), and what’s more it can be eternally remixed and remastered and upconverted and released on ever-higher resolution media, but the performance is bound by time to that moment the tape or hard drive began to roll, and finished the moment someone pushed “Pause”.
Does this make me a “Golden Ear”-type listener? Nope. It’s simply led me to be able to separate the sound of a recording from the music contained within it. That’s continued to be true irrespective of whether it’s a live performance or a recorded one, whether it’s an analogue or digital release, stereo or mono, historic or contemporary. Sound is not music. Music is not sound. Music begins in the brain of the artist, and is perceived in the brain of the listener. The medium will be an audible mechanical wave of pressure and displacement, and if recorded, varying degrees of voltage and current reproduced as another audible mechanical wave of pressure and displacement. But its beginning and ending will always be a perceptual phenomenon neurobiologically processed in the brain.
3) Rejection of systems/topologies that heighten the sound of the music at the expense of the gestalt of the music:
Kinda obvious, isn’t it?
Does that make me a “Golden Ear”-type? Nope. Just a guy with a lot of preferences and biases that have been forged through the above two processes. I’ve heard a lot of systems - too many, actually - that heighten the sonics of a recording and yet diminish or destroy the contingently dependent relationships of pitch, dynamics and rhythmic time inherent in the aesthetics of music. Now that fMRI testing has begun to map the areas of the brain that process music as distinct from non-musical sounds and speech, I hope that continued research will be able to ascertain whether the neurobiological processes required for passing A/B/X listening tests are the same areas/processes used for distinguishing music from non-musical sounds or different ones, and/or whether such testing produces conflict in the brain as it attempts to process dissimilar tasks. JackD201 touched on this with his “uncanny valley” reference (Post #178), the research for which is here:
https://www.jove.com/video/4375/perceptual-category-processing-uncanny-valley-hypothesis-dimension
This thread is moving too fast for me to keep pace with it, so apologies in advance if I’ve missed the point. Which, goodness knows, is more than likely.