Do Members use Live Music as a Reference?

  • I use live music as a reference.
    Votes: 50 (73.5%)
  • I do not use live music as a reference.
    Votes: 18 (26.5%)
  • Total voters: 68
Indeed. It is dubious to rely on a traditional or 'classical' definition of music. Also, some ambient music, e.g. with slowly modulating electronic drones, has barely anything that qualifies as rhythm on a 'normal' time scale. You can also use speech from a radio and lay imitative/modulating speech over it as a musical process, as happens in some avant-garde music. Rhythm in that case becomes a very broad or opaque concept.

Music in its most general idea is organization of sounds in time. Nothing more, nothing less.

I respectfully beg to differ. Music in its most general idea is an organisation of sounds within rhythm, pitch and time. Not quite the same thing.

If language and music are not shared processes, and are in fact wholly distinct and autonomous functions within the brain, is it any wonder that when we use one to describe the other we fall short?
 
P.S. Do you have any links to research in which there are rhythmic sounds in nature which our brains mistake for music? I’m very curious, as I’m not currently aware of any.

Thanks again for your new input. I know of no links, of course, but if you are interested in an anecdote, here goes: while camping alone in the wild in my younger years and listening to the rhythm of the sea lapping at the shore, as if a huge beast were calmly breathing, I started to compose music in my brain, using the primal pitch and rhythm of what I heard. I did not intend to, or listen especially for it - it just overcame me. At another time, in another place, the rhythmic hum and pitch of a group of insects had the same effect on me, and I suddenly started to use this input and do some variations on it. Both times it was a most enjoyable and satisfying experience. There have been more such incidents, always with rhythm as the trigger, but the two above were to me the most impressive.
 
Also, some ambient music, e.g. with slowly modulating electronic drones, has barely anything that qualifies as rhythm on a 'normal' time scale.

Yeah, baby - I love that stuff.

 
Hi jkeny,

When we discuss the art form socio-culturally prescribed to be 'music', yes, I agree, defining what it is and is not is problematic. Kanwisher, McDermott and Norman-Haignere's research is the most comprehensive so far in making an attempt to define it, but even they admitted "It's difficult to come up with a dictionary definition… music is best defined by example." That is, they simply played subjects 165 of the most commonly heard sounds and let the data reveal itself. In this case, the study was hypothesis-neutral, which is, to me, more revealing of what sounds our brains consider to be music and what they do not.
Ah, no, this isn't correct then - one can't say that certain neuron clusters in the brain process certain types of sound & then use this to categorise those sounds as music, or take the leap in logic that this then defines what music is - it's circular logic, I believe.
Of the six basic response patterns the brain used to categorise incoming sound, four responded to general physical properties, the fifth to speech, but the sixth not only responded specifically to music, it responded to every musical clip they played, regardless of whether it was a solo drummer, whistling, pop, rock, Bach, melodic or rhythmic. This is in distinction to English speech, foreign speech, non-speech vocal, animal vocal, human non-vocal, animal non-vocal, nature, mechanical and environmental sounds, some of which were rhythmic in and of themselves (walking, breathing, a ringtone, a cellphone vibrating, water dripping, a phone ringing, sirens and alarm clocks, for instance). Speech, long thought to be the dominant neural process in which music is perceived, is much more homogeneously contained, and did not trigger a neural response in the same way music did. That the brain gives specialised treatment to music in the same way it does speech - with no crossover - is in itself certainly a breakthrough.
Again, I haven't read the paper so can't read the nuances in the researchers' categorisation of music, but you see where I'm coming from? In a way it seems like the definition of "competently designed audio device" - what is it? - it's a phrase used, pretending to be a definition, that has no reference point - it can be made to mean anything the user wishes it to mean - so anything which sounds different is therefore "not competently designed". I know the paper must be more rigorous than this.

And yes, voxel decomposition was necessary because of the limits of fMRI analysis. However, they did search for components with non-Gaussian weight distributions in order to create the algorithm ("negentropy" and Gamma-distributed) and although both methods explained reliable voxel response variance, they went with the first one, which did not depend on a specific parameterisation of the data and was robust to the specific statistical criterion used. Until we have greater pools of data to analyse, or indeed have fMRI scanners that can move beyond the current limits of their resolution, it's the best we've got.
As I said, I too hope that it proves to be as robust as it appears based on the reports about the paper that I've read, but (isn't there always one) just because they found statistical models which best explained the data, & this suggested that there were only six mechanisms, doesn't necessarily mean that this is the way the brain works - it just means that, of the mathematical approaches taken, this is the best-fit data model. Understand, I'm playing devil's advocate here & not trying to deny the research, just trying to pick holes in it to verify the robustness of the conclusions reported. Remember: the map isn't the territory.

But as to whether it constitutes a ‘truth’… Well, it’s probably easier to define ‘music’, and much less philosophically problematic.
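For intuition about the decomposition being debated above, here is a toy numpy sketch of the general approach (my own illustration, not the authors' code or data): build a sounds-by-voxels response matrix from a handful of non-Gaussian components, whiten it, then rotate to maximise the non-Gaussianity of the recovered profiles, FastICA-style. The matrix sizes, Gamma distributions and tanh contrast are all stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sounds, n_voxels, n_comp = 165, 200, 6

# Toy data: six non-Gaussian (Gamma-distributed) response profiles,
# mixed into voxel responses with a little noise.
S = rng.gamma(2.0, 1.0, size=(n_sounds, n_comp))       # true profiles
A = rng.gamma(1.5, 1.0, size=(n_comp, n_voxels))       # voxel weights
X = S @ A + 0.05 * rng.standard_normal((n_sounds, n_voxels))

# Whiten with PCA, then rotate to maximise non-Gaussianity
# (symmetric FastICA with a tanh contrast).
Xc = X - X.mean(axis=0)
U, _, _ = np.linalg.svd(Xc, full_matrices=False)
Z = U[:, :n_comp] * np.sqrt(n_sounds)                  # whitened: Z.T @ Z / n = I

W = rng.standard_normal((n_comp, n_comp))
for _ in range(200):
    Y = Z @ W.T                                        # current component estimates
    g, g_prime = np.tanh(Y), 1.0 - np.tanh(Y) ** 2
    W = (g.T @ Z) / n_sounds - np.diag(g_prime.mean(axis=0)) @ W
    u, _, vt = np.linalg.svd(W)                        # symmetric decorrelation
    W = u @ vt

recovered = Z @ W.T                                    # estimated response profiles
print(recovered.shape)                                 # (165, 6)
```

Whether the recovered profiles actually match the planted ones is exactly the model-fit-versus-mechanism question raised above; on real fMRI data the answer is far less clean than on toy data.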

P.S. Do you have any links to research in which there are rhythmic sounds in nature which our brains mistake for music? I’m very curious, as I’m not currently aware of any.
I'm not sure I understand the question, as I don't know the brain's definition of "music".
 
Thanks again for your new input. I know of no links, of course, but if you are interested in an anecdote, here goes: while camping alone in the wild in my younger years and listening to the rhythm of the sea lapping at the shore, as if a huge beast were calmly breathing, I started to compose music in my brain, using the primal pitch and rhythm of what I heard. I did not intend to, or listen especially for it - it just overcame me. At another time, in another place, the rhythmic hum and pitch of a group of insects had the same effect on me, and I suddenly started to use this input and do some variations on it. Both times it was a most enjoyable and satisfying experience. There have been more such incidents, always with rhythm as the trigger, but the two above were to me the most impressive.

If we accept the ionosphere acts as a waveguide and has a fundamental electromagnetic resonance of 7.83 Hz, then it's entirely possible the Earth itself is "musical" (note the use of inverted commas). Interestingly, 60 Hz (the 8th partial) and 120 Hz were often some of the frequencies I'd sneakily boost when tracking drums for some extra LF whoompf. Anecdotally speaking, of course.
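As a quick arithmetic aside (my own, not part of the post above): listing the harmonic series of the ~7.83 Hz Schumann fundamental shows that the 8th partial is actually about 62.6 Hz and the 16th about 125.3 Hz, so 60 Hz and 120 Hz are near, rather than exact, multiples.

```python
# Harmonic series of the ~7.83 Hz Schumann fundamental.
fundamental = 7.83
partials = [round(fundamental * n, 2) for n in range(1, 17)]

print(partials[7])   # 8th partial: 62.64 Hz (near, not exactly, 60 Hz)
print(partials[15])  # 16th partial: 125.28 Hz (near 120 Hz)
```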
 
Ah, no, this isn't correct then - one can't say that certain neuron clusters in the brain process certain types of sound & then use this to categorise those sounds as music, or take the leap in logic that this then defines what music is - it's circular logic, I believe. Again, I haven't read the paper so can't read the nuances in the researchers' categorisation of music, but you see where I'm coming from? In a way it seems like the definition of "competently designed audio device" - what is it? - it's a phrase used, pretending to be a definition, that has no reference point - it can be made to mean anything the user wishes it to mean - so anything which sounds different is therefore "not competently designed". I know the paper must be more rigorous than this.

As I said, I too hope that it proves to be as robust as it appears based on the reports about the paper that I've read, but (isn't there always one) just because they found statistical models which best explained the data, & this suggested that there were only six mechanisms, doesn't necessarily mean that this is the way the brain works - it just means that, of the mathematical approaches taken, this is the best-fit data model. Understand, I'm playing devil's advocate here & not trying to deny the research, just trying to pick holes in it to verify the robustness of the conclusions reported. Remember: the map isn't the territory.

I'm not sure I understand the question, as I don't know the brain's definition of "music".

I certainly do not have the knowledge to join your conversation with 853, but I do know a bit about semantics. Your last sentence bothers me. Since when can a brain define something like music? Perhaps I am naive - it can recognize, judge, compose, kick off emotions of whatever quality. But define? Please clarify...
 
Ah, no, this isn't correct then - one can't say that certain neuron clusters in the brain process certain types of sound & then use this to categorise those sounds as music, or take the leap in logic that this then defines what music is - it's circular logic, I believe. Again, I haven't read the paper so can't read the nuances in the researchers' categorisation of music, but you see where I'm coming from? In a way it seems like the definition of "competently designed audio device" - what is it? - it's a phrase used, pretending to be a definition, that has no reference point - it can be made to mean anything the user wishes it to mean - so anything which sounds different is therefore "not competently designed". I know the paper must be more rigorous than this.

As I said, I too hope that it proves to be as robust as it appears based on the reports about the paper that I've read, but (isn't there always one) just because they found statistical models which best explained the data, & this suggested that there were only six mechanisms, doesn't necessarily mean that this is the way the brain works - it just means that, of the mathematical approaches taken, this is the best-fit data model. Understand, I'm playing devil's advocate here & not trying to deny the research, just trying to pick holes in it to verify the robustness of the conclusions reported. Remember: the map isn't the territory.

I'm not sure I understand the question, as I don't know the brain's definition of "music".


Here you go...!

http://web.mit.edu/bcs/nklab/media/pdfs/SVNH_NGK_JMD_2015.pdf

And with that, he fluttered his eyelids in a sultry manner, and departed for bed.
 
I certainly do not have the knowledge to join your conversation with 853, but I do know a bit about semantics. Your last sentence bothers me. Since when can a brain define something like music? Perhaps I am naive - it can recognize, judge, compose, kick off emotions of whatever quality. But define? Please clarify...
Yes, I was using this strange concept as a direct quote from 853guy's question to me - "Do you have any links to research in which there are rhythmic sounds in nature which our brains mistake for music?"
I can't really understand what 'our brains mistake for music' could mean.

In a way, that's the point of the paper & 853's question - the brain 'decides' what is 'music' & processes it through a different neural pathway than other sounds!
 

Bob, notice the descriptions he uses - "the room disappears", "effortless", "bass control" - these are the markers of the new frontier in audiophile speaker technology. I have described facets of this kind of reproduction for a long time. It is really a big step in audio reproduction because of the increase in information that is revealed from the recording to the listener. I believe we are no longer making small incremental steps here, as we are nearing, I think, the ability to produce an information level into the 90 per cent range.

Just a note about an experience I had the other night as I continue to optimise my system. I adjusted the position of my main speakers and then listened; I then adjusted my outboard psycho speakers. Almost instantly the recording revealed a solid illusion that included the physical space of the recording venue. I sat there and marveled at how the room totally disappeared and the recording took on a 3-dimensional character. The only thing I noticed was that the recording seemed to be just slightly out of focus. I have experienced this before, but not to this same degree. Was my brain confused by this new paradigm? I then retired for the night and wondered whether this same recording would still have the same illusion. It in fact did, as I listened the next morning, except that the slight out-of-focus quality was gone and a razor-sharp precision had replaced it.

I think this new technology will enable innovators to take audio reproduction to the next level....exciting times indeed. Thanks for posting this....

p.s. and increased information results in more realism, dynamics, tonal purity, and micro detail... it's all connected.
 
Ok, I'm reading through the paper now & these things pop out related to what defines music:
Component 6, in contrast, responded primarily to sounds categorized as music: of the 30 sounds with the highest response, all but two were categorized as musical sounds by participants. Even the two exceptions were melodic: "wind chimes" and "ringtone" (categorized as "environmental" and "mechanical" sounds, respectively). Other non-musical sounds produced a low response, even those with pitch.
- Later in the paper, it defines one aspect of what this Component 6 responds to: temporal structure over relatively long timescales.

So it would seem that answers my question of what characteristic is used by auditory processing to determine if it's music or not - longish temporal structure

It's a heavy paper to understand & would take a while to absorb it all - not sure I have the time.
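Purely as a toy illustration of what "temporal structure over relatively long timescales" might mean (my own proxy, not the paper's method): autocorrelate a signal's amplitude envelope over lags of a second or so. A steady pulse produces a strong peak; unstructured noise does not.

```python
import numpy as np

rng = np.random.default_rng(1)
sr = 1000                         # Hz; envelope-rate sampling is enough here
t = np.arange(8 * sr) / sr        # 8 seconds

def envelope_periodicity(x, sr):
    """Peak of the normalised autocorrelation of the amplitude envelope,
    searched over lags of 0.25-2 s (roughly 30-240 beats per minute)."""
    env = np.abs(x)
    env = env - env.mean()
    ac = np.correlate(env, env, mode="full")[len(env) - 1:]
    ac = ac / ac[0]
    lo, hi = int(0.25 * sr), int(2 * sr)
    return ac[lo:hi].max()

# "Music-like": noise bursts gated by a steady 120 BPM pulse (0.5 s period).
pulse = (np.sin(2 * np.pi * 2 * t) > 0.9).astype(float)
musical = pulse * rng.standard_normal(len(t))

# Unstructured: plain noise with no long-timescale envelope structure.
noise = rng.standard_normal(len(t))

print(envelope_periodicity(musical, sr))  # well above zero (periodic envelope)
print(envelope_periodicity(noise, sr))    # near zero
```

On this toy data the pulsed signal scores far higher than the noise; real music and real environmental sounds would of course be much messier, which is where the gradation between music and non-music comes in.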
 
Bob, notice the descriptions he uses - "the room disappears", "effortless", "bass control" - these are the markers of the new frontier in audiophile speaker technology. I have described facets of this kind of reproduction for a long time. It is really a big step in audio reproduction because of the increase in information that is revealed from the recording to the listener. I believe we are no longer making small incremental steps here, as we are nearing, I think, the ability to produce an information level into the 90 per cent range.

Just a note about an experience I had the other night as I continue to optimise my system. I adjusted the position of my main speakers and then listened; I then adjusted my outboard psycho speakers. Almost instantly the recording revealed a solid illusion that included the physical space of the recording venue. I sat there and marveled at how the room totally disappeared and the recording took on a 3-dimensional character. The only thing I noticed was that the recording seemed to be just slightly out of focus. I have experienced this before, but not to this same degree. Was my brain confused by this new paradigm? I then retired for the night and wondered whether this same recording would still have the same illusion. It in fact did, as I listened the next morning, except that the slight out-of-focus quality was gone and a razor-sharp precision had replaced it.

I think this new technology will enable innovators to take audio reproduction to the next level....exciting times indeed. Thanks for posting this....

p.s. and increased information results in more realism, dynamics, tonal purity, and micro detail... it's all connected.

You don't need modern technology to accomplish an immersive 3-D soundstage where the room audibly disappears, decades old horn systems and SET amps can do it no problem... and probably better than the B&O speaker as the signal path is far simpler.
 
You don't need modern technology to accomplish an immersive 3-D soundstage where the room audibly disappears, decades old horn systems and SET amps can do it no problem... and probably better than the B&O speaker as the signal path is far simpler.

the 3D soundstage is just part of it.
 
the 3D soundstage is just part of it.

I agree... The 3-D soundstage where the room disappears is a combination of the signal chain preserving fine detail + room acoustics not mucking it up. Dynamics and tone are a strong suit of horns + SET amps too... imo basic DHT amplification and old driver technology is the best the world has ever seen and has not been surpassed.

There's more than one way to skin the cat though, I get that... traditional dynamic speakers can be excellent if used in a proper room, and controlled dispersion can help achieve the goal as well. The B&O uses a very complicated system for controlling its dispersion; it's interesting, and to be honest I haven't heard them yet, so I'll keep an open mind, but I've heard many other attempts at using DSP to determine a speaker's behavior and haven't been impressed yet...

Of course, the speaker I designed is a horn hybrid intended to be used with SET amps, so I might be a little biased. ;) However, achieving a 3-D soundstage in the average living room was a primary goal of my speaker, so I have studied and thought a lot about it.

But, back on the topic of realism and live music, hearing the venue and not the listening room is a HUGE advantage with live recordings to get the feeling that you're AT the concert and not just listening to a recording of the concert.
 
I agree... The 3-D soundstage where the room disappears is a combination of the signal chain preserving fine detail + room acoustics not mucking it up. Dynamics and tone are a strong suit of horns + SET amps too... imo basic DHT amplification and old driver technology is the best the world has ever seen and has not been surpassed.

There's more than one way to skin the cat though, I get that... traditional dynamic speakers can be excellent if used in a proper room, and controlled dispersion can help achieve the goal as well. The B&O uses a very complicated system for controlling its dispersion; it's interesting, and to be honest I haven't heard them yet, so I'll keep an open mind, but I've heard many other attempts at using DSP to determine a speaker's behavior and haven't been impressed yet...

Of course, the speaker I designed is a horn hybrid intended to be used with SET amps, so I might be a little biased. ;) However, achieving a 3-D soundstage in the average living room was a primary goal of my speaker, so I have studied and thought a lot about it.

But, back on the topic of realism and live music, hearing the venue and not the listening room is a HUGE advantage with live recordings to get the feeling that you're AT the concert and not just listening to a recording of the concert.

And it is even apparent in non-live recordings, i.e. studios and concert halls... classical is a real treat.

The people at Western Electric and Bell Labs really knew their craft, imho. Of course, single-driver systems are not in vogue except among a small percentage of listeners, but they are capable of some amazing sound, though not without limitations.
 
853guy, I agree with most of what you say but I would have a couple of caveats.
I think the idea of drawing a very firm dividing line between music & non-music is mistaken - I believe there is a gradation between the two. Defining what is music is not an easy task - there are a lot of rhythmic sounds in nature that would straddle this divide, & I'm not sure how the researchers on that paper made the differentiation (I couldn't find a non-paying version of the paper to read).

The second caveat I would have is just to take some pause about fMRI as an analysis tool. The problem up to now with fMRI has been that its resolution isn't fine enough to be very useful beyond a certain point - see here for a primer on fMRI & voxels (the finest cluster of cells that it can resolve). The recent research is considered a breakthrough, as far as I understand it, as it uses complicated data analysis to extract information at finer levels than the voxel limits.

Although I fully agree with you that taking the subjectivity out of perceptual testing is the way to make better progress in this area, & I hope that these latest breakthroughs in fMRI prove to be an advance, I always take pause when heavy mathematics is used for deriving 'truths'.

CGI artists have a name for such artifice - the horse acting funny - it's called the "uncanny valley". We get versions of that in sound synthesis as well. Over time, synth programmers ended up using high-resolution samples of actual instruments playing. What amazes me is how easy it is to fool our hearing when it comes to localization, but how difficult it is to fool when it comes to timbre.
 
CGI artists have a name for such artifice - the horse acting funny - it's called the "uncanny valley". We get versions of that in sound synthesis as well. Over time, synth programmers ended up using high-resolution samples of actual instruments playing. What amazes me is how easy it is to fool our hearing when it comes to localization, but how difficult it is to fool when it comes to timbre.

Haha, "uncanny valley" - I hadn't heard that before. It brings up a good point about how perception is thought to work & the various layers that seem to be involved in it. All of this is my recollection of matters & may be wrong in some details, but hopefully not in the general nature of what I state:

At the general level, both visual & auditory perception seem to operate in roughly the same way - they break out & analyse certain aspects of the scene & then coalesce these analyses back into our perception of each object in the scene. In your "uncanny valley" example of the horse, one aspect that visual perception analyses is the movement of the object - it is analysed much like a stick model, where everything else (texture, colour, spatial organisation, etc.) is ignored except the relative movement of various parts of the horse. These other aspects of the object (texture, etc.), although analysed separately, are then recombined to form our perception of that object & all other objects in the scene (this is a simplification - in fact the analysis is across the whole scene & objects are derived out of this). There are differences in complexity between the various analyses - analysing texture is not as complex as analysing the gait of the horse - so we get relatively quicker & simpler conclusions on texture than on the moment-by-moment analysis of the horse's gait - the visual cues needed to analyse texture are fewer than the cues needed to analyse gait. So we can decide fairly quickly if a horse is just the wrong colour, but not so quickly if a horse has an unnatural gait. How do we know if the gait is unnatural? Because we have something like an internalised stick model of a horse's gait, built up over the many times we have seen horses in action.

Auditory processing is similar but different. It breaks down & analyses the scene from various perspectives or aspects. One perspective is the spatial arrangement of the objects in the scene (again, this is simplistic, like the above). Like the colour or texture of the horse, the fewer cues needed to determine the spatial arrangement of auditory objects means that we can manipulate these cues more easily & fool our perception. Whereas timbre is like the gait of the horse - it seems to be determined by many characteristics of the object & their behaviour over time: temporal fine structure together with the structure of the envelope of the sound. Any slightly unnatural behaviour in these aspects causes our perception to register an "uncanny valley" aspect to the timbre, & we are less convinced by its realism.

The thing is that we don't recognise such unnaturalness as easily in sound as we do in sight. When something is slightly 'off' visually, it tends to be easily noticeable - not always so with sound. I reckon that's because auditory perception is dealing with a more ill-formed problem than visual perception - the auditory cues are not sufficient to form a conclusive analysis & therefore we are less sure of its determinations - as a result we are in a constant state of subconscious insecurity with our hearing. All of which means that anomalies in sound don't jump out at us as they do in vision. This has implications for blind testing & why blind testing, used in the way we see on audio forums, is such a mistaken approach - it doesn't recognise the ill-formed problem that auditory perception is tasked with solving & the implications that result from this.
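To make the "few cues" point about localisation concrete, here's a toy sketch (my own, for illustration only): delaying one channel by a fraction of a millisecond - an interaural time difference well within the natural range for a human head (roughly 0 to 0.66 ms) - is enough on its own to shift a low-frequency sound's apparent position toward the leading ear.

```python
import numpy as np

sr = 48000
n = 24000                                   # half a second of audio
t = np.arange(n) / sr
tone = np.sin(2 * np.pi * 440 * t)          # ITD cues work well below ~1.5 kHz

def with_itd(signal, sr, itd_seconds):
    """Stereo pair in which the right channel lags the left by the given
    interaural time difference; the image shifts toward the leading ear."""
    lag = int(round(itd_seconds * sr))
    right = np.concatenate([np.zeros(lag), signal[:len(signal) - lag]])
    return np.stack([signal, right], axis=1)

# ~0.6 ms is close to the largest natural ITD for a human head,
# enough by itself to pull the image hard toward the left.
stereo = with_itd(tone, sr, 0.0006)
print(stereo.shape)  # (24000, 2)
```

That a 29-sample shift can relocate a sound image, while timbre needs fine structure and envelope to be right over the whole duration of a note, is one way to see the asymmetry described above.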

A non-technical paper about this: "Auditory Scene Analysis: The Sweet Music of Ambiguity".

A useful, if somewhat simplistic, animation of a few soundwaves emanating from an orchestra, which demonstrates the ill-formed problem that auditory processing is tasked with solving - in this simple example, we are receiving the overlay of these few simple soundwaves at our two ears & making sense of this mixture of pressure waves. Click on the animation here: http://www.animations.physics.unsw.edu.au/jw/linear-non-linear-superposition.html
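The superposition point can also be sketched in a few lines (my own toy example, with made-up amplitudes and frequencies): two tones radiated by two "instruments" arrive at the ear as a single pressure waveform, their pointwise sum. Both source frequencies sit inside that one mixed wave, but nothing in the waveform itself says which belongs to which source.

```python
import numpy as np

sr, n = 8000, 8000              # one second of "sound"
t = np.arange(n) / sr

# Two sources radiating simple tones; what reaches the ear is just the
# pointwise sum of the individual pressure waves (linear superposition).
source_a = 0.6 * np.sin(2 * np.pi * 440 * t)
source_b = 0.4 * np.sin(2 * np.pi * 220 * t)
at_ear = source_a + source_b    # one waveform, two hidden sources

# Both frequencies are recoverable from the mixture's spectrum; assigning
# each back to a source is the ill-formed part of the problem.
spectrum = np.abs(np.fft.rfft(at_ear)) / (n / 2)
print(round(spectrum[440], 2))  # 0.6 (amplitude at 440 Hz)
print(round(spectrum[220], 2))  # 0.4 (amplitude at 220 Hz)
```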
 
Yes, I attend live acoustic music regularly- the San Francisco Symphony Orchestra, the SF Opera, and the SF Jazz Center. But, I must admit I usually enjoy music on my own system more!
 
Yes, I attend live acoustic music regularly- the San Francisco Symphony Orchestra, the SF Opera, and the SF Jazz Center. But, I must admit I usually enjoy music on my own system more!

Why is that, you think?
_______

Off topic: I enjoy movies more @ home than @ the cinema theater. The picture is cleaner and the sound more detailed.
But that's normal; @ home the picture and sound are fine tuned for that main seat area and for way less people.

There might be some similarities between music listening @ home and @ large live venues, like for music concerts.
Some moments in time are frozen into great musical live memories; nothing @ home can approach it.
And other musical moments are much more enjoyable @ home than @ live venues.
But that's normal too; it is something that we have no control over, only the environment and the performance in that space in time...plus our own psyche in the overall scheme of all things living and imaginary. ...Like a lost illusion of an untouchable reality; only felt emotionally...a very personal and intimate trance.

I think, from memories experienced.
 
