So yes, I’d believe so, Tim. The digital distinction is something of a straw man, as nobody is actually listening to digital - happily, we’re all in the analogue together.
Agreed. Our posts crossed.
In those very early days of awkward staircase samples we all pretty much fell down. Ouch. In the attempt to lower noise, for quite some long time we also lost connection with the essential signal... but thankfully it’s back.

There is no "gestalt of digital", since we don't hear in sample points (which are not staircases, for the uninitiated). The output from a DAC is analog waveforms, thus digital also delivers a "gestalt of analog".
For me, there is an "ease" and "effortlessness" to the music which is best conveyed by 1/2", 30 ips, half-track tapes. The closest I've heard to that is DSD 256. Hope everyone is well!
Huh? How does one preference vs another relate to the ability to distinguish between live vs recorded music? And yes everything happens in a context, I think we know that.
I am simply stating that in some conditions people will easily distinguish between real and recording, in others they will not. What can we conclude from that?
Yea, we all try to correlate the ease & effortlessness, I call it realism, in sound reproduction with some factor in the individual electronics or system component chain that is responsible for this realism. I tend to agree with you but would express it in a wider context - that electrical noise in general is paramount in achieving this realism. My experience tells me that power is a big factor in all of this, but so is electrical noise coming from connected devices - common mode noise infiltrating DACs from the connection to a computer, network server, etc.

This is exactly what I hear when jitter and distortion in a digital system are very low, but it also requires the power subsystem to deliver the high-frequency transient currents. All three are really needed in order to achieve digital that sounds like analog.
The fatigue factor, or effortlessness is the audible manifestation of these features. I get this even with 44.1 redbook tracks.
These examples of water running, rain on a tin roof, audience applause, fire crackling, etc. are called texture sounds in the acoustics research literature & are used as examples of how auditory perception uses summary statistics to analyse & categorise them. I extrapolate this function to suggest that we use it to some extent in all our auditory perception. Knowing how efficient biological organisms are in the use of their limited resources, it seems that when we discover a working mechanism like this it is seldom being used only for one very specific scenario like these particular examples.

If your track is non-music, such as water running/splashing, gravel on a road being walked on, or similar, everyone knows what these sound like. These are pretty good test tracks for determining liveness.
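For the curious, the "summary statistics" idea from the texture literature can be caricatured in a few lines of numpy: split a signal into frequency bands and describe each band's envelope by a couple of statistics. This is only a toy sketch - the band edges, the crude envelope estimate, and the choice of mean/variance are my arbitrary illustrative choices, not what the auditory system or the actual research models use:

```python
import numpy as np

def texture_summary_stats(x, n_bands=8):
    """Toy 'summary statistics' of a sound: split the spectrum into
    n_bands equal bands, take a crude envelope of each band, and
    keep only its mean and variance. Real texture models use
    cochlear filterbanks and many more statistics."""
    spec = np.fft.rfft(x)
    edges = np.linspace(0, len(spec), n_bands + 1, dtype=int)
    stats = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        band = np.zeros_like(spec)
        band[lo:hi] = spec[lo:hi]          # keep only this band
        env = np.abs(np.fft.irfft(band, n=len(x)))  # crude envelope
        stats.append((env.mean(), env.var()))
    return np.array(stats)                 # shape: (n_bands, 2)

rng = np.random.default_rng(0)
noise = rng.standard_normal(16000)         # broadband "rain-like" texture
tone = np.sin(2 * np.pi * 440 * np.arange(16000) / 16000)
```

A broadband texture spreads its envelope energy across all the bands, while a pure tone concentrates it in one - even this tiny statistical summary separates the two, which is the general flavour of the research result.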
I like to compare my real analog system, my acoustic Victor II gramophone, to my digital system. This is true analog, no electronics involved.
My 2 cents for what it's worth - all this is IMO
We don't know what the original event or the final cut sounded like so how do we judge the sound of our replay systems?
If we look on a recording as a piece of art - the art of the musicians as interpreted by the art of the recording engineer - then it is a representative interpretation. I'm sure given different recording engineers & producers we would get different interpretations.
So you want what you hear to be what the recording professionals intended but you can't really know this precisely. So how do we judge our replay systems? I suggest that we judge the sound in terms of how 'real' it sounds. When it sounds 'real' we find engagement & immersion in the performance - we transcend the sound of our playback system. This can only happen if the 'realism' of the sound is maintained during the playback.
Most of our auditory perception is happening below the level of consciousness, with the end result presented to consciousness. What I mean is that our brain is analysing the nerve impulses from our two ears & making sense of these nerve impulses, organising & categorising them into an auditory world model that makes sense - it's a heavy-duty analytical process that evaluates what we perceive through our senses. This can only work efficiently if we have an internal series of rules/models against which the analysis is performed - rules/models that have been built (& continue to be built) over the years of exposure to sound in the world.
So my contention is that it is this sub-conscious analysis which determines how real we perceive the sound to be.
To explain in a bit more detail - the real-time analysis seems to work along the lines that at any point in time it is finding the best fit for the collection of nerve impulses into a working model & as a result predicts what should come next according to that working model. An example of this prediction function is how alien sound seems when played backwards - it doesn't match the behaviour of sound as we usually experience it & have built our internal analytic models around.
So what is happening when we listen to our playback system? We are analysing in the exact same way. If it sounds natural & real, it's because the sound ticks all the correct analytical boxes that match our internal real-world models of sound. If, at any point in the sound stream, it doesn't match the prediction in some aspect, the working analytic model is changed to best fit the new collection of nerve impulses, and so on. Too many of these deviations from the model's expectations & we have too much modification of the working model, too much energy expended, as this best-fit analysis & changing of models is heavy on resources. But all this is happening below consciousness - what emerges at a conscious level depends on how many such misfits there are - a disinterest in the music at one end, perhaps even a wish to turn it off as it is disturbing/jarring in some way at the other end. We generally don't get into the relaxed state where music transports us, as our brain's energy is mostly being used in figuring out the nerve impulses it is presented with.
The opposite, where fewer resources are expended listening to playback, allows the saved energy resources to be used by higher levels of brain function, & is, I believe, why we feel engagement, immersion in the sound & enjoyment in listening to the music playback. This only happens when the rightness of the nerve impulses (the music stream) is in concurrence with our inbuilt models of natural sound.
I'm not sure what all the characteristics are that determine the 'realness' of the sound - it may be that we form a statistical analysis of the ongoing collection of sounds we call music, i.e. it's not individual freqs or amplitudes or timings but an ongoing statistical analysis/abstraction from moment to moment - a sort of sophisticated ongoing pattern analysis with prediction - so what has occurred in the music some moments ago (how many moments, I don't know) is of importance to this ongoing statistical analysis.
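That "ongoing statistical analysis with prediction" can also be caricatured in code: keep a running statistical model of the recent stream and score each new moment by how badly it deviates from the model's prediction. This is purely an illustrative sketch - the exponential memory constant `alpha` and the scalar "frame" feature are my arbitrary choices, not claims about how auditory perception actually works:

```python
import numpy as np

def running_surprise(frames, alpha=0.1):
    """Track an exponential running mean/variance of frame values and
    score each new frame by its deviation from that running model.
    A caricature of 'pattern analysis with prediction': small surprise
    means the stream fits the model; big surprise forces a model update."""
    mean, var = float(frames[0]), 1.0
    surprises = []
    for f in frames[1:]:
        surprises.append(abs(f - mean) / np.sqrt(var))     # prediction error
        mean = (1 - alpha) * mean + alpha * f              # update the model...
        var = (1 - alpha) * var + alpha * (f - mean) ** 2  # ...to best fit
    return np.array(surprises)

steady = np.ones(50)                                    # unchanging "texture"
jump = np.concatenate([np.ones(25), 5 * np.ones(25)])   # abrupt statistical change
```

A statistically steady stream generates no surprise at all, while an abrupt change produces a large spike exactly at the moment of change - the kind of "misfit" that, in the account above, costs the listener resources.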
All this leads me to kinda try answering the question posed in the o/p - an all-analogue system can be wrong, but the mistakes are of a certain type - a type that auditory perception finds easier to accommodate, perhaps? Digital audio system errors may be considered more unnatural to auditory perception? For instance, I've often seen wow & flutter compared to jitter or close-in clock phase noise as if they are equivalent, but I don't believe that to be the case. Perhaps digital audio is focused on the wrong goal - removing noise? By doing so it may expose patterns of errors which were previously buried in the base noise of analogue? Perhaps patterns are more easily exposed in digital audio, & it's patterns that our auditory system uses for its analysis? Again, take all my statements as working hypotheses & IMO best guesses - not set opinions or definitive descriptions of the way auditory perception works.
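The wow & flutter vs jitter point can be made concrete with a toy simulation. Both are timing errors, but wow/flutter is a slow periodic error while sample-clock jitter is a broadband random one, so the resulting error energy lands in very different places in the spectrum. The modulation depths and rates below are exaggerated, arbitrary illustrative values, not measurements of any real gear:

```python
import numpy as np

sr, f0, n = 48000, 1000, 48000            # 1 second of a 1 kHz tone
t = np.arange(n) / sr
clean = np.sin(2 * np.pi * f0 * t)

# Wow/flutter: slow periodic timing error (2 Hz here), as from a turntable.
wow = np.sin(2 * np.pi * f0 * (t + 1e-4 * np.sin(2 * np.pi * 2 * t)))

# Jitter: independent random timing error on every sample, as from a clock.
rng = np.random.default_rng(0)
jitter = np.sin(2 * np.pi * f0 * (t + 1e-7 * rng.standard_normal(n)))

def error_spectrum(x):
    """Magnitude spectrum of the deviation from the clean tone (1 Hz bins)."""
    return np.abs(np.fft.rfft(x - clean))

# Fraction of error energy landing within 10 Hz of the tone itself:
near = slice(f0 - 10, f0 + 11)
wow_frac = (error_spectrum(wow)[near] ** 2).sum() / (error_spectrum(wow) ** 2).sum()
jit_frac = (error_spectrum(jitter)[near] ** 2).sum() / (error_spectrum(jitter) ** 2).sum()
```

With these numbers the wow error stays parked in sidebands hugging the tone, while the jitter error is smeared across the whole band as a noise-like floor - which at least makes it plausible that the ear could treat the two error types very differently, even at equal total error energy.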
So it's not so much does the replay sound like a strad/fender or whatever but is it internally consistent when analysed by auditory perception as real-world sound?
And on top of all this we are listening to a very limited, 2-channel version of what we would hear in the real world - which adds another complexity to the scenario.
Thanks. I think this is very thoughtfully and beautifully written. A group here developed in 2016 four alternative, but not mutually exclusive, objectives of high-end audio:
1) recreate the sound of an original musical event,
2) reproduce exactly what is on the tape, vinyl or digital source being played,
3) create a sound subjectively pleasing to the audiophile, and
4) create a sound that seems live.

I think that your objective of achieving a sound that sounds "real" is substantially similar to objective 4), "create a sound that seems live."
I was trying to tease out whether there can be degrees of aural memory, or if discriminating live from recorded music was something other than 'remembering'.
If one hears enough live music I think you know what it sounds like - you've learned what it sounds like. I was asking if that is the same as remembering, if knowledge is different from memory. If you know what a live piano sounds like I don't know if that requires remembering what a specific note sounds like from a specific piano.
What I'm suggesting is that before we consciously think about the timbre of an instrument & whether it accurately matches our memory of the live instrument, our analytic engine has done a lot of work subconsciously. This work at the subconscious level is very basic but also very complex.

This suggests we may be close in how we think about gauging what we hear, at the minimum in how we distinguish live from reproduced music.
Yes, I agree that we are transported when listening to a good system, & even music we are not familiar with is interesting - maybe not as interesting/engaging as music we know & love, but still there's enough realism in it to engage us. My quip about the sound of background trains was really to point out that this detail isn't the goal but rather that realism/engagement is the goal. I do believe that this sort of low-level detail is necessary for realism.

For a while now I've believed the most immersive level of enjoyment of reproduced music happens when one is mostly or wholly focused on, or in touch with, the music, to the point where we are not thinking about other stuff, when we are not thinking about the system or reproduction, when the music, as it were, takes us. And I've read that the state of such immersion occurs largely in the limbic area of the brain, which I read is more primitive in terms of evolution and not an area where higher levels of brain function occur. This is Copland's "sensuous plane" -- "a kind of brainless but attractive state of mind [that] is engendered by the mere sound appeal of music." Whether higher or lower order brain function, I agree that we 'do less work', expend fewer thought resources at that level of appreciation. (Admittedly though, sometimes the Kingsway Hall underground can break in.)
I'm considering this at a lower level initially, as you see above, & what I'm suggesting is that from babies onwards we absorb the world of sound, correlate it with the world of images, & with these two senses build internal models of how objects behave in the world, both in their visual aspects & in their auditory aspects. So a bell sound has a sharp attack & a long decay (not the other way around), a small bell produces a higher freq than a large bell, etc. For the visual model I think of a scene from Father Ted: "small cow or far away".

One thing of which you wrote I find interesting is the notion of the "internal series of rules/models" built "over the years of exposure to sound in the world." Perhaps their application might be considered a kind of memory, but in themselves they are a kind of knowledge. We might think of those as our preferences, or as reflecting our preferences (I'm not sure on this), and ask whether they come entirely from exposure to live music or are perhaps more fungible across individuals in terms of their origin. Being audiophiles we dicker amongst ourselves over the sound of this system or that system.