Hi-Fi is NOT a subjective hobby.

Well, if 100% of people can consistently locate the singers in front of them, then the location of the singers can be considered a fact based on empirical evidence.

You've dropped 'objectively' or 'objective' from your account. That makes a difference. If 100 listeners give the same description of the relative location of their psycho-acoustic images then you probably can claim that it is a fact that these 100 people give the same description.
 
You've dropped 'objectively' or 'objective' from your account. That makes a difference. If 100 listeners give the same description of the relative location of their psycho-acoustic images then you probably can claim that it is a fact that these 100 people give the same description.
Nope, I used the term fact as in objective fact.
 
Many here think the hobby is purely subjective but there are some objective attributes that one can assign to a music reproduction system. For example, one system can objectively image better than another. Agree/disagree?
Well, that depends on your goal in this hobby. My goal is fairly simple - find a way to enjoy the music in my listening space - end of story. That's about as subjective as it gets. Over the years I have tried many approaches to this, and for the last 15 years or so I have settled on my current SET/horn path.

There were/are objective "truths" that I used along the way, but my primary goal was enjoyment, not the typical "reproduce the concert hall in my living room."

Beau
 
Nope, I used the term fact as in objective fact.

Then - no offense - you're simply wrong. Psycho-acoustic images in people's heads are not objective facts. Psycho-acoustic images do not exist apart from individuals' experiences. They cannot be verified as existing apart from your claim that they do. Just because you have a thought does not make it a material reality.

I'm leaving it there. I honestly urge you to read about accepted views on objectivity and subjectivity and maybe read a bit about ontology.

On the other hand, this might be a giant troll and I bit on the hook. :D
 
Then - no offense - you're simply wrong. Psycho-acoustic images in people's heads are not objective facts. Psycho-acoustic images do not exist apart from individuals' experiences. They cannot be verified as existing apart from your claim that they do. Just because you have a thought does not make it a material reality.

I'm leaving it there. I honestly urge you to read about accepted views on objectivity and subjectivity and maybe read a bit about ontology.

On the other hand, this might be a giant troll and I bit on the hook. :D
I think locating a couple of singers within a soundstage with your eyes closed is a bit like putting a blindfold on and locating a couple of real singers in front of you. Do you agree?
 
Objectively, most people would probably say that the soundstage for headphones exists entirely inside the listener's head, that it's forced to by the physics and limitations of the system. This is supposedly true even for very expensive headphones. Yet the soundstage on my inexpensive headphones and inexpensive portable CD player now extends *way outside* the headphones - depending on the source, up to ten or twenty feet outside the headphones!

When I focused on the goal of "extending the soundstage" and it slowly dawned on me what could be done, it started to happen. It didn't start out that way. For many years I was subjected to the "inside the head effect." When you hear something repeated many times, you start to believe it must be true. :)

But why shouldn't the soundstage/imaging for headphones be as good as - or better than - a very good speaker system? It's the same signal. The signal contains the soundstage information. It's actually an even better signal, since the room's acoustical interferences are removed from the equation.

Geoff Kait
Machina Dynamica
Not too chicken to change
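
For anyone curious how an "out of the head" effect can be approximated in DSP terms, here is a minimal crossfeed sketch in Python. It only illustrates one generic technique (bleeding a delayed, attenuated, low-passed copy of each channel into the other, roughly as happens with two loudspeakers in a room); it is not the approach described in the post above, and the delay, gain and cutoff values are assumptions.

```python
# Minimal crossfeed sketch - an illustration only, not the method described above.
import numpy as np
from scipy.signal import butter, lfilter

def crossfeed(stereo, fs, delay_ms=0.3, gain_db=-6.0, cutoff_hz=700.0):
    """Feed a delayed, attenuated, low-passed copy of each channel into the
    opposite channel, roughly mimicking how each ear hears both loudspeakers."""
    left = stereo[:, 0].astype(np.float64)
    right = stereo[:, 1].astype(np.float64)
    delay = int(round(fs * delay_ms / 1000.0))             # ~0.3 ms inter-channel delay
    gain = 10.0 ** (gain_db / 20.0)                        # attenuation of the bled copy
    b, a = butter(2, cutoff_hz / (fs / 2.0), btype="low")  # crude head-shadow low-pass

    def bleed(src):
        lp = lfilter(b, a, src)
        return gain * np.concatenate([np.zeros(delay), lp[:len(lp) - delay]])

    out = np.stack([left + bleed(right), right + bleed(left)], axis=1)
    return out / np.max(np.abs(out))                       # normalize to avoid clipping

# Hypothetical usage with a stereo WAV file named "track.wav":
# from scipy.io import wavfile
# fs, data = wavfile.read("track.wav")
# wavfile.write("track_xfeed.wav", fs, (crossfeed(data, fs) * 32767).astype(np.int16))
```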
 
Well, if 100% of people can consistently locate the singers in front of them, then the location of the singers can be considered a fact based on empirical evidence.

That statement is a little bit of a logical fallacy for a number of reasons, but primarily because evidence, even good evidence, is not proof. As my old boss at NASA used to say, never get behind anyone 100%.
 
Objective vs. subjective. Three pages in 24 hours. Wow. The cat is among the pigeons.

-Remind me, what is the sound of one hand clapping?
 
Objective reality is logically independent of human experience. Some say we can never know the Ding an sich - the thing in itself - and that what we 'know' is a product, a combination, of percepts from the world outside of us and the concepts (such as space, time and measurement) that we bring to 'experience'.

What objective attributes can be attributed to a fundamentally subjective psycho-acoustic image in your head?

Do I detect a trick question? :)
 
Subjective or objective, I truly enjoy the music, and it is how I start and end my day.
 
Objectively, most people would probably say that the soundstage for headphones exists entirely inside the listener's head, that it's forced to by the physics and limitations of the system. This is supposedly true even for very expensive headphones. Yet the soundstage on my inexpensive headphones and inexpensive portable CD player now extends *way outside* the headphones - depending on the source, up to ten or twenty feet outside the headphones!

When I focused on the goal of "extending the soundstage" and it slowly dawned on me what could be done, it started to happen. It didn't start out that way. For many years I was subjected to the "inside the head effect." When you hear something repeated many times, you start to believe it must be true. :)

But why shouldn't the soundstage/imaging for headphones be as good as - or better than - a very good speaker system? It's the same signal. The signal contains the soundstage information. It's actually an even better signal, since the room's acoustical interferences are removed from the equation.

Geoff Kait
Machina Dynamica
Not too chicken to change
While some headphones do better at an out-of-the-head experience, the only types that do so for me without software are winged open types like RAAL or AKG.
tima makes the point very clear.
 
While some headphones do better at an out-of-the-head experience, the only types that do so for me without software are winged open types like RAAL or AKG.
tima makes the point very clear.
I’m using vintage Sony v700 headphones. They are the non-winged closed type. Does that surprise you? I bet it would surprise Tima.

Where I’m going I don’t need any software.
 
If everyone who hears system A can accurately locate the singers within the soundstage of a specific recording, yet cannot locate them accurately in system B, then I would say system A has objectively better imaging.
In any hi-fi system there are two analog signals which are converted into two sound pressure waves. Neither those analog signals nor the sound pressure waves have a soundstage component. What they have is accurate amplitude, accurate phase and accurate timing. Soundstage is entirely a psycho-acoustic phenomenon created in the brain, based on the differential between the two signals arriving at each ear.
So soundstage is entirely subjective in that it can be 'heard' but not measured. It's an artefact of the act of listening to a single signal which has been manipulated to artificially split it into two signals in order to create an illusion of space. Some systems are better at creating that illusion than others.
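
To make the "differential between the two signals" idea concrete, here is a minimal Python sketch that places a mono source in a stereo image using only a small inter-channel time difference (ITD-like) and level difference (ILD-like). The delay and attenuation values are illustrative assumptions, not measured HRTF data.

```python
# Minimal sketch: a mono source "imaged" off-center purely via small
# inter-channel time and level differences. Values are illustrative.
import numpy as np

def pan_with_itd_ild(mono, fs, itd_us=300.0, ild_db=3.0):
    """Return a stereo pair whose image should pull toward the left:
    the right channel is slightly delayed and attenuated."""
    delay = int(round(fs * itd_us / 1_000_000.0))   # ~300 microseconds of delay
    gain = 10.0 ** (-ild_db / 20.0)                 # level drop on the far channel
    left = mono
    right = gain * np.concatenate([np.zeros(delay), mono[:len(mono) - delay]])
    return np.stack([left, right], axis=1)

# Example: a 440 Hz tone that should image left of center on headphones.
fs = 44100
t = np.arange(fs) / fs
stereo = pan_with_itd_ild(0.2 * np.sin(2 * np.pi * 440 * t), fs)
```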
 
The two signals are really the top and bottom of the sine wave that represents the instantaneous audio signal at any given point in time. Thus, all acoustic information that the mic picks up during recording - including ambient-type soundstage information, e.g., reverberant decay, echo, first and second reflections - is contained in those two signals.
 
In any hi-fi system there are two analog signals which are converted into two sound pressure waves. Neither those analog signals nor the sound pressure waves have a soundstage component. What they have is accurate amplitude, accurate phase and accurate timing. Soundstage is entirely a psycho-acoustic phenomenon created in the brain, based on the differential between the two signals arriving at each ear.
So soundstage is entirely subjective in that it can be 'heard' but not measured. It's an artefact of the act of listening to a single signal which has been manipulated to artificially split it into two signals in order to create an illusion of space. Some systems are better at creating that illusion than others.
Ok, are you agreeing or disagreeing?
 
Ok, are you agreeing or disagreeing?
Hi, my first posting on WBF. :)

I think tima's comments here are spot on. In the big picture there are three elements going into the listening experience of reproduced music: the media/system, the room and the listener. Human senses vary, and it is not possible to establish an objective standard for them. Hearing ability varies, and listening to music is about connecting to one's emotional capital, which of course also varies between individuals. That's why we like and prefer different genres of music, and the analytical part and the emotional part blend in a listening session. Thus, by definition, it is not possible to establish an objective standard for an individual's perception of music/sound to form the reference of objectivity we would be looking for.

One may of course swap the human ear for a measurement tool. A measurement tool can bring objectivity, as one would be able to set some defined measurement references and furthermore apply a systematic approach (results to be control-tested). But swapping the human ear for a measurement instrument is really looking for the missing key under the streetlight, just because that is the only spot where you would see anything. Measurements will tell something about the sound quality, but then again, those measured results would be of variable value or inconsistent to individuals as listeners. There is no tool other than our ears/our hearing (except the physical impact of LF) to experience sound. And that experience ends up being individual and subjective. A group of listeners may agree on some aspects of the sound while listening to the same piece of music, but I see no way that the group's agreement on those aspects can be brought out of that listening session and used in other listening sessions as a reference in any objective way.
 
I can't get how imaging is subjective.
It's there or not. How tall, wide and deep it is, is factual.
What is subjective, if anything, is how the sizing is perceived.

The problem here is that depth can be an artifact and not a virtue. My room acoustics used to be such that *every* recording had depth, and too much of it. Audiophile friends who came over were impressed by the "depth" of my soundstage, but at some point I really started to hate it. It took a while to make the front half of my room less reverberant (also with the help of ASC window plugs) to reduce "depth" artifacts, and eventually I arrived at acoustics and system performance that maximize the display of soundstage depth differences across different recordings. Very immediate and intimate recordings now sound upfront and in your face, while great soundstage depth is rendered on some large-scale orchestral and choral recordings. Which is as it should be, in my estimation. Depth only where appropriate.

Again, the exaggerated "soundstage depth" that I had before the acoustic and system changes was NOT a good thing, impressive as it may have sounded to visitors.
 
The problem here is that depth can be an artifact and not a virtue. My room acoustics used to be such that *every* recording had depth, and too much of it. Audiophile friends who came over were impressed by the "depth" of my soundstage, but at some point I really started to hate it. It took a while to make the front half of my room less reverberant (also with the help of ASC window plugs) to reduce "depth" artifacts, and eventually I arrived at acoustics and system performance that maximize the display of soundstage depth differences across different recordings. Very immediate and intimate recordings now sound upfront and in your face, while great soundstage depth is rendered on some large-scale orchestral and choral recordings. Which is as it should be, in my estimation. Depth only where appropriate.

Again, the exaggerated "soundstage depth" that I had before the acoustic and system changes was NOT a good thing, impressive as it may have sounded to visitors.
You point to a useful criterion for evaluating the accuracy of reproduction: the extent to which recordings are differentiated and not homogenized.
 
You point to a useful criterion for evaluating the accuracy of reproduction: the extent to which recordings are differentiated and not homogenized.

Keep in mind that tonal balance is directly correlated to sound pressure level, so things are more complicated than putting a signal through a static filter. Every system has a frequency- and signal-level-dependent transfer function, and every room and our ears have a frequency- and sound-pressure-level-dependent transfer function. For human ears this transfer function has been documented with the Fletcher-Munson curves. Further to this, the brain does a form of normalization on incoming sound, based on what it expects the sound to sound like from prior experience. Psychoacoustics and human perception are not as straightforward as looking at everything through the same lens.
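
As a toy illustration of that level dependence, here is a short Python sketch. The ear_weight_db() curve is a crude qualitative stand-in (relatively more bass loss at lower playback level), not real Fletcher-Munson / ISO 226 data, and the perfectly flat system response is an assumption.

```python
# Toy illustration: the same flat source, played 20 dB quieter, is perceived
# with relatively less bass. ear_weight_db() is NOT real equal-loudness data.
import numpy as np

def ear_weight_db(freq_hz, spl_db):
    """Hypothetical weighting: low-frequency sensitivity falls off more
    steeply as playback level drops (qualitatively like equal-loudness curves)."""
    bass_rolloff = np.clip((200.0 - freq_hz) / 200.0, 0.0, 1.0)  # 1 near DC, 0 above 200 Hz
    return -bass_rolloff * (100.0 - spl_db) * 0.3                # more loss when quieter

freqs = np.array([50.0, 100.0, 1000.0, 5000.0])
source_db = np.zeros_like(freqs)          # flat source spectrum
system_db = np.zeros_like(freqs)          # assume a perfectly flat system + room

for spl in (85.0, 65.0):
    perceived = source_db + system_db + ear_weight_db(freqs, spl)
    print(spl, "dB SPL ->", np.round(perceived, 1))   # bass drops at the lower level
```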

In other words, even if the system & room combined for the ideal flat 20 Hz to 20 kHz frequency response - which is neutrality - you would not avoid a certain homogeneity at any given sound pressure level. In this scenario neutrality would become its own form of homogeneity.

If the individual contributors are considered, this becomes a very complex linear-algebra boundary-conditions exercise. But if one accepts the system's and room's transfer functions for what they are, then there are mechanisms to use our ears as feedback devices and apply the required corrective action until the resultant perceived sound converges on our desired, individualistic target composite transfer function.
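
A toy way to picture the "ears as feedback devices" idea, with listen_and_rate_db() as a hypothetical stand-in for the listener's by-ear judgment of each band (the hidden error curve is made up purely for the demo):

```python
# Toy sketch: nudge a per-band EQ toward a target using listener feedback.
import numpy as np

bands_hz = np.array([60, 250, 1000, 4000, 12000])
eq_db = np.zeros(len(bands_hz))            # current correction, starts flat
step = 0.5                                 # small adjustment per listening pass

def listen_and_rate_db(eq_db):
    """Hypothetical stand-in for the listener: returns the perceived error per
    band (positive means 'this band still sounds too quiet')."""
    hidden_room_error = np.array([4.0, 2.0, 0.0, -1.5, -3.0])   # unknown to the listener
    return hidden_room_error - eq_db

for _ in range(40):                        # repeated listening sessions
    error = listen_and_rate_db(eq_db)
    eq_db += step * np.sign(error)         # nudge each band toward "sounds right"

print(np.round(eq_db, 1))                  # converges on the hidden error curve
```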

The best way to address and adjust the composite transfer function is through convolution and dynamic filtering. This ensures that it is the spectral makeup of the source material that serves as the stimulus to compensate for the Fletcher-Munson effects and the unknown audio system & room transfer functions. Banking encryption systems perform similar functions, with processes where algorithms validate and authenticate information without knowledge of the actual information itself.
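
For the convolution part, here is a minimal sketch: design a linear-phase FIR from an assumed, already-measured system-plus-room magnitude response and apply it by convolution. The measured and target curves below are placeholders, and the "dynamic" (level-tracking) aspect is only noted in a comment rather than implemented.

```python
# Minimal static correction-by-convolution sketch; curves are placeholders.
import numpy as np
from scipy.signal import firwin2, fftconvolve

fs = 48000
freqs_hz = [0.0, 50.0, 200.0, 1000.0, 5000.0, 20000.0, fs / 2.0]
measured_db = [0.0, 6.0, 3.0, 0.0, -2.0, -4.0, -4.0]   # hypothetical system+room response
target_db = [0.0] * len(freqs_hz)                      # flat target (or any house curve)

# Inverse (correction) magnitude, dB -> linear gain.
correction_gain = 10.0 ** ((np.array(target_db) - np.array(measured_db)) / 20.0)

# Linear-phase FIR approximating the correction curve (odd tap count).
taps = firwin2(2047, np.array(freqs_hz) / (fs / 2.0), correction_gain)

def correct(signal):
    """Apply the static room/system correction by convolution. A 'dynamic'
    version would additionally switch or scale the filter with playback level."""
    return fftconvolve(signal, taps, mode="same")

# Hypothetical usage: corrected = correct(audio_block)
```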
 
Keep in mind that tonal balance is directly correlated to sound pressure level, so things are more complicated than putting a signal through a static filter. Every system has a frequency- and signal-level-dependent transfer function, and every room and our ears have a frequency- and sound-pressure-level-dependent transfer function. For human ears this transfer function has been documented with the Fletcher-Munson curves. Further to this, the brain does a form of normalization on incoming sound, based on what it expects the sound to sound like from prior experience. Psychoacoustics and human perception are not as straightforward as looking at everything through the same lens.

In other words, even if the system & room combined for the ideal flat 20 Hz to 20 kHz frequency response - which is neutrality - you would not avoid a certain homogeneity at any given sound pressure level. In this scenario neutrality would become its own form of homogeneity.

If the individual contributors are considered, this becomes a very complex linear-algebra boundary-conditions exercise. But if one accepts the system's and room's transfer functions for what they are, then there are mechanisms to use our ears as feedback devices and apply the required corrective action until the resultant perceived sound converges on our desired, individualistic target composite transfer function.

The best way to address and adjust the composite transfer function is through convolution and dynamic filtering. This ensures that it is the spectral makeup of the source material that serves as the stimulus to compensate for the Fletcher-Munson effects and the unknown audio system & room transfer functions. Banking encryption systems perform similar functions, with processes where algorithms validate and authenticate information without knowledge of the actual information itself.

I honestly have difficulty relating what you wrote to my initial comment. I won't comment further :)
 
