Why do some Objectivists fear Psychoacoustics?

Well, do you think audio hasn't improved continually over many years?

I guess you don't want to actually investigate internal controls & their usage in blind tests, then?

No John... again and again you do this
Stop if you can just for one minute ... stop talking about internal controls in blind tests because it comes across as single minded and frankly, as if you are avoiding any other discussion... I have said that of course, better controls will result in better tests... OK
I'll say it again, better controls will result in better tests

Now... talk to me about how you think that sighted tests can be improved.
Do you really think, as you alluded to before, that nothing can be done to improve them, that they are perfect?

Please just answer the question... that or go into politics
 
Rob, as I said to esldude "manufacturers (even Harman) don't continually use blind testing for every development stage in their product evolution - it's not feasible, & uneconomical to boot, & yet they improve their products using sighted listening."


Sure, but they do use blind testing.
You are saying that blind testing is flawed and unreliable. Therefore, all the blind testing they do use is in no way responsible for any improvements in their products.
That is the logical conclusion of your argument... are you really saying that?
 
So now they have to use it continually. Using it, which you said they didn't, now has to be continuous. I guess their design staff must be blindfolded at all times for it to count. You are becoming quite humorous.

I don't happen to know the current details of Harman's speaker design process. My guess would be that, now that they have data on the kind of response and directivity that customers prefer, they have their engineers design and then measure toward that goal. Then at some point they likely have a blind test-off against competitor models, and perhaps against more than one proposed design. As some of their data indicate they can now design speakers that are preferred over more expensive models from other companies, your contention that this process is infeasible and uneconomical holds little validity. Instead, once the investment in gathering the data is made and design procedures follow it, the result is more economical for the performance obtained. Chuckle, chuckle!
 
No John... again and again you do this
Stop if you can just for one minute ... stop talking about internal controls in blind tests because it comes across as single minded and frankly, as if you are avoiding any other discussion... I have said that of course, better controls will result in better tests... OK
I'll say it again, better controls will result in better tests

Now... talk to me about how you think that sighted tests can be improved.
Do you really think, as you alluded to before, that nothing can be done to improve them, that they are perfect?

Please just answer the question... that or go into politics

I didn't say sighted tests were perfect - I said I didn't think they could be improved whereas I do think blind tests can, simply by the inclusion of internal controls (oops I just repeated myself :))
 
So now they have to use it continually. Using it, which you said they didn't, now has to be continuous. I guess their design staff must be blindfolded at all times for it to count. You are becoming quite humorous.
esl, don't blame me for your not reading what I said - I originally said "I also would direct you to the fact that audio product developers do not use blind tests to continually check their progress & yet they progress to better sounding products." So maybe your apology is warranted?


I don't happen to know the current details of Harman's speaker design process. My guess would be that, now that they have data on the kind of response and directivity that customers prefer, they have their engineers design and then measure toward that goal. Then at some point they likely have a blind test-off against competitor models, and perhaps against more than one proposed design. As some of their data indicate they can now design speakers that are preferred over more expensive models from other companies, your contention that this process is infeasible and uneconomical holds little validity. Instead, once the investment in gathering the data is made and design procedures follow it, the result is more economical for the performance obtained. Chuckle, chuckle!
I'm sure they are directed in their designs by measurements, as are all manufacturers - I'm sure they don't organise formal blind tests every time they want to evaluate the sound.
 
I didn't say sighted tests were perfect - I said I didn't think they could be improved whereas I do think blind tests can, simply by the inclusion of internal controls (oops I just repeated myself :))

So they are imperfect and can't be improved?
That's a bit of a sad state of affairs isn't it
 
So they are imperfect and can't be improved?
That's a bit of a sad state of affairs isn't it

Yes, imperfect, isn't everything? It's really a matter of what level of imperfection we are willing to accept.
Some people want reassurances that their perceptions aren't fooling them & turn to unreliable tests in a blind belief for this assurance
Others realise that there is no such reassurance possible & live their lives knowing that perceptions will be mistaken sometimes (but not nearly as often or to the extent that the first group fear)

No more sad a state of affairs than the blind test unreliability - in fact a far better state of affairs, IMO
 
Yes, imperfect, isn't everything? It's really a matter of what level of imperfection we are willing to accept.
Some people want reassurances that their perceptions aren't fooling them & turn to unreliable tests in a blind belief for this assurance
Others realise that there is no such reassurance possible & live their lives knowing that perceptions will be mistaken sometimes (but not nearly as often or to the extent that the first group fear)

No more sad a state of affairs than the blind test unreliability - in fact a far better state of affairs, IMO

And to back up this opinion of his......he has......well......his opinion. Whatmore, what more do we need? :rolleyes:
 
I wouldn't say that it's only psychoacoustics that objectivists fear, but judging by the postings here they also fear analysing their own tests lest their belief system be exposed.
So instead of looking at this, they resort to ridicule & distraction, anything rather than this analysis.
 
I wouldn't say that it's only psychoacoustics that objectivists fear, but judging by the postings here they also fear analysing their own tests lest their belief system be exposed.
So instead of looking at this, they resort to ridicule & distraction, anything rather than this analysis.

This from a fellow who prefers a method of testing that, in his own words:

I have no evidence for you that will satisfy you or probably anybody else other than this is what my experience has led me to conclude - that sighted long term listening is the lesser of two evils.

All the while mischaracterizing some of us who have agreed that better controls would make for better blind tests. Only his preferred method is not amenable to such improvement. Again, in his own words, when asked what improvements might be made to long term sighted listening:

None, as far as I can see but if you have any suggestions I'm willing to listen - I welcome any opportunity to make my comparisons more accurate.

You can't make this stuff up. WOW.
 
http://www.nousaine.com/pdfs/Flying Blind.pdf

You might have seen this before. It isn't academic research quality testing. But as Whatmore complains, I too find your automatic assumption that long term listening is trustworthy to be suspect and lacking in evidence. The above article could even be said to have a positive control of sorts: distortion levels known to be reliably detected blind in short term tests.

Given long term listening, in this case weeks, people failed to detect 4% distortion vs clean. Some of those very same people then did a blind test using short intervals of a few seconds and got at or near 100% correct. Not a perfect test, but more than nothing: some data that long term listening doesn't match short term perceptual limits. And your data for the opposite idea?
Hmm.
And that is a really big hmm from me when I read it with the narrative "The case against long term listening"....
Long term aspects are needed for tolerance/threshold effects relating to cognition, and for some perceptual behaviour/preferences/dissonance, etc.
These people thankfully were lightly trained to identify distortion (in fact I have used this study to argue that it suggests the distortion specs of tubes etc. are meaningless to some who cite them as the reason tubes sound preferable :) ), but they were not trained in methodology or approach, and that has a big part to play IMO.

We actually need both: quick blind (and non-blind) comparison switching (as discussed at length, with positive results shown in a different difficult test by Amir when using correct listening methodology), and also longer term listening, as this assists in defining perceptual anchoring, listening behaviour and tolerances/thresholds (which come back to what we lock on to and how we listen).

Just to add to the discussion: why IMO blind listening needs to be part of any development comes down to some pretty complex biases not normally discussed (I have raised them a few times myself): the fact that when we change anything, our brain starts to model that differences should happen; bias quirks around spatial information, because even seeing the position of a speaker can change one's perception of its sound quality (this side result came out in one of the Harman studies); mOFC pleasure-related bias; etc., along with the more usual biases.

I find Tom's study more interesting for the fact that no-one could reliably identify distortion below 4% (even in training - read a different article by them) than for the narrative against long term listening :)
But we do need both.

Cheers
Orb
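As an aside on the short-interval trials quoted above: results like "at or near 100% correct" are usually judged against chance with a simple binomial calculation. Below is a minimal sketch of that arithmetic; the trial counts are made up purely for illustration and are not figures from Tom's article.

# A minimal sketch (hypothetical numbers) of scoring a forced-choice blind
# test against chance. With pure guessing, each ABX-style trial is a coin
# flip, so p = 0.5 per trial.
from math import comb

def p_value(correct: int, trials: int, chance: float = 0.5) -> float:
    """One-sided binomial probability of getting at least `correct` answers
    right in `trials` forced-choice trials by guessing alone."""
    return sum(comb(trials, k) * chance**k * (1 - chance)**(trials - k)
               for k in range(correct, trials + 1))

# 15 of 16 correct: p is about 0.0003, i.e. far beyond guessing.
print(p_value(15, 16))
# 9 of 16 correct: p is about 0.4, i.e. indistinguishable from guessing.
print(p_value(9, 16))

The only point of the sketch is that "near 100% correct" over even a modest number of short trials is very unlikely to be luck, whereas weeks of uncontrolled listening produces no comparable number at all.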
 
I find it curious that people mix Amir's tests (on the audibility of small differences) and the Harman tests (on speaker preference). As far as I know, Harman never published results on the audibility of small differences - if they have, I would like to know about them. In fact, all their studies admitted that "small differences" could not influence speaker preference - something that is not accepted by most of the audiophile community.
 
That first study in Tom's pdf (it was not his test but one from the 1980s) I commented on before & it has one glaringly obvious flaw - the A/B listening was preceded by a 45 minute training session. So the training is likely the most important difference between the sighted & blind tests, not anything else.

The second test, conducted by Tom, is an unnaturally restricted test that bears no relationship to how long term listening normally operates - it asks listeners to identify distortion by listening solely to one CD, without any comparison allowed to the original CD. A fair test would have been for the listeners to get one CD, spend as long as they wanted with it, give it back, then get the other CD and spend as long listening to it as they wanted, and then decide which CD was original & which distorted.

Again, the results & conclusions are dubious because not enough attention has been paid to factors which affect the results, or the test has been constructed in a way that is skewed towards a particular result.

Citing Tom's report as evidence that quick A/B blind is "better" than long term sighted listening, & not recognising these flaws in the tests, goes right to the heart of what I'm saying - many objectivists don't examine all the aspects that influence the results & therefore accept dubious results & conclusions. In fact, there should be a warning on DBTs: "DBTs should not be tried at home - they are meant for research labs only"
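For readers unfamiliar with the "internal controls" argued over throughout this thread, here is a rough, purely hypothetical sketch of what they might look like in practice: hidden trials with a known-audible difference (positive controls) and hidden trials with identical samples (negative controls) are mixed into the session, and the real test trials are only trusted if the controls behave as expected. The function names and the 80%/20% pass criteria below are invented for illustration, not taken from any published protocol.

import random

def build_session(n_test: int, n_pos: int, n_neg: int) -> list:
    """Mix real test trials with positive controls (a difference known to be
    audible, e.g. a small level offset) and negative controls (two identical
    samples), then shuffle so the listener cannot tell which is which."""
    trials = ([{"kind": "test"}] * n_test +
              [{"kind": "positive_control"}] * n_pos +
              [{"kind": "negative_control"}] * n_neg)
    random.shuffle(trials)
    return trials

def session_is_valid(results: list) -> bool:
    """Trust the test trials only if the hidden controls behaved as expected:
    positive controls mostly heard, negative controls mostly not heard.
    Each result is assumed to be a dict with "kind" and a boolean
    "heard_difference" (a hypothetical structure, for illustration only)."""
    pos = [r for r in results if r["kind"] == "positive_control"]
    neg = [r for r in results if r["kind"] == "negative_control"]
    pos_ok = sum(r["heard_difference"] for r in pos) >= 0.8 * len(pos)
    neg_ok = sum(r["heard_difference"] for r in neg) <= 0.2 * len(neg)
    return pos_ok and neg_ok

The idea is simply that a session whose controls fail tells you the setup or the listener was insensitive that day, rather than telling you anything about the device under test.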
 
I find it curious that people mix Amir's tests (on the audibility of small differences) and the Harman tests (on speaker preference). As far as I know, Harman never published results on the audibility of small differences - if they have, I would like to know about them. In fact, all their studies admitted that "small differences" could not influence speaker preference - something that is not accepted by most of the audiophile community.

I thought I remembered them mentioning in one paper a caveat regarding tolerances/thresholds, and that this would potentially influence preference, but they never did a study into this with smaller variables.
In theory the tolerances/thresholds, and the way they associate with preference, could be both large and small.
I do not think your statement was directed at me, but I agree the purpose and scope of their studies are used in unusual ways on audio forums; it was strange how many (definitely not all, though) technical-objective posters on other forums used one specific study to emphasise expectation bias on preference based on branding/looks, and totally missed a more interesting phenomenon: perceived preference/sound quality also being influenced by the sighted position of the speaker; I think the score was lower when the speaker was seen closer to a boundary, compared with its blind result for the same position.

Cheers
Orb
 
perceptual anchoring!!

Orb, this is not directed at you, but is this saying that if we listen to one thing long enough, our memory stores this as the new "standard" (long term listening), and then we listen to some different gear (and cannot hear a difference in fast A/B switching?) and then over time we can now hear differences?

Does this not point directly to our brain's technique of establishing a pattern that is a MOVING TARGET?

I will say again that long term listening is long term adaptation by the brain.

REALLY, how do we KNOW that our memory of how a guitar is supposed to sound (what guitar, what venue, where were we sitting, etc.) is CORRECT and can be applied to how REAL something sounds coming off of two channel? Is this not all somewhat absurd?

IMO what is really going on here is the brain MOVING the REFERENCE over time, and so long term listening is seriously flawed, while short term A/B does FORCE you to use your ears, as the MOVING REFERENCE in our brain is not MOVING during quick A/B and thus we REALLY are mostly using the EAR part of the ear-brain interface. That is why I trust quick A/B when I evaluate whether a change to a circuit makes a difference.

Glad you brought that up, tom - our auditory processing has in-built references for how audio behaves in the real world (something that is being uncovered gradually in psychoacoustic studies). It would seem we are all wired in a certain way but build this reference model over time & with experience.

And this set of rules, if you like, is exactly what I'm suggesting long-term listening references. What is picked up in this sort of listening is more about this aspect of our auditory perception - how much like real-world audio the sound is (yes, I know it's a flawed illusion & this makes it somewhat difficult). It's a more holistic viewpoint on the playback. But I also feel that frequency/amplitude/timing issues should be noticed - if they are of importance to what we hear normally. They may not be presented in as much relief as in quick A/B tests but they will still be evident, if audible. Quick A/B listening focusses on the detail, but this is not necessarily what is important to our auditory perception over the long term.

I think when people say things like "how do we know what a guitar sounds like", they are talking about something different from that which allows a guitarist to know the sound of a Gibson or a Les Paul from memory & be able to identify it, blind. We're not sure how our memories are stored, particularly memories of perceptions, but we do know that before they reach the higher cognitive memory areas they have become very divorced from the original signals that produced them, i.e. we are not storing frequency/amplitude details - it is a much more synthetic form of the original signal & is more an analysis of the signal over time rather than a snapshot of it at a particular moment in time.
 
I think I brought up a lot more than what you focused on, John! While I agree that tests need to be done properly, that includes any sort of long term listening test as well. And so far, are there any properly, or even somewhat properly, controlled long term listening tests that anyone is aware of?

tom, did I not address the main point in your post - that of a MOVING REFERENCE in long term tests & this being why you felt quick A/B listening was superior?
 
I think I brought up a lot more than what you focused on, John! While I agree that tests need to be done properly, that includes any sort of long term listening test as well. And so far, are there any properly, or even somewhat properly, controlled long term listening tests that anyone is aware of?

There are no well done blind tests of long term listening that I am aware of so far. One reason is that there have been plenty of tests comparing hearing acuity over a few minutes vs a minute vs a couple dozen seconds vs a few seconds. The superiority of the short to shortest tests is enough that I doubt there is much impetus for researchers to go really long term. Only the audiophile community seems to have this idea that longer is better. Making tests longer makes them a much more difficult proposition. If doing so nevertheless proved more enlightening, it would be done sometimes. Instead the reverse, that shorter is better, means no one has a good reason to try really long tests.

None of which guarantees there isn't something else going on long term. But for those who think it matters the onus is on them to do the tests and show it. Otherwise they are left with nothing other than anecdotal reports.

Now I did read a long term audio test once from an unlikely source. A maker of heavy equipment did a blind test of long term noise exposure to determine how to make their equipment less objectionable. They wanted to know which frequencies or types of noise most needed reducing so that people wouldn't be annoyed by their equipment. They both made recordings of real equipment and generated various filtered and synthetic versions. That used to be on the web, but no longer is.
 
Nothing beats a blind test when it comes to reliable results for DIFFERENCES. As for qualitative evaluation of a single piece you have no choice but to go sighted.
 
Nothing beats a blind test when it comes to reliable results for DIFFERENCES. As for qualitative evaluation of a single piece you have no choice but to go sighted.

And there you have it....
If you are designing a bit of gear where the differences are small (relative to, say, speakers) you might choose to do sighted testing (long term or short) with all its known, proven biases. Given the biases, the results of these tests may well be badly flawed, and most likely those flaws will be in the direction of false differences.

Surely, you'd be better off performing blind tests (perhaps in addition to the sighted ones if you like) to see if you still hear the difference. Then you can be far more confident that it is real.

You might even try to eliminate your own bias against blind tests (just supposing you have that bias, that is) by getting people who are biased in favour of blind tests to perform them on your behalf
 
And there you have it....
If you are designing a bit of gear where the differences are small (relative to, say, speakers) you might choose to do sighted testing (long term or short) with all its known, proven biases. Given the biases, the results of these tests may well be badly flawed, and most likely those flaws will be in the direction of false differences.

Surely, you'd be better off performing blind tests (perhaps in addition to the sighted ones if you like) to see if you still hear the difference. Then you can be far more confident that it is real.

You might even try to eliminate your own bias against blind tests (just supposing you have that bias, that is) by getting people who are biased in favour of blind tests to perform them on your behalf

Quite the contrary. If I were designing a piece of gear for personal or commercial purposes and, say, I was choosing between passive parts, I would definitely go ABX so I could use the cheaper part.

Thing is, I'm not. As a consumer I am left with narrowing down choices based on specifications (throwing out whatever obviously wouldn't work within that particular system context) and arranging an audition for those left on the list. Perhaps in a fantasy world where I could snap my fingers and everything on the list would appear, along with an army of grunts to do the swapping, I would try some serious blind trials, but let's face it: there are just some components where the swap time is so long that a blind test wouldn't be practical or even reliable. Take loudspeakers, for example. For a fair comparison you need to set up the pairs as best you can. Try fast switching that in your own home, even if you did have an army.

My position on blind testing has been and will remain consistent. When I can't tell the difference or can barely tell the difference, I will always go with what costs less with the proviso that I may go for the more expensive for reasons OTHER than sound quality. Reasons like ergonomics or even industrial design. Sue me. I'm only human. :)
 
