Doing anything in a rigorous way is always harder than pulling stuff out of your butt, so that shouldn't be any surprise at all.
... doing it in a sloppy way, or doing it in a way which will most likely confirm your prejudices. The simple truth is, if you really, really want audio to work "properly", that is, as well as it is capable of working, then you have to work hard at it; and if you likewise want to test it in such a way as to truly understand what's going on, then you have to work just as hard again.
Now that's some first-class CYA double-talk.
Could it be that ABX is obsolete because it's too hard even for its inventor?
Arny, just reminding you how you use peer-reviewed papers to your own advantage: when they are used to point out that the scientific research involved with product development was something other than ABX, as I discussed with Microstrip, you now argue against your own position of relying on peer-reviewed papers. That is selective bias.
And yes, I understand that ABX can be used, as Amir has said in the past and as I have used it in engineering myself, but you should accept that it is anecdotal evidence in the development, or more critically the fine tuning and fault resolution, of a product under development.
If you vary the audio track being played through the equipment, this has almost no chance of disturbing the equilibrium, the status of the system as a total entity. There are extreme examples: if you played heavy metal at high volume and then immediately switched to a soft string quartet, the temperature rise and "stress" of the previous track on the amplifier and speakers may affect the SQ. And there are other subtleties at work here too ...
But if you vary the equipment configuration, the equilibrium of the system is altered, sometimes dramatically, and it can often take considerable time for a new stability to return to the system. You may not believe this to be the case, but most people who are aiming for higher levels of performance are grappling with these issues on an ongoing basis.
So I would say that unless ABX, or some other method of testing, takes these considerations on board, it has little chance of convincing the more "discriminating" listeners ...
I am curious why you think it is the responsibility of one side and not the other to back up their respective claims. Can you point us to amplifier companies performing DBTs on their current products that back your point of view? I have looked everywhere, and other than Harman/Mark Levinson, none talk about blind listening tests.
Yes Arny,
but now you are forgetting the original point from Microstrip and my response; now, talking about bias, you are showing selective bias yourself and choosing the arguments that suit you.
Again, for concrete proof one should utilise scientific studies.
The above is 100% speculation on your part. The effects you are claiming are audible simply aren't. The issues that you are obsessing over are simply figments of your imagination.
If you want to prove your point, first prove your subpoints!
The way I see it, there is considerable scientific opinion that sighted evaluations are a totally inadequate means for doing studies based on sensory perception. Only when I see people taking that to heart, and abandoning that totally fallacious course of action, can we talk about alternatives. Far more logical alternatives exist.
What I now see is people en masse completely and totally ignoring the utter ridiculousness of doing sighted comparisons of amplifiers, DACs, media formats, etc. If there is a desire for a rational solution, then there should be an admission of the irrationality of the status quo.
I'm with this. The best argument for blind audio evaluations, regardless of the informality and/or lack of scientific rigor, is that no matter how sloppy they are, if they remain blind they are way ahead of sighted evaluations.
tim
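To make "sloppy but blind" concrete: the mechanics can be as simple as a few lines of script that keep the answer key hidden from the listener. A minimal sketch, purely illustrative (the trial count and the listener responses below are hypothetical):

```python
import random

def make_abx_key(n_trials=16, seed=None):
    """Generate the hidden answer key: in each trial, X is secretly A or B."""
    rng = random.Random(seed)
    return [rng.choice("AB") for _ in range(n_trials)]

def score_abx(key, answers):
    """Count how many of the listener's calls match the hidden key."""
    return sum(k == a for k, a in zip(key, answers))

if __name__ == "__main__":
    key = make_abx_key(16)              # held by the proctor, never shown
    answers = list("AABBABABBAABABBA")  # hypothetical listener responses
    print(f"{score_abx(key, answers)}/16 correct")
```

However informal the rest of the setup, as long as the key stays hidden until scoring, the evaluation remains blind.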
Doing anything in a rigorous way is always harder than pulling stuff out of your butt, so that shouldn't be any surprise at all.
If we are talking science, it probably goes like this:
Do you have an experimental design?
No: it is not science
Yes: is the design apt for what you are going to investigate?
No: invalid experimental design
Yes: is the experimental setup correct?
No: invalid results
Yes: are the analyses correct?
No: invalid results
Etc.
To conduct a well-controlled experiment is not easy.
Anybody familiar with meta-analyses knows that reports are often dropped from the analysis because of methodological flaws.
If science were easy, we would all be scientists.
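To make the "are the analyses correct?" step above concrete: for an ABX run, the conventional analysis is a one-sided binomial test against chance (p = 0.5). A minimal sketch using only the Python standard library:

```python
from math import comb

def abx_p_value(correct, trials, p_guess=0.5):
    """One-sided binomial test: the chance of getting at least
    `correct` of `trials` right by guessing alone."""
    return sum(comb(trials, k) * p_guess**k * (1 - p_guess)**(trials - k)
               for k in range(correct, trials + 1))

# 12/16 correct: p ~= 0.038, conventionally "significant";
# 11/16 correct: p ~= 0.105, which is not.
print(abx_p_value(12, 16))
print(abx_p_value(11, 16))
```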
On the other hand: does our judgement improve when our perception is influenced by all kinds of factors not relevant to what it is actually about, namely sound quality?
This pic from Sean's blog demonstrates it nicely.
In both cases the big floorstanders (G, D) are preferred over the two smaller ones.
However, in the unsighted test the differences are much smaller.
Shows you how easily our perception is influenced.
We simply use cues irrelevant to the task.
Unsighted testing removes those cues.
Of course we are not scientists, so our experiment is not well controlled and therefore not really valid.
But I prefer a non-controlled unsighted test over a non-controlled sighted one.
It saves you from a judgement based on irrelevant cues.
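For anyone who wants to put a number on "the differences are much smaller": one simple summary is the spread between the best- and worst-rated speaker under each condition. A minimal sketch; the ratings below are invented placeholders, not Sean's data, and only G and D are named in the figure ("S" and "T" stand in for the two smaller speakers):

```python
from statistics import mean

def preference_spread(ratings_by_speaker):
    """Gap between the highest and lowest mean rating across speakers."""
    means = [mean(r) for r in ratings_by_speaker.values()]
    return max(means) - min(means)

# Placeholder ratings, purely for illustration (NOT Sean's data):
sighted = {"G": [7, 8, 8], "D": [7, 7, 8], "S": [4, 5, 4], "T": [5, 4, 5]}
blind   = {"G": [6, 7, 6], "D": [6, 6, 7], "S": [5, 6, 5], "T": [5, 5, 6]}

print("sighted spread:", preference_spread(sighted))  # larger gap
print("blind spread:  ", preference_spread(blind))    # gap shrinks
```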
Sloppy tests provide sloppy results. Prove it.
My position of using peer-reviewed papers? What is that? I don't recognize any such thing.
Are you saying that, since a rank amateur audiophile doing a sighted, non-level-matched comparison of equipment in questionable condition is providing anecdotal evidence, and a degreed engineer doing a level-matched DBT of equipment known to conform to its technical specs is also providing anecdotal evidence, their evidence has equal weight?
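And since "level-matched" carries a lot of weight there: standard practice is to match playback levels to within about 0.1 dB, because even small level differences are reliably heard as quality differences. A minimal sketch of that arithmetic, assuming RMS voltage readings taken at the speaker terminals:

```python
from math import log10

def level_difference_db(v_rms_a, v_rms_b):
    """Level difference between two devices, from RMS voltage readings."""
    return 20 * log10(v_rms_a / v_rms_b)

def is_level_matched(v_rms_a, v_rms_b, tolerance_db=0.1):
    """True if the two levels differ by no more than `tolerance_db`."""
    return abs(level_difference_db(v_rms_a, v_rms_b)) <= tolerance_db

# Example: 2.020 V vs 2.000 V is a 0.086 dB difference -- just within tolerance.
print(level_difference_db(2.020, 2.000))
print(is_level_matched(2.020, 2.000))
```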