Is ABX finally Obsolete?

You do. That's the reason for the short switching. Everyone, except for Audiophiles who don't believe in the methodology at all, seems to agree that quick switching is the only reliable way to differentiate subtle differences. And they base this not on speculation, but on what got the best results when they knew a difference existed.

Tim


Odd to argue what method reveals differences best when you steadfastly argue that there are no differences. That's why everyone fails, right? Is that a fair argument?
 
You do. That's the reason for the short switching. Everyone, except for Audiophiles who don't believe in the methodology at all, seems to agree that quick switching is the only reliable way to differentiate subtle differences. And they base this not on speculation, but on what got the best results when they knew a difference existed.

Tim
Tim, I am not so sure that this is fully proven, as I am not aware of any study looking into this while also allowing, say, 30 minutes to an hour for just one ABX selection.
I agree the limits of short-term memory are well known when it comes to selection; however, I feel more needs to be done to look at the various forms and effects of anchoring (which is studied more in other scientific fields) in AB/X, and at a process for ensuring a constant cognitive baseline reference.

As an example, an online publication here in the UK did a true blind test on cables without any intermediate switch box (at the request of one of the listeners beforehand). For the process to work and for no tells to be picked up, each time the cables were swapped the listeners had to leave the room, and before they came back in the swapper had to leave the room.
This, as you can imagine, is a reasonably lengthy process, but what makes it interesting is that one listener still managed 100%. One differentiator between him and the other two was his patience and his insistence on repeating the listening of A, B and X many times before deciding.
His decision to do that meant they could only do six ABX selections before they ran out of time over two days (they spent around five hours, if I remember); the other two, who felt the cables all sounded the same, were making decisions much more quickly.
The problem is that this is not enough to carry much statistical weight, but the point is that it seems possible a quick switch is not needed as is assumed, and that maybe the length of time the listener focuses on all three of A, B and X before making a decision is the critical aspect.
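(For what it's worth - my own rough arithmetic, not the article's - the chance of guessing all six selections correctly is (1/2)^6 = 1/64, or about 1.6%, which looks suggestive but is still far too short a run to rule out luck or procedural quirks.)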
After the testing, the one with 100% wished he had not insisted on having no intermediate switch box; it would be great if they redid the test with one, but I guess they would have gone insane :)

I am not bothering to link the article, as the page is poorly written and it is easy to misunderstand what was done. To follow what happened one really had to be following it at the time - you would need to read all the forum postings before and after the testing, and that is a really long read with 90% assumption and debate-type posts - the usual forum, then :)

Anyway, there are some aspects I would like to see further investigated when it comes to perception, selection and the use of the AB/X process, which I have mentioned in the other ABX threads and briefly touch on further up in this one (anchoring and cognitive baseline reference, and importantly the study of extensive/long-term listening preference).

Cheers
Orb
 
Odd to argue what method reveals differences best when you steadfastly argue that there are no differences. That's why everyone fails, right? Is that a fair argument?

You've misunderstood me, Gregg. I believe there are differences -- differences in noise, distortion, frequency response, transient response. What I don't believe in is what audiophiles hear. I can move your chair forward, toe in your speakers and, assuming said speakers are decent, immediately improve your imaging. I can turn them parallel to the wall and expand your sound stage (and smear the imaging a bit in the process). I can tilt them back and heighten your sound stage. I can push back your chair, pull the speakers further apart and, if your room is big enough, expand your soundstage, even with relatively small speakers. I can make a night and day difference.

But I can accomplish absolutely none of the above with a change of cable, tubes, preamp, amplifier or DAC. And I can't make it all collapse into a thin phantom by playing a digital file instead of a record or tape.

The nature of, and the ability to affect, the sound stage is just one example of the Audiophile mythology; those are the differences I don't believe in.

Tim
 
The shortcomings of ABX are well documented after almost forty years. Perhaps we can list them in an attempt to design a better test.

If there were an attempt to appear fair and balanced, there would be a corresponding list of the shortcomings of sighted evaluations. By constantly harping on just the problems of ABX, people pretty well tip their hands and expose their biases.

ABX - the top of many people's list of things they love to hate.

The results of ABX tests can be soooo upsetting and thought provoking!

It would be great if we could come up with the WBF blind listening test, complete with a statistical calculator. Use would be voluntary and there would be no such thing as pass or fail.
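As a rough illustration of what such a calculator could report (just a sketch; the function name and the example numbers below are made up, not an existing WBF tool), it really only needs the binomial probability of a score arising from guessing:

from math import comb

def abx_chance_probability(correct, trials):
    # Probability of getting at least `correct` answers right out of `trials`
    # by pure guessing (each ABX trial is a 50/50 call).
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# Example: 14 correct out of 16 trials -> roughly 0.002, i.e. unlikely to be pure guessing.
print(abx_chance_probability(14, 16))

Reporting that probability, rather than a verdict, would fit the "no such thing as pass or fail" idea.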

What a concept - avoid upsetting people by avoiding the creation of evidence that they don't already agree with! I see a happy future in politics for people who choose this path. ;-)

That is, I would compare selection A to selection A rather than selection A to selection B. That would deal with the issue of memory.

You can do that with ABX! The fact that I have to point this out says something...

Remember, I asked whether ABX is obsolete, not DBT.

In a certain way ABX has been obsolete for almost 20 years - it's been about that long since ABC/hr testing was first written up in professional audio journals. But ABC/hr made no friends in many sectors of audiophilia because its results were no less upsetting to High End Audio true believers than the results from ABX.

It comes with the territory - if you address bias, all of the beliefs that are supported only by bias come under attack.

There are actually a goodly number of more-or-less different alternative methodologies for doing DBTs. For example, DBTs are widely used and generally accepted in the food industry to address issues related to flavor.

Despite the fact that some prefer one DBT methodology over another, or say that this one is better for this application, and that one is better for that application, they all have the same problem - they actually address listener bias and routinely produce results that don't agree with people's long-held and long-invested-in beliefs and acquisitions.

Many people just can't handle situations where an effective attempt is made at addressing bias. They want to be reinforced in their preferred truth, not the truth that one finds in the natural world.

Many people appear to be in the denial stage of their grief over finding out that other people think that a lot of high end audio is based only on unreliable listening evaluations. It can be so upsetting when you find out that someone else actually disagrees with you!
 
Amir was the one who knew exactly where the ball was: scientists and "objectivists" may love the concept of a "pure" experiment, but the majority of the rest of us will just think, "Buggah this for a joke!" ...
Frank

People who get bored and tired too easy are often diagnosed with ADD if they are school children.

If they are adults, do we call them audiophiles? ;-)
 
I did miss your point. Sorry. Yes. Harman trains listeners. They also get the same statistical results from untrained listeners that they get from the trained ones. The trained listeners just help them reach conclusions faster, more efficiently. A very useful thing for a commercial endeavor.

Tim

It also addresses the reasons why some people find DBTs to be frustrating. If you get reliable results faster, you are less likely to become tired and frustrated, no?
 
Odd to argue what method reveals differences best when you steadfastly argue that there are no differences. That's why everyone fails, right? Is that a fair argument?

The idea that there are objectivists who believe that there are no differences is a straw man or excluded middle argument that we often hear from audiophiles.

It is not that hard to find some audible differences. What is hard to confirm is the belief that some audiophiles seem to have that everything sounds different.
 
Tim,
My main concern about training is that when you have to evaluate something you always have to choose what you consider the most important aspects and then score these aspects according to some weighting scheme.
The tests you refer to only prove that, for Dr. Olive's evaluation system, trained and untrained people get the same results in some specific tests. They also prove the trained listeners can do it faster with increased reliability. Nothing else.

I have read some testimonies of people who proudly report that, after doing the Sean Olive training, they are permanently identifying the faults and unnatural things in recordings. I am not interested in becoming this type of listener - IMHO, exactly as I am not interested in developing extra sensitivity to some aspects audiophiles praise a lot.

I think that what you just said is that you want to hear the differences that you want to hear and you don't want to hear the differences that you don't want to hear.

Unfortunately, the real world isn't that selective. If you seek the truth, you will no doubt find a mixture of what you want to know, and some things that you really didn't want to know.
 
Both sides of the ABX debate have been argued ad nauseam. One man's bias is another man's truth. It is the rule of this forum that anybody is permitted to present the results of an actual ABX test. Attacking others because they have not done an ABX tends to be frowned upon. HydrogenAudio is an excellent site for those who demand ABX proof. As shown above, even they tend to be skeptical of ABX tests that don't support their previously held beliefs.
 
Sorry about that ...

Except that the debate now appears to be more about the level of competence of equipment and the areas where attention should be focussed in system setup. As regards ABX, I would again emphasise that an essential ingredient for me, one that barely seems to cause a ripple in such discussions, is how interested and involved in the process the people doing such a test are. If that is not taken into consideration, then I for one would consider the results of such a "test" just a bit of paper junk ...

Frank

ABX was invented by people who were so interested and involved in the process of doing listening tests, and in obtaining sensitive and reliable results from them, that they created it at a very significant personal cost in terms of time and effort.
 
Both sides of the ABX debate have been argued ad nauseam. One man's bias is another man's truth. It is the rule of this forum that anybody is permitted to present the results of an actual ABX test. Attacking others because they have not done an ABX tends to be frowned upon. HydrogenAudio is an excellent site for those who demand ABX proof. As shown above, even they tend to be skeptical of ABX tests that don't support their previously held beliefs.

Their skepticism often turns out to be well-founded.
 
Just a quick look, though, and it does not match what I mentioned as a requirement about listening preference/cognitive habits/anchoring/etc., but focuses on the usual critical decision factor, although in this instance only as "clean" or "dirty" - still an interesting investigation, though, and it goes into one aspect of the masking and sensitivity debate about lengthy listening.
TBH I am not too surprised that listeners failed it, but considering that modern equipment, and even much equipment back then, has negligible distortion, this is not necessarily the right factor to be looking at (in the same way random jitter tests are done to investigate sensitivity to jitter), although it raises the discussion about what is different between modern amps when it is not the distortion they tested (no answers please, as we have other threads on that).
Thanks for the link; I will look at it in further detail, but it definitely does not touch on what I feel I have raised in other threads here specifically on this subject of ABX, which all of us have discussed.
Ah well, it seems an ABX thread rises every month :)
Cheers
Orb

Edit:
OK, a brief read was a bad idea, as I mainly caught Tom mentioning a previous experiment; I need to read further down and will do. Bad me, and I hope others do not read too quickly as well :)
 
Just a couple of links on heuristics/cognitive bias/anchoring/etc.
Please appreciate, though, that these have so far been applied together in other disciplines and not in audio, so this is more background information than actual audio-related study, although it still applies.
From page 4 onwards:
Hidden Traps of Decision Making; http://www.scribd.com/doc/24477698/Hidden-Traps-of-Decision-Making

Please note this one is a PowerPoint presentation and needs to be viewed in slideshow mode.
Heuristics, logic traps, cognitive illusions and statistical errors; http://www.iocp.org.uk/sites/default/files/Heuristics 2010b.ppt

There are more detailed papers and studies, but these are useful for anyone who wants a background summary.

Cheers
Orb
 
Thanks Arny,
will take a look at it later today.
Cheers
Orb

A cursory read does indicate a compelling case. However, normally one would want to keep as many variables constant as possible. The addition of a training session, education by repeated tests, and weaning of the participants from gross distortion to the target distortion were things not afforded to the home listeners. Could these things have skewed the results?

P.S. Of course, that would explain why the home listeners did poorly, but it would not negate the fact that the short-term listeners did well. To conclude that short-term listening is the way to go, we would have to remove all variables except the length of listening.
 
OK, it still seems more anecdotal and investigative than scientifically conclusive to me.
The first test mentioned was a bit relaxed for several reasons; the primary one is that the switch-box listeners were given a 45-minute training session first and, importantly, a reference point/anchor, while the extended listeners were told to use whatever technique they wanted and, as Tom mentions, that group was not controlled.
Also, critically, the switch-box users had an anchor from that training, listening first to the worst level of distortion, 13%.
In the second test done by Tom, we have the anchor-type training situation again, where a listener who previously failed the extended listening is used for the quick switch and again listens first to 13%, but those given a disc for lengthy listening had, it seems, no baseline, anchor or training on the distortion signal (I can only go by what's in the article).

The issue for the home listeners, anyway, is simple: they had no reference point and no training with the worst distortion and, critically, no repeated trial runs involving both the clean and dirty CDs, which would be important for several reasons.
So while it is an interesting article, to me it is currently more investigative than scientifically conclusive, IMO, and I appreciate not everyone will feel the same way.
But I can only go by what is in the article, and maybe more has been done as a follow-up to it.
I just want to stress, though, that I am not being critical of the article, because doing extended/long-term listening as a controlled variable, while also managing listener training and having usable test data at the end, would be a nightmare and freakishly expensive.

With all that said, even if one does feel differently about the test, it does not help with the factors I have raised in other posts.
Cheers
Orb
 
I think that what you just said is that you want to hear the differences that you want to hear and you don't want to hear the differences that you don't want to hear.

Unfortunately, the real world isn't that selective. If you seek the truth, you will no doubt find a mixture of what you want to know, and some things that you really didn't want to know.

It is here we diverge. I do not want to put my words in anyone's mouth (I hope people can keep straw man, negative proof and other nice words out of the argumentation), but I am mostly looking for an emotional response to systems based on my live experiences. This is my truth. I have never been in a control room and it is not in my future plans to go there. But I often go to live concerts, mostly classical and a few jazz sessions. Why should I train myself to overcome my current perceptions?

This emotional main objective seems to be accepted by many people. Even F. Toole considers it and discusses it in his book "Sound Reproduction" - I found very interesting his idea that better sound reproduction will result in a better statistical chance of triggering this emotional response.

BTW, when I look at the recordings Sean Olive used to carry out the tests validating his training (I have three of them), I find that none of them seems a good test for the aspects I appreciate most in my listening. Perhaps I am wrong, but this also does not motivate me to carry out the training.
 