Amir, this question suggests to me that you have missed the whole point of testing, and can't distinguish between the separate purposes of preference testing and difference testing. If that isn't true, you desperately need to repair your image.
Arny, I will give you this warning once: please do not make this discussion personal -- with me or anyone else. Stay on the technical topic please. There was no need for the last sentence above. Let your logic speak for itself.
Each item on your list is a slam dunk if one does an ABX test for differences. I have done several of them myself, and I see no reason why the rest are any harder.
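For anyone unfamiliar with how an ABX difference test is scored, the result is usually judged against a guessing null (50% per trial) with a one-sided binomial test. A minimal sketch; the 12-of-16 trial count below is just an illustrative example, not a result from any test mentioned here:

```python
from math import comb

def abx_p_value(correct, trials):
    """One-sided binomial p-value for an ABX difference test.

    Null hypothesis: the listener is guessing (p = 0.5 per trial).
    Returns the probability of scoring at least `correct` by chance.
    """
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# Illustrative numbers: 12 correct out of 16 trials
print(f"p = {abx_p_value(12, 16):.4f}")  # p = 0.0384, below the usual 0.05 criterion
```

A p-value below the chosen criterion (commonly 0.05) is taken as evidence the listener heard a difference; a high p-value only means the difference was not demonstrated, not that none exists.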
Then it should be easy to tell us that in your household, you purchase nothing like that without first performing a blind test.
The answer is obviously "no," or you would have already given it.
The fact is that we take these audio discussions far more seriously than other things in life. The choice of bottled water at home is not subject to winning a debate on a forum, whereas audio is. So we live with that conflict. Best to be open about it and say so.
You don't need an ABX test to tell the difference between, say, LED, DLP, and plasma TV sets. Consumer Reports says that barbecue sauces taste different and that audio amplifiers sound the same within their power capabilities. Are they wrong about one and right about the other?
I didn't say anything about ABX. I said *DBT*. Why not test three different bottled waters blind and decide which tastes better to everyone in your family?
When I was working at Sony, we had a big fight over which coffee grind to serve to our group so we actually set up a blind test. That was a useful tool because the most opinionated was the president of the division!
Some of those items aren't bought based on their differences. A good case in point is bottled water. I buy Aldi's bottled purified water because I'm already in the store, it is just fine, and it's the cheapest thing around. Fool that I am, I don't split hairs when I'm thirsty.
How do you know it tastes better than your free water coming out of the faucet? Let that water sit for a few hours and then test both. Surely the faucet water is even cheaper. No?
If you are talking TVs, then read this:
http://www.gizmag.com/go/4138/
If you are talking in general...
For what reason? That is a marketing report, not a method to design products. We do that very often in the industry. Let me tell you how it works. You contract with a third party to do a study like they did, but include a confidentiality clause. One of two outcomes results:
1. The results show your product is better. You then issue a press release and make a lot of noise about it.
2. The results don't show your product to be better. In that case, you throw it out and go about your business. Or run it again until you get the results you want.
I can assure you that Philips does not do LCD research by hiring a third party to run around different parts of the world to see if their products work. They would do it in-house, as Harman, etc. do. Here is a useful excerpt from the article:
"The ‘masked’ comparison was done with all brand names and distinguishing design features covered up, and only the actual screens were visible to the retailers who took part in sessions conducted by Philips from March through to May.
These sessions were held across Australia in venues in Sydney, Melbourne, Brisbane, Sunshine Coast, Gold Coast, Perth, Adelaide and Canberra. Brands and models for the comparison were chosen based on recommendations by key retail groups who were asked to nominate the best performers in the categories tested.
The models were displayed without any changes to their “out of the box” settings, as experienced by any consumer purchasing the product, with all models connected via component input with identical cabling.
In the research, a series of still and video clips were played simultaneously on each of the units in both standard and high definition, with participants asked to rank from first to last the screens they felt provided the optimal picture quality.
The figures, which have been independently analysed and processed by the market research company Omnicom Research, showed that the Philips 42PF9966 was chosen as the number one Plasma TV by 74% of participants, while over 60% nominated the Philips 32PF9966 as the number one LCD TV."
You think Philips would prefer to do their research in Australia instead of Eindhoven? I have been to their research lab and there was no blind testing for TVs. I have also visited a number of major LCD manufacturers in Japan and again, none ever talk about blind tests.
That said, I would not be surprised that they would do surveys to see what customers like to see in showrooms and homes. That is not the same as doing formal blind tests.
BTW, note how they used the default settings for TVs to evaluate them.
Amir, are you having mini-strokes? Where did *that* come from?
As I said, you have been warned, Andy. No more personal remarks like that. Robert's blind tests are always very popular and heavily discussed on forums. If you watched the videos, you would see Joel Silver appearing in one, for example. It is puzzling that you would find an issue with it one way or the other when a guy has gone through so much trouble to set up a blind test for avid video enthusiasts to run. It is an expensive and difficult thing to do.
So Amir according to you, the difference between the DACs in high end AVRs is in the same range as the difference between a LCD, a Plasma, and a DLP HDTV? Are you that blind?
This is the third warning, Arny.
Answering anyway, which is better:
1. Black levels that are higher.
2. White levels which fluctuate with video content?
The former is an LCD characteristic; the latter, a plasma characteristic. Some viewers may prefer one artifact to the other. You seem to be saying that just because there is a difference, there is no need to determine preference. Yet blind tests are run on speakers that differ in similarly audible ways. Maybe part of the confusion comes from thinking the only worthy test is a binary ABX test?
The audio equivalent of the frozen frame is the carefully selected critical song snippet.
That is a very crude approximation. I can freeze a single temporal sample in video: the frame. As a matter of science, we cannot do that with audio. In no way is that the "equivalent." I can spend an hour staring at a video frame, looking at every pixel. I can't do that with audio. I can also do side-by-side tests of video with two displays frozen in time. I can't do that with audio as playing both at the same time doesn't allow us to examine each.
Equipment with matched FR and distortion below threshold is as well matched or better matched than any two TVs.
If our eyes were so forgiving, you would have all the same fights about video. Remember, our video signals only have 8 bits of dynamic range, or ~48 dB! Imagine how good your audio would sound that way. That is 20 to 30 dB less than cassette tape!
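The ~48 dB figure follows from the standard rule that each quantization bit contributes about 6.02 dB of dynamic range (20·log10 of 2 per bit). A quick check:

```python
from math import log10

def dynamic_range_db(bits):
    # Ideal quantization dynamic range: 20 * log10(2**bits) ~= 6.02 dB per bit
    return 20 * log10(2 ** bits)

print(f"8-bit video:  {dynamic_range_db(8):.1f} dB")   # 48.2 dB
print(f"16-bit audio: {dynamic_range_db(16):.1f} dB")  # 96.3 dB
```

So 8-bit video sits around 48 dB, while 16-bit CD audio reaches roughly 96 dB before dither and noise shaping are considered.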
That's [device meets the performance criteria] what people like Ethan and I have been saying all along.
That is not my experience with you. I showed that if jitter is below 500 picoseconds peak to peak, then we have fully preserved the 16-bit audio samples at 20 kHz. You fought me for weeks, claiming we should accept far higher values. That jitter spec is what I define as "meets the performance criteria." Our CDs have 16 bits and 22 kHz bandwidth, so that is the jitter spec that needs to be met to be transparent. And once there, we are free from the requirement of running a blind test. Go above that, and you get the nasty job of characterizing all jitter profiles in the world!
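The ~500 ps figure can be reproduced from a worst-case slew-rate argument. The fastest-moving signal is a full-scale sine at the top of the band; if the sampling instant wanders by Δt, the amplitude error is at most slew rate × Δt. A sketch under one common criterion (amplitude error held within ±1 LSB; other criteria, e.g. ±half LSB, give proportionally smaller limits):

```python
from math import pi

def jitter_limit_pp_s(bits, freq_hz):
    """Peak-to-peak sampling jitter that keeps the amplitude error on a
    full-scale sine at `freq_hz` within +/- 1 LSB of a `bits`-bit quantizer.

    Worst-case slew rate of a full-scale sine of amplitude A is 2*pi*f*A,
    and one LSB of the 2A-wide range is 2*A / 2**bits, so the peak timing
    error allowed is (2*A / 2**bits) / (2*pi*f*A); peak-to-peak is double.
    """
    t_peak = 2 / (2 ** bits * 2 * pi * freq_hz)
    return 2 * t_peak

print(f"{jitter_limit_pp_s(16, 20_000) * 1e12:.0f} ps peak to peak")  # 486 ps
```

Under this criterion the limit for 16 bits at 20 kHz comes out to about 486 ps peak to peak, the same order as the 500 ps figure quoted above.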