Some More Evidence that Kids (American and Japanese) Prefer Good Sound

One thing I find very interesting in this long-running, multiple-thread campaign to discredit the Harman results -- it has been all about the Martin Logans and their incredible (literally) defeat at the hands of a cheap mid-fi speaker. No love for the B&Ws? Why no vigorous defense of the biggest high-end speaker brand in the world?

Tim
 
Because B&W was not represented in the test perhaps?

I didn't know B&W was the biggest high-end speaker brand in the world. Or are they a company that makes some high-end models, but not everything they make is high-end? What's the cutoff for "high-end" anyway?
 
Hi

Having read some of the posts here, I would repeat this about blind tests: they can be very humbling, often humiliating. Out go some tightly held beliefs. We audiophiles are often disturbed by the results of our inabilities in blind tests. I see this here. I would also say that the preferences aren't absolute metrics, but we will have to admit, somewhere deep inside, that we knew there has to be a correlation between accuracy and preference once some biases are weeded out; this is, after all, High Fidelity, not High Preferency (I made up that term).

A question I would like to see answered would be: How accurate are some of the highly touted and most expensive speakers out there? Objectively accurate? You would then see all the vitriol we audiophiles are capable of spewing when our core beliefs are challenged ...
 
Because B&W was not represented in the test perhaps?

I didn't know B&W was the biggest high-end speaker brand in the world. Or are they a company that makes some high-end models, but not everything they make is high-end? What's the cutoff for "high-end" anyway?

Anything that I don't own! ;) :p
 
Because B&W was not represented in the test perhaps?

They were represented in the original test that Amir posted about, the one that started all the Martin Logan fervor, but not in this one.

I didn't know B&W was the biggest high-end speaker brand in the world. Or are they a company that makes some high-end models, but not everything they make is high-end? What's the cutoff for "high-end" anyway?

Depends on how much you think high-end has to do with high fidelity, defined as fidelity to the source signal, vs. high-end defined as high dollar and/or appealing to the tastes of high-end audio consumers. By the first definition, a $400 pair of Infinitys beat 800-series B&Ws and big ML hybrids and finished in a statistical dead heat with Revels. By the second? It completely depends on who you're talking to and what their specific interests are. Vintage Tannoys can be more "high-end" than Magicos.

And a nod is as good as a wink to a blind horse.

Tim
 
One thing I find very interesting in this long-running, multiple-thread campaign to discredit the Harman results -- it has been all about the Martin Logans and their incredible (literally) defeat at the hands of a cheap mid-fi speaker. No love for the B&Ws? Why no vigorous defense of the biggest high-end speaker brand in the world?

Tim

I didn't see it as discrediting their results, just an interesting take on speaker preference. The other speaker was an older-model B&W 800, which did not sound good in this situation. Honestly, the $500 Infinity speaker was smoother and more natural.

However, as someone who owns the current 802D's, I have noticed that the current Diamond has a smoother mid/treble response than the past model, and it is relatively amplifier-sensitive. After playing my 802D's with about 40 different amplifiers over the last year and a half, I've had very varied results - as I have with the Martin Logans that I've owned.

Oddly, when I did the same blind test at McIntosh last summer, I was able to identify the 805D (which I then owned) from behind the curtain, picking up on its "sound" immediately, much to the surprise of the folks in the test.

I think the work done at Harman is solid and thorough. However, as I'm sure many of you know, every speaker needs a different setup to give its best. This is why we've never done product "shootouts", especially with speakers: the setup that provides great sound with one speaker may not be what is optimum with another. Which also explains why you've heard a speaker sound fantastic at one show, yet not so much at another.

Personally, I find the work they are doing at Harman pretty amazing, especially considering how many variables go into this!
 
Hi

Having read some of the posts here, I would repeat this about blind tests: they can be very humbling, often humiliating. Out go some tightly held beliefs. We audiophiles are often disturbed by the results of our inabilities in blind tests. I see this here.

FrantZ,

Sorry, but IMHO there is something wrong with your logic. Only people who do not understand the limitations of the tests we are referring to can be humiliated by their results. This type of test shows a statistical preference under the conditions outlined - no fine-tuning for each speaker, deliberate complete ignorance of system matching, a short selection of music highlighting some aspects that the organizers feel are important to define preference. Perfect; it is their purpose.

I choose not to debate in public the main subject of this thread - it is a noisy environment, where I would be an immediate loser facing experienced and dedicated defenders. As the best of them have strong trade connections, any imprudent comment could result in losing friends or suggesting I want to hurt their business or defend others.

BTW, these blind tests are not humbling - reading F. Toole's "Sound Reproduction", yes, that can be humbling.
 
It will come as no surprise that I take a very different view of the system-matching issue raised here. If you're testing four speakers, the amplification should be suitable for the most difficult load among the four. If the playing field is still uneven because one of the speakers requires a very specific voice in system components, or an off-standard spec, or very specific program material to sound its best, I consider that a fatal flaw in that particular speaker, one the test has revealed.

Regarding the purpose of Harman's testing, the purpose is clear and simple: to confirm the relationship between comprehensive FR measurements and preference, with the objective of using that information to design and refine future products. Toole and Olive are good scientists and have constructed not only a very good method for this test, but good methods to test all the points at which the methodology could break down (prejudices of trained listeners, correlation between preferences when testing in mono and preferences for the same speakers in stereo, etc.). They've been very thorough. I don't think a single substantive objection has gone unanswered. And any suggestion that they would deliberately skew the results of testing meant to drive the marketability of their own products is pretty absurd.

Oh, and by the way, regarding the matching issue: the Revels are a very difficult load, and they fared well.

Tim
 
The graphs I saw posted had Infinity, Polk, Klipsch and Martin Logan.

As for the "it depends" answer... huh? Relativism taken to new heights, Tim. What's gotten into you?

There have been accusations and Sean has answered them. There have been valid questions that haven't been directly answered. Whether the answer was sufficient is not up to us.

Like I said, I have no stake in this, but I have observations about the methodology that make me raise an eyebrow or two. The biggest is that these are results of preferences for off-axis response based not on individuals made to sit at different positions, but rather on an average of rankings from a set of individuals sitting at static positions on or off axis. The listeners sitting 20 degrees off axis only heard these speakers at that angle, while others sat at other angles; their rankings were then summed and averaged. Second, sample sizes differed. Why is this an eyebrow-raiser? The bigger the sample, the bigger the variance in listening distance and angle. Third, the exclamation point was that the trained listeners, unlike all the others who were arrayed at various angles, all listened on axis. Dude, what's wrong with that picture? They could have been lined up just like the others, but weren't. Lastly, the devices under test are meant to be used in pairs, not singly, yet they were tested singly.

One more thing: the conclusion is that Gen Y prefers the more accurate speaker. Who determined THAT, and under what conditions? Are they more accurate off axis? Maybe so, but what does that really mean in PRACTICAL terms for a customer who listens within the narrow listening window of a PAIR of the loudspeakers with the narrower listening windows? You tell me. What it implies in practical terms isn't even that you can walk around a bigger zone of your room with the Infinitys and Polks and get more consistent sound, but rather that if you've got other people scattered around your room, there is a probability that THEY will like what they hear more. Remember, the subjects sat at different angles from each other and did their rankings from their respective positions. They were not made to rank the speakers from different seats. In other words, this tested group preferences, not individual preferences.
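To make the distinction concrete, here is a minimal sketch (hypothetical numbers, pandas assumed; not Harman's data or code) of how fixed-seat ratings averaged into one group score can favor a speaker that the on-axis listeners did not prefer:

```python
# Hypothetical ratings, invented purely for illustration; not Harman data.
import pandas as pd

ratings = pd.DataFrame({
    "seat_angle": [0, 0, 20, 20, 40, 40],          # each listener stays at one angle
    "wide_disp":  [6.0, 6.5, 7.0, 7.5, 7.0, 7.5],  # wide-dispersion speaker
    "narrow":     [8.0, 8.5, 6.0, 5.5, 3.0, 2.5],  # narrow-window speaker
})

# Each listener rated only from their own fixed seat; overlaying the
# results in one chart effectively reduces to a group mean per speaker.
print(ratings[["wide_disp", "narrow"]].mean())
# wide_disp ~6.9 beats narrow ~5.6 for the group, even though both
# on-axis listeners (rows 0-1) scored the narrow speaker higher.
```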

Does this mean there is no correlation between smooth off-axis response and preference? No, not at all. The statistics still look sound, so it is a fair conclusion. However, there is no law that says you can't question the manner in which the data was gathered and presented. It is clear that the sets of subjects were not given identical tests, yet the results were put in a chart that overlaid their respective results.

I bowed out of this thread, but honestly, Tim, your tone just irked me. Before you dismiss those asking questions that Sean and company just might benefit from, and spin up some conspiracy theory, learn to look beyond the numbers and the charts. A lot of those questions have merit.
 
Hi

Having read some of the posts here, I would repeat this about blind tests: they can be very humbling, often humiliating. Out go some tightly held beliefs. We audiophiles are often disturbed by the results of our inabilities in blind tests. I see this here. I would also say that the preferences aren't absolute metrics, but we will have to admit, somewhere deep inside, that we knew there has to be a correlation between accuracy and preference once some biases are weeded out; this is, after all, High Fidelity, not High Preferency (I made up that term).

A question I would like to see answered would be: How accurate are some of the highly touted and most expensive speakers out there? Objectively accurate? You would then see all the vitriol we audiophiles are capable of spewing when our core beliefs are challenged ...

Yes, on the surface, I would agree with you that high preference ratings do not necessarily imply high fidelity, yet that is what the listening test results and objective measurements so far are telling us.

Dr. Toole's loudspeaker research at the National Research Council up until the late 1980s measured listener ratings of "Fidelity" -- not preference -- yet his results and conclusions did not change when we switched the scale from fidelity to preference. The evidence so far indicates there is a high positive correlation between the subjective measures of preference and fidelity and the objective measures of the loudspeaker's accuracy/linearity.
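As an illustration of what a "high positive correlation" between the two scales means operationally, here is a minimal sketch (hypothetical ratings, scipy assumed; not NRC or Harman data):

```python
# Hypothetical per-speaker mean ratings on the two scales; invented numbers.
from scipy.stats import pearsonr

fidelity   = [8.2, 7.1, 6.4, 5.0, 4.3]  # mean "fidelity" rating per speaker
preference = [8.0, 7.4, 6.1, 5.2, 4.0]  # mean "preference" rating per speaker

r, p = pearsonr(fidelity, preference)
print(f"r = {r:.2f}, p = {p:.4f}")
# r close to +1 means the two scales order the speakers the same way,
# which is why switching from a fidelity scale to a preference scale
# did not change the conclusions.
```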

To answer your question regarding how accurate some of the highly touted and most expensive speakers out there are: first, we would have to identify and agree on which speakers fit that description. Second, we would need to get samples and test them with objective measurements known to accurately predict listener preference. Not many manufacturers have access to such measurements, but I would be happy to provide objective measurements if the samples are provided.
 
The graphs I saw posted had Infinity, Polk, Klipsch and Martin Logan.

As for the "it depends" answer... huh? Relativism taken to new heights, Tim. What's gotten into you?

There have been accusations and Sean has answered them. There have been valid questions that haven't been directly answered. Whether the answer was sufficient is not up to us.

Like I said, I have no stake in this, but I have observations about the methodology that make me raise an eyebrow or two. The biggest is that these are results of preferences for off-axis response based not on individuals made to sit at different positions, but rather on an average of rankings from a set of individuals sitting at static positions on or off axis. The listeners sitting 20 degrees off axis only heard these speakers at that angle, while others sat at other angles; their rankings were then summed and averaged. Second, sample sizes differed. Why is this an eyebrow-raiser? The bigger the sample, the bigger the variance in listening distance and angle. Third, the exclamation point was that the trained listeners, unlike all the others who were arrayed at various angles, all listened on axis. Dude, what's wrong with that picture? They could have been lined up just like the others, but weren't. Lastly, the devices under test are meant to be used in pairs, not singly, yet they were tested singly.

One more thing: the conclusion is that Gen Y prefers the more accurate speaker. Who determined THAT, and under what conditions? Are they more accurate off axis? Maybe so, but what does that really mean in PRACTICAL terms for a customer who listens within the narrow listening window of a PAIR of the loudspeakers with the narrower listening windows? You tell me. What it implies in practical terms isn't even that you can walk around a bigger zone of your room with the Infinitys and Polks and get more consistent sound, but rather that if you've got other people scattered around your room, there is a probability that THEY will like what they hear more. Remember, the subjects sat at different angles from each other and did their rankings from their respective positions. They were not made to rank the speakers from different seats. In other words, this tested group preferences, not individual preferences.

Does this mean there is no correlation between smooth off-axis response and preference? No, not at all. The statistics still look sound, so it is a fair conclusion. However, there is no law that says you can't question the manner in which the data was gathered and presented. It is clear that the sets of subjects were not given identical tests, yet the results were put in a chart that overlaid their respective results.

I bowed out of this thread, but honestly, Tim, your tone just irked me. Before you dismiss those asking questions that Sean and company just might benefit from, and spin up some conspiracy theory, learn to look beyond the numbers and the charts. A lot of those questions have merit.


Jack,

I've explained before why we test single speakers versus pairs, and there is a section dedicated to this topic in Dr. Toole's book. In simple terms, listeners are most discriminating when listening to a single source. You add multiple sources and their discrimination is reduced. In stereo vs mono loudspeaker comparisons the conclusions don't generally change, but there is simply more noise in the data, and more interaction effects with the source material. The recording mixing/methods largely dictate the spatial illusions listeners hear.

Regarding the variable seat location: I don't think it's invalid to test a loudspeaker in multiple seating positions since that is a test of its real-world use. A lot of consumers don't toe in their loudspeakers and sit in the sweet spot, which means they are not sitting on-axis. If they have friends over to watch movies, many of them are not sitting on-axis. If you are a bachelor audiophile with no friends, then you probably do most of your listening alone on-axis, but that is a statistically small portion of the audio community. Maybe Speaker D is designed for such people, but in my view that is an anti-social loudspeaker.

If you ever participated in loudspeaker tests at the NRC, there were 4 listening seats used (2 seats in each of the two rows). Other companies like B&O also test their speakers in multiple seating positions. When we do informal evaluations, listeners get up and compare the speakers in different seats to get a measure of their sound quality on versus off axis. Listening only on-axis in an acoustically dead room would guarantee that the results are not very meaningful and cannot be generalized to real-world conditions and usage, where listeners sit both on and off axis and hear a combination of direct sound and reflections produced by the loudspeaker at off-axis angles.

Since we track seating as a variable, I'm able to statistically test the influence of seat on loudspeaker preference and balance the sample size across seats so that there is no bias related to seat position. So far, seat position has been shown to interact only with loudspeaker D, because its frequency response is so variable as you move off-axis. In summary, most of your complaints can be addressed in the statistical treatment of the data. I've noted that the data from the Harman trained listeners were based on listening in the same seat, so direct comparisons between their results and the students' should be viewed with that caveat in mind.
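For what it's worth, here is a minimal sketch of the kind of check described above - a two-way ANOVA testing whether seat position interacts with loudspeaker preference. The file name and column names are hypothetical, and this assumes pandas and statsmodels; it is not Harman's actual analysis code:

```python
# One row per individual rating: which speaker, which seat, what score.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

df = pd.read_csv("ratings.csv")  # hypothetical file with columns: speaker, seat, rating

# Model the rating as a function of speaker, seat, and their interaction.
model = ols("rating ~ C(speaker) * C(seat)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
# A significant C(speaker):C(seat) term flags a speaker (like "D" above)
# whose rating depends on where the listener sits; balancing the sample
# across seats keeps the main speaker effect free of seat bias.
```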
 
Come on now, do you really think that if you build products that are technically superior to their competitors they will naturally rule the marketplace? Don't you think that marketing, advertising, industrial design, distribution, and other factors unrelated to sound quality matter?

Of course I do think that marketing and advertising matter. That's why I said that if Harman is building speakers that are technically superior to the competition, then with Harman's ability to fund marketing and ad campaigns, you should be able to rule the speaker world in terms of sales. Who could afford to outspend Harman in the advertising and marketing realm?
 
One thing I find very interesting in this long-running, multiple-thread campaign to discredit the Harman results -- it has been all about the Martin Logans and their incredible (literally) defeat at the hands of a cheap mid-fi speaker. No love for the B&Ws? Why no vigorous defense of the biggest high-end speaker brand in the world?

Tim

Because I have never owned a B&W speaker and therefore I can't comment on their sound quality. I owned a pair of the original Martin Logan Aerius for years and I know how they sound.
 
Of course I do think that marketing and advertising matter. That's why I said that if Harman is building speakers that are technically superior to the competition, then with Harman's ability to fund marketing and ad campaigns, you should be able to rule the speaker world in terms of sales. Who could afford to outspend Harman in the advertising and marketing realm?

Yeah, look at Bose! Shows you what a couple hundred million dollars in advertising will buy you. While there may be only one or two Revel owners coming forward, you can pretty much bet there aren't any Bose owners here. The 901 Mk. 4 was quite possibly the worst speaker I've ever heard.
 
Hi Sean,

I didn't say that your tests were invalid. In fact, I said the opposite. Neither did I draw any conclusions, which I think we both agree would be foolish of me. What I pointed out is why the preference chart raised questions in my mind and in some others'. The tests are all valid individually. The thing is, they do differ. I didn't accuse you of misleading anyone either, because you did supply n and you did point out that the trained panel listened on axis. In fact, these triggered the request for the SD and my questions about how the panelists were positioned: looking at the chart, I noticed the difference in preference ratings between two groups for the two loudspeakers with the narrower listening windows, and the group that differed most from the others had the largest n. A larger n might mean more rows, more participants in those windows. Amount of training might not have been the only factor; it might not even be the more significant one. In any case, it might be worth investigating.

So, am I attacking you and your work? Of course not. Remember, aside from Amir, I don't know anybody else in this forum who has proudly owned a source-to-loudspeaker Harman Specialty Group system other than myself. Heck, I'm always moving around my room and I love to entertain. My room looks more like a club than a listening room. That's the reason I moved away from planars. Let's not call folks who like them anyway anti-social, but I'm glad you acknowledge that, small as their market is, within that window their speakers can and do provide satisfaction.

Supposing my hunch does have some merit? Harman's bread and butter when it comes to loudspeakers is professional sound reinforcement products - products that are highly directional. Further study could yield all sorts of benefits, such as more detailed recommendations for fixed and mobile installations in their manuals. What harm did I do by asking?

So, was it wrong to overlay the results even if the tests were not identical? Again, no. You stated the differences in key variables for each group. My thing is that some guys here are talking down to those of us seeking clarification without seeing what is right in front of their noses. That is what irked me.

With regard to testing mono vs. stereo, I guess I need more convincing. I still find it counterintuitive, even if using mono yields less noise.

Best,

Jack
 
I find the tests incomplete if not partially flawed, for a number of reasons:

  1. No context wrt upstream electronics and, most importantly, to musical content
    1. For example, would the top-rated speaker falter with complex music, where the ML should pass with flying colors??? I would strongly suspect so.
    2. If the ML is less colored while the others are much more so, and the recording wasn't high quality, no one would favor a raw(er), bland-sounding speaker
    3. Perhaps people simply like the presumably mellow sound of these cheap dynamic speakers???
  2. What do we know about long-term listening fatigue, if any??? Nothing, except that ML owners on martinloganowners.com usually keep their speakers for very long periods of time and/or are repeat customers
  3. No reference has been made to the break-in periods of all speakers involved
  4. It's HIGHLY suspect that a panel would be chosen as the poster child for the comparison, considering that panels of most kinds sound considerably different than dynamic speakers - they just don't fit in the test; an expensive dynamic speaker should have been chosen instead. Very few people are exposed to and/or like panels, and it's unknown what prior exposure to panel speakers those "trained listeners" have had
Sorry, but I can't help but be highly suspect of [strong?] biases in this test.
 
I bought my ML Aerius speakers new and kept them for years. Long enough to go through the original set of panels and order a new pair from the factory. I had years of enjoyment from these speakers. You can talk about how the dynamic bass driver doesn't integrate perfectly with the panel. You can talk about how the ultimate dynamic range is restricted. But the purity of electrostats in the mids and highs is something to behold. And that is why I have a hard time buying that a pair of budget Infinity speakers are world beaters, making ML speakers far more expensive than the ones I owned sound bad.
 
I find the tests incomplete if not partially flawed, for a number of reasons:

  1. No context wrt upstream electronics....

  1. I posted the info before from Sean's AES paper. Here it is again:

    "The program signals were reproduced from the hard
    disk on the control computer equipped with a digital sound
    card (SEK’D ProDif 96). The AES-EBU signal was fed to
    a digital switcher–distributor (Spirit 328 digital mixer) and
    converted to four analog signals using an eight-channel
    Studer D19 digital-to-analog converter. Precise level
    matching between loudspeaker was done by adjusting the
    trim controls on each analog output. Each loudspeaker was
    amplified with a Proceed AMP3 amplifier."


    I don't know if they have updated them since that report came out (2003). And oh, don't say anything bad, as the Proceed AMP-5 is powering part of my home theater. :D

    and, most importantly, to musical content
    I listed this also from the report in the other thread. Here it is again:

    "James Taylor, “That’s Why I’m Here” from
    “That’s Why I’m Here,” Sony Records.
    Little Feat, “Hangin’ on to the Good Times” from
    “Let It Roll,” Warner Brothers.
    Tracy Chapman, “Fast Car” from “Tracy
    Chapman,” Elektra/Asylum Records.
    Jennifer Warnes, “Bird on a Wire” from “Famous
    Blue Rain Coat,” Attic Records."


    It's HIGHLY suspect that a panel would be chosen as the poster child for the comparison, considering that panels of most kinds sound considerably different than dynamic speakers - they just don't fit in the test; an expensive dynamic speaker should have been chosen instead. Very few people are exposed to and/or like panels, and it's unknown what prior exposure to panel speakers those "trained listeners" have had
    I am not quite following your comment here :). But in the comparison I heard, there was a B&W in addition to the ML and JBL. I rated the B&W below the JBL, and the ML much lower than both. I had never heard the JBL speaker in question before. And as you say (?), the sound of the ML was very distinct.

    Sorry, but I can't help but be highly suspect of [strong?] biases in this test.
    Can you please help me, as someone who sat through this test, to understand your point here? I sat in a room not knowing what to expect. A curtain was in front of me. Speakers came and went and I scored them. What biased me?
 
Boy oh boy, this thread is amazing in its own way.

Or was it one of the many parallel ones??

Anyway, it is good to find someone who preferred the MLs in the blind test. The point being, none of this is absolute; there is plenty of wiggle room for preference, the measurement merely being a predictor of preference (90% was the figure, I think).

Like all things in life, it has a bell curve attached to it.


Very few people are exposed to and/or like panels, and it's unknown what prior exposure to panel speakers those "trained listeners" have had
Sorry, but I can't help but be highly suspect of [strong?] biases in this test.

If that is acknowledged, why the fuss if the same results come up blind??
 
