Some More Evidence that Kids (American and Japanese) Prefer Good Sound

Amir, I did not miss your post or the relevant articles you quoted. Like I said, I have no problem with the statement that listeners prefer more accurate speakers. My beef is with this particular study, which reads suspiciously like a piece of Harman marketing. I have said the same thing three times now, only this time I have bolded the relevant statements.

And BTW, I might go back and take a closer look at the stats quoted in the other paper, since it appears to be the only full paper cited on this thread. I am not willing to pay $20 to download an article from the AES.

Yeah. Shameless huckster....

Mep:
Do speakers A, B, C, and D follow the pictures shown from left to right, with the Infinity speaker being speaker A?

Sean Olive:
I've purposely kept the brand identities out of the test results since that is the accepted protocol for AES publications.

Oh, by the way, a page later Mep pressed Sean, saying it was unfair not to tell us which speakers were which. I still don't see a post where Sean revealed the rankings of the speakers.

Tim
 
Amir,

I do not want to enter your fight for accurate speakers, but it seems you have never owned electrostatics for a reasonably long time. I have experience with Quads, Soundlabs, Audiostatic, Final, Martin Logan and even the old B&W DM70. The effects of burn-in on these speakers are very noticeable, and have a slow, long time component. It seems your comfort and stability criteria for a domestic speaker rule out this type of speaker. OK, let us accept that without denigrating the manufacturers. And maybe debate it in a thread about burn-in, not one about preferences.
I was told that these speakers sound like crap but that after 24 hours that goes away. I am asking why, if that is such a significant failing, the speaker manufacturer does not burn them in for that amount of time. Is your experience the same, that in the first 24 hours they sound that bad? How would any company survive such a user experience? If I sold a new car that hesitated and died all the time in the first 24 hours, how would I ever be a successful company?

If this point is not related to the validity of this test, then sure, let's take it to another thread. But to the extent it is brought up to denigrate this specific test/company/research, we should continue it here unless the point is withdrawn.
 
There's at least one designer who disagrees here:

http://www.stereophile.com/content/speaker-break-1
Hmmm. I see no report there, just a forum discussion started by John Atkinson asking: "So, do you all find speaker break-in to be real?" If the article they published, which I can't see, was definitive, why is he still asking the question?
That shows variations in drivers, not measurements of the audio coming out of a speaker. I assume the latter is what we care about if we want to use them for music, as opposed to looking at them :). If I changed the temperature in the room by 10 degrees, mechanical changes would occur in the device just the same. That doesn't translate into the speaker sounding different. At least let's hope not :). Those are the types of measurements I have seen -- anechoic measurements.

This reference is curious, Myles. It says this: "However, let me be perfectly clear – if a loudspeaker that you’re auditioning, demoing, or have purchased doesn’t sound excellent right out of the box (excluding optimizing placement and final tweaking), immediately pack it up and send it right back where it came from, period! My offerings included!" If you are going to use this article as authoritative, does it mean you also accept this part and, with it, return the electrostatics that you said sound like crap in the first 24 hours?

He goes on to say: "Now, if you decide to purchase that drop-dead gorgeous pair of $20k High Jinks and choose to follow their strictly outlined homage-inducing burn-in protocol, here’s where it really gets serious. You are about to be sucked into the biggest psychoacoustic hoax in all of audio! I promise you that all of the burn-in and break-in time in the world isn’t going to make an inferior loudspeaker a world-class loudspeaker. What will happen is that over a period of time, the loudspeaker will continue to sound terrible. However, the “break-in” transition is really taking place within your brain. Through long term exposure, your ear/brain is literally psychoacoustically transitioning this inferior device into a palatable and pleasant experience, and pretty soon, even the worst and most obvious design flaws are fully compensated or masked by the brain. An auditory illusion is far from world class audio! So, you don’t believe in auditory illusions?! Here’s an eerie one for you: http://en.wikipedia.org/wiki/Shepard_tone.

Timid bass/poor bass weighting, midrange shout (a well-known full-range manufacturer’s infamous attribute), a forward-sounding image, directional sound, and poor off-axis power distribution (all of which are common attributes of many full-range loudspeaker designs), are simply the product of poor or even intentional design, and have absolutely nothing to do with break-in."


Seems like there is agreement with the research being presented here! Indeed, Dr. Toole provides the same explanation as to why people accept poorly designed speakers: that they adapt to them over time, just as the brain puts aside constant environmental noise and such.

He goes on:

"Once again, if it is a full-range speaker and if it sounds bad out of the box, don’t wait for a miracle to happen; send it straight back to whomever you got it from because the speaker has serious problems. If it sounds quite good but has a very subtle graininess, or a few mild distortions, or the sound is crisp but still pleasant, then you’ve got a keeper! Sit back and be patient, and you will be pleasantly rewarded within 15-20 hours of moderate playback levels – this amount of break-in time will get you to about 80%-complete break-in. Beyond that, a functionally full and nearly complete break-in period of 200+ hours is quite reasonable and is to be expected."

Taking this at face value, it is hard to imagine a speaker losing such shoot-outs due to these subtle differences. That said, I say again, this is not the type of data I was asking about. I'd like to see measurements of the sound we hear, not a description of what should happen. Such distortions should be readily visible in measurements of the speaker. Why aren't they shown?
 
When doing the blind test at Harman, one of my colleagues and I both picked the Martin Logan speaker on all three tests, and least liked the B&W speaker on all three. However, the Infinity speaker we heard was incredibly good for $500. In my notes, the thing that struck me about the speaker that turned out to be my choice was its coherence. Having been an ESL lover for many years, it was interesting that that was the trait I picked up on the most...

I have to say that the folks at Harman really went the extra mile to dial in their methodology and it would be fun to spend more time investigating. It certainly was more scientific than trying to remember what you heard months or years ago!
 
Amir - once again, my issue is with this particular paper, or at least what has been presented in public so far. I would need to read the entire original article to give a more balanced viewpoint. However, from what I see, I am doubtful whether the numbers reach statistical significance, and if they don't, then there is no point publishing the paper.
Keith, if we don't have some data, isn't it proper to withhold judgement and go and get the data? And what do you mean by "from what I see"? 70 speakers tested with 300 subjects is not enough? What is enough? 1000? 10,000? Something tells me the doubt will always be there. That is cool but let's not couch that as a scientific point. Science requires us to have the data and make a specific argument.

I do not have an opinion on the 1986 paper, because I have not read it beyond the abstract which you linked to. You cannot form an opinion on a paper from reading the abstract. I do not want to pay $20 to download the paper either. I trust that you have read it and found the data and analysis to be to your satisfaction?
Well, I have read it fully and am representing that it says the same thing that these later tests do. So you are left doubting my ability to understand it or to represent it truthfully. Both of which can be remedied by spending a fraction of the money we often spend buying wires for these speakers to get the study. Spend $44 and buy Dr. Toole's book and you will see a summary of all of this research. Why don't we want to do that, instead of making assumptions without reading the material? Don't you think it is unfair to say you won't spend the money yet you are sure there are some shenanigans going on here? How is this fair to the reputation of people or companies?

I should also make the point that just because a paper is sponsored by a commercial entity, does not mean its results can be summarily dismissed. The paper should still be evaluated on its merits, but one should pay attention to any parts of the experiment that suggest that a conflict of interest may have skewed the result. Conversely, just because a paper was funded by a university or government department, does not mean it is free from bias. All papers should be evaluated on what is presented (and questions asked about what has not been presented). This is much more nuanced than simply accepting or rejecting papers based on who sponsored them.

Ergo, I have no opinion on Dr. Olive's previous work because I have not read the papers.

Once again: I support the position Dr. Olive is advocating. But I do not think this paper is particularly convincing.
It is great that you apply scrutiny to research papers. We should always do that. But please, let's be fair to people and not say we smell a rat when plenty of evidence is put forward that such is not the case. The fact that we don't want to spend $20 to read a 50-page report on acoustics and speaker design, or $44 for a 700-page book, is more our problem than the author's :).
 
One more thought on the concept of break in, with speakers or components...

On the occasions that I've been able to get two samples of the same component and run one for a few hundred hours, then do a side-by-side A/B listening, the component with hours on the clock always sounds more natural. Some components have a wider range of change, but there's always been enough of a difference that even the most untrained ear could hear it.

Another really interesting thing that Dr. Olive pointed out when he gave us the demo was that even the most untrained ears could tell the difference between really good sound and awful sound. However, it took a trained listener to pick out and quantify smaller differences between components.

All good stuff!
 
When doing the blind test at Harman, one of my colleagues and I both picked the Martin Logan speaker on all three tests, and least liked the B&W speaker on all three. However, the Infinity speaker we heard was incredibly good for $500. In my notes, the thing that struck me about the speaker that turned out to be my choice was its coherence. Having been an ESL lover for many years, it was interesting that that was the trait I picked up on the most...

I have to say that the folks at Harman really went the extra mile to dial in their methodology and it would be fun to spend more time investigating. It certainly was more scientific than trying to remember what you heard months or years ago!

See guys? Somebody picked the MLs. At least two people picked the MLs. I'll even bet some of the folks in the Harman study picked the MLs. Statistically they came in dead last, two cars behind a pair of cheap midfi towers, but just tell us you don't listen to statistics and you can stop shooting holes through the messenger.

Tim
 
The one thing that the Harman folks (the employees who have been trained as educated listeners) don't have, though, is user bias. I admit I've always gravitated towards the sound of an ESL since the first time I heard a pair of Acoustats back in 1980. While I currently have a pair of cone speakers that I enjoy as much as, and in some ways even more than, my CLXs, the sound of an ESL, whether it be ML, Quad, SoundLabs, Acoustat... always grabs me. Though I've owned Magnepans on and off over the years and think they are excellent speakers, they never hold my attention in the way that a great ESL does.

I think that's part of the ghost in the machine that the measurements don't really explain... As it is with a listener that really wants pinpoint imaging, killer bass extension, etc etc.

While I think the measurements can reveal a lot, they just don't tell me what a great speaker for ME will be.

That's why the review process is so difficult in terms of being useful for our readers. We can only tell you what we've heard to the best of our ability. No matter how excited we are about a particular pair of speakers, they may not be right for you.

But it does keep me employed. :)
 
I was told that these speakers sound like crap but that after 24 hours that goes away. I am asking why, if that is such a significant failing, the speaker manufacturer does not burn them in for that amount of time. Is your experience the same, that in the first 24 hours they sound that bad? How would any company survive such a user experience? If I sold a new car that hesitated and died all the time in the first 24 hours, how would I ever be a successful company?

If this point is not related to the validity of this test, then sure, let's take it to another thread. But to the extent it is brought up to denigrate this specific test/company/research, we should continue it here unless the point is withdrawn.

Amir,

I have no experience with virgin MLs, but I have direct experience with several pairs of new Soundlab speakers, and I can assure you they really sound very poor during the first few days. Happily they are heavy and not easy to move, otherwise some users might immediately send them out through the window, or at least take them back.

Electrostatic panels combine high-resistivity coatings with very thin dielectrics that have complicated mechanical properties, and every type of coating and film may behave differently. I am not prepared to debate its influence on sound quality and I do not expect to find scientific papers on it :), but it is one of the cases where I expect significant burn-in effects. OK, now you can suggest it is all expectation bias. ;)

And, IMHO, as usual, car analogies do not bring any value to the debate.
 
I had the same experience with my Acoustats; they took a few days to really sound good. At the time the dealer told me that the panels needed time to fully charge. Once, when returning from a vacation and having had the speakers unplugged for about two weeks, they again sounded fairly dead and needed to be plugged in for a while to sound their best.
 
We are digressing, but here are some facts about electrostatics to keep in mind, based on my experience: 1) The mylar needs to be broken in; 2) The stators take time to charge; 3) They are sensitive to humidity - MLs sound best at 45%-55% relative humidity; above 60% you start to hear rolled-off highs, and it's quite evident at 75% and higher. Just run a Windexed piece of cloth over a stator to clean it and hear the sound disappear for a few seconds.
 
I have taken the test twice, and both times I sat fully on-axis, with ML among the speakers being presented. In both cases I gave ML much lower scores than the others.


Per the above, I heard it on-axis, so I assume that is how the other testers heard it. Its sound was very unnatural compared to the others under test.

I wanted to address these two comments... The devil is always in the details (which is why A/B tests are meaningless to me unless all factors and variables are accounted for): what were the speakers involved, the room, the speaker impedance curves and the amps driving them (extremely important with ML and all 'stats due to their deleterious/fatal impedance curves), the humidity (see above), etc. No one should discount the ability (or lack thereof) of an amp to drive a difficult load _accurately_. Newer ML speakers have more dramatic impedance drops - e.g. Summit, Summit X and Montis (0.5 to 0.8 ohms) - than older models - e.g. Prodigy, Odyssey (1.0 ohms). I've tested mine with a number of hefty and otherwise able SS amps that were just not rated below 4 ohms, and they consequently sounded like crap. So there are a variety of reasons I chose the amps I am using now, and at the heart of it is their ability to properly drive these speakers.
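
To put rough numbers behind that point about impedance minima, here is a back-of-the-envelope sketch in Python. It treats the load as purely resistive and assumes an illustrative 20 V peak drive level, both simplifications of a real, reactive ESL load:

```python
# Back-of-the-envelope: peak current and instantaneous power an amp must
# deliver into various impedance minima, assuming (illustratively) a
# 20 V peak drive signal and a purely resistive load.
v_peak = 20.0                       # volts, assumed drive level
for z_min in (8.0, 4.0, 1.0, 0.5):  # nominal loads vs. ESL minima (ohms)
    i_peak = v_peak / z_min         # Ohm's law: I = V / Z
    p_peak = v_peak * i_peak        # instantaneous power: P = V * I
    print(f"{z_min:4.1f} ohm -> {i_peak:5.1f} A peak, {p_peak:6.0f} W peak")
```

The same 20 V that asks 2.5 A of an 8-ohm load demands 40 A at 0.5 ohm, which is why an amp not rated below 4 ohms can fall apart on these speakers.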

But I will also grant you that depending on the ML model you listened to (and everything else being optimal), their crossovers might just not have been that good (hence again the request for details).
 
When doing the blind test at Harman, one of my colleagues and I both picked the Martin Logan speaker on all three tests, and least liked the B&W speaker on all three.

Are you saying that you and your friend picked the Martin Logan speaker as the best sounding on all three tests? If so, that is quite interesting given all the disparaging remarks others have had for the ML in the same Harman tests.
 
I was told that these speakers sound like crap but that after 24 hours that goes away. I am asking why, if that is such a significant failing, the speaker manufacturer does not burn them in for that amount of time. Is your experience the same, that in the first 24 hours they sound that bad? How would any company survive such a user experience? If I sold a new car that hesitated and died all the time in the first 24 hours, how would I ever be a successful company?

If this point is not related to the validity of this test, then sure, let's take it to another thread. But to the extent it is brought up to denigrate this specific test/company/research, we should continue it here unless the point is withdrawn.

Once you unplug the ML, it discharges. Then it takes another 24 hrs for it to return to form. So how is a company going to do that? Supply a battery?

And as far as cars go, as a former mechanic I can tell you: you don't start driving a brand-new car at 100 mph, do you? You have to break a new car in for a given number of miles to allow all the metal surfaces in the engine to wear in.
 
70 speakers tested with 300 subjects is not enough? What is enough? 1000? 10,000? Something tells me the doubt will always be there. That is cool but let's not couch that as a scientific point.

You do a power calculation to decide how many subjects you need to test in order to demonstrate statistical significance. You can do this while you are running the trial: calculate how close you are getting to the target p-value and then extrapolate how many more subjects you need to recruit.
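
For anyone curious, here is a minimal sketch of what such a power calculation looks like in Python using statsmodels. The effect size, alpha, and power values are illustrative assumptions, not numbers taken from any of the papers discussed in this thread:

```python
# A priori power calculation for a one-way ANOVA comparing four
# loudspeakers on preference ratings. All inputs are illustrative.
from statsmodels.stats.power import FTestAnovaPower

n_total = FTestAnovaPower().solve_power(
    effect_size=0.25,  # Cohen's f; 0.25 is conventionally a "medium" effect
    alpha=0.05,        # significance level
    power=0.80,        # desired probability of detecting the effect
    k_groups=4,        # four speakers under test
)
print(f"Total subjects needed: {n_total:.0f}")  # roughly 180 for these inputs
```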

Well, I have read it fully and am representing that it says the same thing that these later tests do. So you are left doubting my ability to understand it or to represent it truthfully.

Please point to a single statement I made where I said that I doubt your ability to understand it or to represent it truthfully. In any case, I have no more interest in discussing this with you when you are attributing things to me which I have not said.
 
You do a power calculation to decide how many subjects you need to test in order to demonstrate statistical significance. You can do this while you are running the trial: calculate how close you are getting to the target p-value and then extrapolate how many more subjects you need to recruit.
As I mentioned, this is already done in the report, and extensively so. Indeed, I have not seen a double-blind test report presented at the AES that is devoid of it. Here is a sample extract from Sean's report:

"In both tests there was a highly significant difference
in preference between the different loudspeakers; F(3,
258) 231.0, p < 0.0001 for the four-way test, and F(2,
292) 149.2, p < 0.0001 for the three-way test. A Scheffé
post-hoc test performed at a significance level of 0.05
showed a significant difference in the means between all
pairs of loudspeakers in both tests."
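
For readers not used to the notation: F(3, 258) is an F statistic with 3 between-group and 258 within-group degrees of freedom, and the p-value is the probability of seeing differences that large by chance alone. Here is a toy recreation of that kind of one-way ANOVA in Python; the ratings are invented purely to illustrate the method, not to reproduce Sean's data:

```python
# Toy one-way ANOVA over invented 0-10 preference ratings for four
# loudspeakers, mirroring the F-test quoted above (data fabricated
# purely for illustration).
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(0)
a = rng.normal(8.0, 1.0, 65)  # 65 hypothetical ratings per speaker
b = rng.normal(6.5, 1.0, 65)
c = rng.normal(5.5, 1.0, 65)
d = rng.normal(4.0, 1.0, 65)

F, p = f_oneway(a, b, c, d)
print(f"F(3, {4 * 65 - 4}) = {F:.1f}, p = {p:.2e}")
# A Scheffe post-hoc test (e.g. scikit-posthocs' posthoc_scheffe) would
# then compare every pair of speaker means at the 0.05 level.
```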


Please point to a single statement I made where I said that I doubt your ability to understand it or to represent it truthfully.
I provided references to research that showed the same conclusions as the Harman report, from when the researchers worked at the NRC. You said that from the abstract alone, you could not accept that reference. Since I have read the paper and presented its conclusions as such, all that is left is doubting my account of it.

In any case, I have no more interest in discussing this with you when you are attributing things to me which I have not said.
I am sorry you feel that way. I thought the line of reasoning was very clear that you were not accepting the evidence that I was putting forward as genuine.
 
Are you saying that you and your friend picked the Martin Logan speaker as the best sounding on all three tests? If so, that is quite interesting given all the disparaging remarks others have had for the ML in the same Harman tests.
It is not the "same" Harman test. The ones I took did not have the Infinity speakers but rather JBL speakers.

Are you interested in the science that comes out of this test or only which speaker brand/model won and which one lost?
 
It is not the "same" Harman test. The ones I took did not have the Infinity speakers but rather JBL speakers.

When I said the “same” test, I merely meant it was the Harman run/controlled test of speakers that included the ML speakers. Not that the other 3 speakers outside of the ML speakers were exactly the same. What I found interesting was that someone actually picked the ML speakers to be the best sounding of the four speakers in three different runs of the test. This surprised me after what has been said about how poorly the ML speakers sounded in the other Harman tests that have been talked about on this forum.

What also surprised me Amir was that you said prior to the Harman tests, you liked the sound of ML speakers (and if that is not exactly what you said or meant, please correct me). After the Harman tests you were involved in, your opinion of the quality of the ML speakers dropped like a rock. For me, I think that would make me question how I could think highly of them prior to Harman’s test and then come away from the test and wonder how I could have possibly been fooled before into thinking I actually liked them and thought they sounded good.

Are you interested in the science that comes out of this test or only which speaker brand/model won and which one lost?

Both. I’m having a hard time separating the marketing from the science. I just saw a commercial for the first time last night from Harman. They are using Jennifer Lopez in their commercial with a look of ecstasy on her face as she listens to a Harman HT system with Harman speakers. Prior to the post I commented on, I haven’t seen one person who declared the ML speakers to sound better than the other speakers under test.

And yes, I’m highly interested in knowing how many times listeners prefer the sound of the competition over the Harman speakers. A while back I made a comment about how the Harman trained listeners were given the best seats and the unwashed masses had to sit wherever they could. You said that wasn’t the case when you were there, but numerous people have stated what I previously wrote, including one of the papers that Sean wrote. And that is that Harman trained listeners sit on axis and the rest of the test population sits off axis.

I’m not vested in the outcome of any of Harman’s tests because I own neither Harman speakers nor B&W or ML speakers. What doesn’t surprise me is that if someone designs speakers to have a particular response both on and off axis, and people are trained to identify speakers that have the response of the house brand speakers, the trained listeners would home in on that and say they sounded the “best.” It shouldn’t be shocking to anyone that if you control the testing and you train the listeners what to listen for, the trained listeners will pick out the speakers with the attributes (or non-attributes) that you trained them to listen for.

I was/am also interested to know how many of our forum members actually own Harman speakers because if they are truly better than the competition, surely lots of people would own them. That doesn’t seem to be the case on this forum at least. I was and am skeptical that a pair of cheap Infinity speakers with less than $50 in parts per speaker (including enclosure) is going to make a pair of ML speakers sound fatally flawed. It doesn’t smell right to me. And I’m not the only one obviously.

Back to the basic point here: If the science proves that Harman is building the best sounding speakers, how come they aren’t ruling the marketplace? Surely they have the resources to pull off marketing campaigns that the other major audiophile brands could only dream of.
 
When I said the “same” test, I merely meant it was the Harman run/controlled test of speakers that included the ML speakers. Not that the other 3 speakers outside of the ML speakers were exactly the same. What I found interesting was that someone actually picked the ML speakers to be the best sounding of the four speakers in three different runs of the test. This surprised me after what has been said about how poorly the ML speakers sounded in the other Harman tests that have been talked about on this forum.
I noted in the other thread (I think) that people do pick low scoring speakers at times and no one is surprised or offended during the Harman testing. I suspect the same thing has happened here. That is why I keep saying this data is directionally very useful. We are talking about a compass telling us north, instead of a mapping GPS system telling us exactly which turns to take :).

What also surprised me Amir was that you said prior to the Harman tests, you liked the sound of ML speakers (and if that is not exactly what you said or meant, please correct me). After the Harman tests you were involved in, your opinion of the quality of the ML speakers dropped like a rock. For me, I think that would make me question how I could think highly of them prior to Harman’s test and then come away from the test and wonder how I could have possibly been fooled before into thinking I actually liked them and thought they sounded good.
I explained earlier that I had never heard them side by side. One's feedback does change when you can switch back and forth while playing the same song and observing the difference. A ton of learning happens this way that is not possible when we hear the speakers at separate times.

Both. I’m having a hard time separating the marketing from the science. I just saw a commercial for the first time last night from Harman. They are using Jennifer Lopez in their commercial with a look of ecstasy on her face as she listens to a Harman HT system with Harman speakers. Prior to the post I commented on, I haven’t seen one person who declared the ML speakers to sound better than the other speakers under test.
I have said it generically in the other thread:
If you are asking if 100% of the people agreed with one speaker being the best, no. As the data above shows, there is high correlation but not absolute conclusions. Whether that is due to people being poor judges of quality at times, or some other factors in play, it is hard to say. What is not hard to say is that those factors do not in any way trump the research results presented. If you deviate from them, you better have darn good reason and research to back your counter approach. A glossy brochure and impressive looking speakers don't do it.

And yes, I’m highly interested in knowing how many times listeners prefer the sound of the competition over the Harman speakers.
I wish Sean were here to give you the exact answer. But having just looked at the graphs in the Journal of the AES paper, I see no instance of even the best score from ML reaching up that high. In one group, they barely got above the next worst-scoring speaker but never reached the heights of the higher-scoring ones. And in all other groups, the score was consistently below all other speakers tested.

Oh, I just found a version of that graph in Sean's blog:

[Image: TrainedvsUntrained.jpg - speaker preference scores for trained vs. untrained listener groups]


If you look at the third group on the x-axis and look up vertically, you can see the M scores slightly encroaching on speaker B but otherwise, both the average and min/max are below others. At least that is my read of it :).

A while back I made a comment about how the Harman trained listeners were given the best seats and the unwashed masses had to sit wherever they could. You said that wasn’t the case when you were there, but numerous people have stated what I previously wrote, including one of the papers that Sean wrote. And that is that Harman trained listeners sit on axis and the rest of the test population sits off axis.
I think Sean runs one of these tests every week :). So there are many instances of these tests. I have seen pictures of them having a single seat in the listening room, on-axis. The ones I attended were an informal group of people and there were a dozen-plus seats. I tried to sit on axis but clearly others did not.

I’m not vested in the outcome of any of Harman’s tests because I own neither Harman speakers nor B&W or ML speakers. What doesn’t surprise me is that if someone designs speakers to have a particular response both on and off axis, and people are trained to identify speakers that have the response of the house brand speakers, the trained listeners would home in on that and say they sounded the “best.” It shouldn’t be shocking to anyone that if you control the testing and you train the listeners what to listen for, the trained listeners will pick out the speakers with the attributes (or non-attributes) that you trained them to listen for.
You are forgetting that when Sean and Floyd came, there were a lot of Harman people who loved their speaker designs, yet when testing blind, they gave poor scores to those very designs! The good thing about Sean is that he leaves no assumption untested. Here is their analysis of whether their expert testers are biased toward their speakers: http://seanolive.blogspot.com/2008/12/loudspeaker-preferences-of-trained.html

The graph above came from the same test. That gray bar represents the expert testers. You see that other than being far more critical, their order of preferences for speakers matched that of many other groups which included audio reviewers and such.

I was/am also interested to know how many of our forum members actually own Harman speakers because if they are truly better than the competition, surely lots of people would own them.
I own them of course, and bought them years before being in the business. Likely, if you are driving a mid-level to luxury car, you also have Harman speakers in it, and that business was earned using this type of methodology.

That doesn’t seem to be the case on this forum at least. I was and am skeptical that a pair of cheap Infinity speakers with less than $50 in parts per speaker (including enclosure) is going to make a pair of ML speakers sound fatally flawed. It doesn’t smell right to me. And I’m not the only one obviously.
Obviously :). At some level you either believe the data or your gut.

Back to the basic point here: If the science proves that Harman is building the best sounding speakers, how come they aren’t ruling the marketplace? Surely they have the resources to pull off marketing campaigns that the other major audiophile brands could only dream of.
That is not the science. The science is not telling you Harman makes the best speakers. It tells you that there is a high correlation between speaker preference and smooth on- and off-axis response. What people go and buy, and what is "best" in their view, are not necessarily the same things.
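
As a toy illustration of what "high correlation" means here, one could reduce each speaker's measured response to a crude roughness number and correlate it with blind preference scores. The numbers below are invented and the metric is my own gross simplification (the actual research uses full on- and off-axis curve families), so treat this purely as a sketch of the idea:

```python
# Toy sketch: correlate a crude frequency-response roughness metric
# (std-dev of deviation from flat, in dB; lower = smoother) with mean
# blind preference ratings. All numbers are invented for illustration.
import numpy as np
from scipy.stats import pearsonr

roughness_db = np.array([1.2, 1.8, 2.5, 3.9, 4.6])  # five hypothetical speakers
preference = np.array([7.8, 7.1, 6.0, 4.9, 4.2])    # mean 0-10 blind ratings

r, p = pearsonr(roughness_db, preference)
print(f"Pearson r = {r:.2f}, p = {p:.3f}")  # strongly negative: rougher
                                            # response, lower preference
```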

I will tell you this: if I ran 100% of the members here through that test, 80% of them would change their minds about how they evaluate speakers!
 
Back to the basic point here: If the science proves that Harman is building the best sounding speakers, how come they aren’t ruling the marketplace? Surely they have the resources to pull off marketing campaigns that the other major audiophile brands could only dream of.

Come on now, do you really think that if you build products that are technically superior to their competitors, they will naturally rule the marketplace? Don't you think that marketing, advertising, industrial design, distribution, and other factors unrelated to sound quality matter?

What is the best-selling consumer loudspeaker (hint: it's not a Harman brand but a brand that used to spend 20x more money on advertising), hamburger, wine, automobile, or music artist? Does top market share imply these products have the best technical performance in their category? I dare say that probably neither you nor I own products that are leaders in terms of market share, because the top-selling products are designed for the largest market segment/demographic, of which we are not members.

And for the record, I have never said that the "science proves that Harman is building the best loudspeakers." Those are your words (or someone else's ) - not mine. The purpose of these listening tests has been two-fold: a) to test whether a new Harman product is competitive against its targeted competitors (not all competitors) and b) to study the relationship between listener preference and a set of objective measurements so that a set of engineering design guidelines/specifications can be developed to optimize the sound quality of a loudspeaker at any given price point.

We've also conducted specific tests to study the relationship between the preference and performance of trained and untrained listeners looking at different ages, levels of training, and more recently culture. This is done routinely to ensure that we are not designing loudspeakers that only satisfy the tastes of our trained listening panel. So far, the evidence indicates the MAJORITY of trained and untrained listeners prefer the most accurate, neutral loudspeakers. That doesn't preclude someone from liking a loudspeaker that isn't the most accurate or neutral.
 
