Geoffrey Morrison, an audio writer at CNET and Sound & Vision, has posted a nice summary of my recent AES paper "Some New Evidence that Teenagers and College Students May Prefer Accurate Sound Reproduction," presented at the 132nd AES Convention in Budapest, Hungary.
The paper should be available shortly for download at the AES E-library, but in the meantime, I have provided a YouTube video and a PDF of my presentation slides that summarize the main points of the research.
The abstract of the paper reads as follows:
A group of 58 high school and college students with different expertise in sound evaluation participated in two separate controlled listening tests that measured their preference choices between music reproduced in (1) MP3 (128 kbps) and lossless CD-quality file formats, and (2) music reproduced through four different consumer loudspeakers. As a group, the students preferred the CD-quality reproduction in 70% of the trials and preferred music reproduced through the most accurate, neutral loudspeaker. Critical listening experience was a significant factor in the listeners’ performance and preferences. Together, these tests provide some new evidence that both teenagers and college students can discern and appreciate a better quality of reproduced sound when given the opportunity to directly compare it against lower quality options.
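For readers who want to gauge how unlikely a 70% preference rate would be under pure 50/50 guessing, here is a minimal sketch of an exact binomial test. The trial and success counts in the code are hypothetical, chosen only for illustration; they are not the actual counts from the paper.

```python
# Minimal sketch (Python, standard library only): exact two-sided binomial test of
# whether a 70% preference rate could plausibly arise from 50/50 guessing.
# The trial and success counts below are hypothetical, not figures from the paper.
from math import comb

def binomial_p_two_sided(successes, trials, p_chance=0.5):
    """Exact two-sided binomial p-value against a chance rate of p_chance."""
    def pmf(k):
        return comb(trials, k) * p_chance**k * (1 - p_chance)**(trials - k)
    observed = pmf(successes)
    # Sum the probabilities of all outcomes at least as unlikely as the observed one.
    return sum(pmf(k) for k in range(trials + 1) if pmf(k) <= observed + 1e-12)

trials, successes = 200, 140   # hypothetical: 70% of 200 trials favour the lossless version
print(f"p = {binomial_p_two_sided(successes, trials):.2e}")   # far below 0.05
```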
The effects of culture and of trained versus untrained listeners on loudspeaker preference are topics that have been discussed in previous postings on Audio Musings. To shed further light on this topic, I also ran 149 native Japanese-speaking college students through the same loudspeaker preference test, along with 12 Harman-trained listeners. The graph below shows the mean loudspeaker preference ratings for these two groups of listeners along with the four different groups of high school and college students from Los Angeles.
Not surprisingly (at least to me), I found that the Japanese college students on average preferred the same accurate loudspeaker (A) as did the 58 Los Angeles students and the trained Harman listening panel. The main differences among the listening groups were related to the effect of prior critical listening experience: the more trained listeners simply rated the loudspeakers lower on the preference scale, and were more discriminating and consistent in their responses. This result is consistent with previous studies. The least preferred and least accurate loudspeaker (Loudspeaker D) generated the most variance in ratings among the different listening groups. This was explained by its highly directional behavior combined with its inconsistent frequency response as you move from on-axis to off-axis seating positions. This meant that listeners sitting off-axis heard a much different (and apparently better quality) sound than listeners sitting on-axis.
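To make those two effects concrete, here is a minimal sketch of how one might summarize discrimination (how far apart a group's mean ratings for the speakers sit) and consistency (how tightly the ratings cluster). The ratings in the code are invented purely for illustration and are not the study's data.

```python
# Minimal sketch (Python, standard library): summarising discrimination and consistency
# per listener group, using made-up ratings rather than the study's data.
# Each group rates the four loudspeakers (A-D) on a 0-10 preference scale.
from statistics import mean, stdev

ratings = {  # hypothetical ratings per group, one list per loudspeaker A..D
    "Trained (Harman)": {"A": [7.1, 6.8], "B": [5.0, 5.3], "C": [4.2, 4.0], "D": [2.1, 2.5]},
    "College students": {"A": [7.9, 8.2], "B": [7.0, 7.4], "C": [6.5, 6.9], "D": [5.8, 6.6]},
}

for group, by_speaker in ratings.items():
    means = {spk: mean(vals) for spk, vals in by_speaker.items()}
    spread = max(means.values()) - min(means.values())          # how well the group discriminates
    noise = mean(stdev(vals) for vals in by_speaker.values())   # within-speaker consistency
    print(f"{group}: means={means}, discrimination={spread:.1f}, rating SD={noise:.1f}")
```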
While the small sample size of listeners in this test does not allow us to generalize to larger populations, it is reassuring to find that both American and Japanese students, regardless of their critical listening experience, recognized good sound when they heard it and preferred it to the lower-quality options.
It would appear that the reason kids don't purchase better-sounding audio solutions has nothing to do with their so-called "deviant" tastes in sound quality, but more to do with factors (e.g. price, convenience, portability, marketing, fashion) unrelated to sound quality. Music and audio companies should take notice that kids can indeed discriminate between good and bad sound, and prefer the more accurate version, despite what the media have been falsely reporting for the last few years. With that out of the way, we should focus on figuring out how to sell sound quality to kids at affordable prices and in formats they want to own.
The research suggests that if we cannot figure out how to sell better sound to kids, we have no one to blame but ourselves.
Sean, why no SE or SD of the data points? Especially given such small sample sizes?
That was the first thing I thought when I saw the data points as well. The average rating means nothing if the SD is very wide or the differences are not statistically significant.
If the speaker test was conducted with some listeners seated off-axis, it wasn't a fair comparison. The Martin Logan in particular is a very directional speaker, and people who buy this speaker listen mainly on-axis. For fairness, this test should have been done only with listeners seated on-axis. I'm a bit disappointed by this, and it seems more like marketing for Harman products than research.
I'm not a big fan of Martin Logan hybrid speakers, which in my experience lack macrodynamics and well-integrated bass, but I believe the controlled and limited dispersion is an advantage in most living rooms. Except for the backwave from the dipoles, they will have fewer high-gain early reflections. In a normal living room, and with listeners seated only on-axis, we could have seen a very different outcome.
That is true: speaker position was kept constant. For the Harman listeners, the seat was also kept constant. For the groups of students, the listening seats were distributed around the speaker under test.

I believe Sean indicated loudspeaker position was kept constant, i.e. a controlled variable.
It's reasonable to wonder, scientifically, if said position influenced qualitative results.
I'm unacquainted with the details of this presentation... was a hypothesis advanced?
Were the speakers stationary? If that is the case, why does off-axis response figure in the supporting data at all? If the speaker turntable was actually being moved during the listening (no way for the listener to know), please disregard the next question.
What happened to the output from the rear of the panel? If the Vista was situated in a position with less boundary support than it would have in a typical domestic installation, would it be fair to ask whether it was being driven harder than usual to attain the 79 dB B-weighted levels of the test, which might point to something other than off-axis response as the culprit?
Finally, we see two groups where the MLs did not finish last: one was the high school student group, the other the university students from Japan. How was it determined that the Japanese university group had less experience and training than the other groups, which were also university students?
For the record I am not trying to debunk anything, I'm just really curious.
Sean, why no SE or SD of the data points? Especially given such small sample sizes?
I've left them out for the sake of visual clarity given how many data sets are plotted on one graph.
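For anyone who wants to judge the spread behind a plotted mean, here is a minimal sketch of how the SD, SE, and an approximate 95% confidence interval would be computed for one loudspeaker's ratings. The ratings below are made up for illustration and are not data from these tests.

```python
# Minimal sketch (Python, standard library): mean, SD, SE and an approximate
# 95% confidence interval for one loudspeaker's preference ratings.
# The ratings below are invented for illustration, not data from the study.
from statistics import mean, stdev
from math import sqrt

ratings = [6.5, 7.0, 5.5, 8.0, 6.0, 7.5, 6.5, 7.0]   # hypothetical ratings for one speaker

n = len(ratings)
m = mean(ratings)
sd = stdev(ratings)            # sample standard deviation
se = sd / sqrt(n)              # standard error of the mean
ci_half_width = 2.36 * se      # t critical value for n-1 = 7 df, ~95% coverage

print(f"mean = {m:.2f}, SD = {sd:.2f}, SE = {se:.2f}, "
      f"95% CI = [{m - ci_half_width:.2f}, {m + ci_half_width:.2f}]")
```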
This is nonsense. Are you saying if speakers sound bad 30 degrees off-axis it's not fair to listen at that spot? What about listeners not sitting in the sweet spot? And to be fair, the speakers were tested with listeners sitting both on and off-axis.
Also, in the case of the ML speaker, it actually was rated lower when listeners sat on-axis versus off-axis, because its spectral balance is actually better off-axis than on-axis. If you look at the anechoic measurements, the first two curves from top to bottom represent the sound received on-axis and slightly off-axis (we call it the listening window). Both curves indicate an elevated mid-treble relative to the bass, making it sound very bright, harsh and thin. As you move off-axis, the third curve from the top (first reflections) shows a more balanced, albeit slightly "dull," frequency response. For listeners sitting off-axis, the speaker is more balanced. The reason the Harman listeners rated it so low compared to the other groups is that the Harman listeners were all sitting on-axis, whereas the other listening groups were distributed in seats both on-axis and slightly off-axis.
So, if anything, we were doing the speaker a favor by including listeners sitting off-axis. If we included only on-axis listening results it would have been rated even lower.
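As a rough illustration of what "bright on-axis, more balanced off-axis" means in numbers, here is a minimal sketch comparing a simple treble-minus-bass tilt for two response curves. The values only mimic the general shape described above; they are not the actual anechoic measurements.

```python
# Minimal sketch (Python, standard library): a crude "spectral tilt" metric contrasting
# an on-axis curve with elevated mid-treble against a gently falling off-axis curve.
# All response values below are invented to mimic the described shapes.
from statistics import mean

freqs_hz    = [100, 200, 400, 800, 1600, 3200, 6400, 12800]
on_axis_db  = [82, 83, 84, 86, 88, 89, 90, 89]   # hypothetical: mid-treble lifted vs bass
off_axis_db = [82, 83, 83, 84, 84, 83, 82, 80]   # hypothetical: gently falling, slightly "dull"

def tilt(levels_db):
    """Mid-treble (1.6-12.8 kHz) average minus bass (100-400 Hz) average, in dB."""
    bass   = mean(levels_db[:3])
    treble = mean(levels_db[4:])
    return treble - bass

print(f"on-axis tilt:  {tilt(on_axis_db):+.1f} dB (bright/thin)")
print(f"off-axis tilt: {tilt(off_axis_db):+.1f} dB (closer to neutral)")
```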
The speaker position was the same for all speakers tested. Off-axis response is a factor via the reflected sounds received by the listener, and, for listeners sitting off-axis, via both the direct and reflected sounds.
The speakers were adjusted for equal loudness at the listening position.
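For those curious what the loudness adjustment amounts to in practice, here is a minimal sketch of computing per-speaker gain offsets toward a common target level at the listening position. The 79 dB figure echoes the B-weighted level mentioned above; the measured levels are invented for illustration, and the weighting filter itself is not modeled.

```python
# Minimal sketch (Python): gain offsets that would bring each loudspeaker to a common
# playback level at the listening seat. Measured levels below are hypothetical.
TARGET_DB = 79.0   # common target level, echoing the B-weighted figure mentioned above

measured_db = {    # hypothetical SPL of each speaker at the listening seat, same test signal
    "A": 81.3,
    "B": 78.1,
    "C": 79.9,
    "D": 76.4,
}

gain_offsets_db = {spk: round(TARGET_DB - spl, 1) for spk, spl in measured_db.items()}
print(gain_offsets_db)   # {'A': -2.3, 'B': 0.9, 'C': -0.9, 'D': 2.6}
```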
The Japanese students were tested as a group, and a significant portion of them sat off-axis, where the ML's performance is actually better than it is on-axis (based on measurements and listening test results). The UCI and LMU students were studying recording arts at the undergraduate and master's level, respectively. Thus they were deemed to have more experience in critical evaluation than the other student groups (high school, Cal Arts and Kenshu College).
Thank you, Sean, the added information was useful. Just a couple more questions, if you please, but not all at once.
Why weren't the Harman listeners distributed the same way as the groups with similar n's, and if you have done that, what were the results?
Which, interestingly, is NOT how ML speakers sound at all, unless they are set up badly (as for any speaker), not allowed to charge for 24 hours before listening (or are brand new), or are playing bad digital recordings. Then the speaker is only as good as the source (as should be the case for any good speaker). I'd be glad to have you over someday, Sean, and demonstrate that fact.
And Bjorn is right. People who buy a ML know it has a one-person listening area. And according to the owner's manual, spectral balance is attained in most situations when the speaker's inner third is pointed toward the listening position.
To be honest, the Harman listening results are from an earlier benchmarking test and were not originally intended as part of these Generation Y listening tests. But I added the results here to show that there is a correlation between the preferences of trained and untrained listeners. The agreement confirms an earlier study where we tested trained vs. untrained listeners using a different set of speakers (you can download the previous paper for free here: http://www.aes.org/tmpFiles/elib/20120514/12206.pdf ).
Ideally, I would re-run the Harman listeners through the same test but seated in different positions. Alternatively, in the analysis I could select only the students sitting in the primary seat and compare their results with those from the Harman listeners.
Do you have trouble dealing with this DVD player too? http://www.amazon.com/Sony-DVPSR200...f=sr_1_14?s=tv&ie=UTF8&qid=1337047216&sr=1-14
Wish price could explain things this way, Mark. In that regard, we would simply sort all the speakers by price and buy without ever listening to them. But we don't do that. What we do is listen to them and decide if they are good. Well, folks did listen, including myself. And the ML simply sounds bad compared to the others. And I am not just talking about Harman speakers. I compared it to the B&W blind and scored it down in that comparison too. At some level you have to accept the data.
I heard them on-axis although there were others who heard it off-axis. It fared poorly for all in the two tests that I participated in.
Here is what I’m having a hard time dealing with. The Infinity speaker retailed for $500 per pair. That means the dealers paid $250 per pair. That means Infinity (Harman) made a profit on the $250. So let’s take a look at the number of drivers, the crossover, the cabinet, and the shipping box, packing material and labor.
There are four drivers in each cabinet for a total of eight drivers. That means if the entire $250 budget only went to the drivers, the drivers would average $31.25 each. But we don’t have $250 to buy eight drivers because profit came out of the $250 and we haven’t paid for the cabinets, crossovers, packaging, and labor. For this kind of money, I don’t know how you buy quality drivers, quality crossover components, quality cabinets, and packaging materials. And yet somehow we are led to believe the ML design is so flawed that a cheap pair of speakers that has way less than $100 in parts per speaker including the cabinet sounds better. It defies logic to put it simply.
I am a former owner of a pair of ML Aerius speakers. Even though they were limited in bass output, I thought these speakers sounded really good. I enjoyed them for years and had many happy hours of listening to them. I can’t imagine a pair of speakers with less than $100 per speaker in parts (and we never discussed the labor involved to assemble the speakers which has to be done in China at a pay scale that wouldn’t meet minimum wage requirements in the U.S.) is going to sound better than the Aerius speakers let alone a $3800 pair of ML speakers. And finally, I have never heard a pair of ML speakers that sounded better off-axis than on-axis.
I honestly don't know the breakdown cost of the Infinity speaker, but I could certainly find out. The main issue here is that you have trouble accepting that there is not a linear relationship between the cost of a speaker and its sound quality. To me, that sounds like an expectation bias.
I understand your point and I agree that well-engineered speakers should sound better than poorly-engineered speakers. As one ascends the retail price scale, I think it would be easier to find examples of well-engineered speakers that sound better than some other speakers on the market that sell for more money. However, when you are talking about a cheap pair of speakers that can’t have more than $50 in parts per speaker including the cabinet, that isn’t a lot of money to work with. What quality of parts can you really purchase when your budget is limited to $50 per speaker including the cabinet?

I've been testing speakers for 26 years and have seen many examples of a well-engineered loudspeaker beating a poorly-engineered speaker costing 10x or more its price. What quality controls are in place to stop companies from building under-performing loudspeakers and charging a lot of money for it? Zero.
There are no federal audio agencies that require loudspeakers to pass some basic, meaningful sound quality standard, or to be submitted to clinical listening trials to show that they have no adverse effects on your enjoyment. The current industry loudspeaker specifications are entirely useless in terms of indicating how the speakers sound, and the audio review process is sighted, biased and largely ineffective. Consumers today cannot reliably find a store where they can do an A/B demonstration of the product they're interested in purchasing.
So companies are free to design and manufacture speakers and charge whatever they want, because there are virtually no meaningful specifications or controls in place that indicate how well the loudspeaker performs and sounds.