Seriously, Amir, you're just stressing yourself. You're arguing climate science with corporatists. They will not waver in the face of data, no matter how compelling.
Tim
In an anechoic chamber? You do realize there is no consensus on the ways to properly measure dipoles, even down to their efficiency?

BTW, what does "preference" mean? Tonality, frequency response, imaging, dynamics, etc.? The same to everyone?

That is the good part: no one dictates that. You sit there and a loudspeaker plays. You wonder how well it is playing. At first, you really have no idea. After all, it is not as if you hear the live recording first and then the same thing through the loudspeaker. Audio is broken that way.
Then you play the next loudspeaker. Oh wow, it sounded different. Which is real? You go back to the first one. Ah, one is more real. The other one, well, doesn't sound right. The voices, hmmm, sound too flat in one.
Next comes the third loudspeaker. Now you have yet another dimension. You are asked to quantify the difference. You need to give a score. Here is my score sheet in progress (please pardon the shaky image, the room was dark):
I rated loudspeaker A low because it just sounded bad to me. It had a flat, "phasey" sound, for lack of a better word. The others sounded so much warmer, and fuller, and, well, real to me.
As I have told the story, the curtain opens, we all compare notes, and the majority of us voted the same way. So while not "everyone," most of us gave the same relative ratings. I did not ask, however, what the other people heard. I just know their rankings were similar to mine.
How is this possible? If we differed in how we perceive fidelity, our scores should not have correlated at all, let alone to this high a level. The answer is in Dr. Toole's book, and it is my favorite part of it all:
"Fortunately, it turns out that when given the opportunity to judge without bias, human listeners are excellent detectors of artifacts and distortions; they are remarkably trustworthy guardians of what is good. Having only a vague concept of what might be correct, listeners recognize what is wrong. An absence of problems becomes a measure of excellence. By the end of this book, we will see that technical excellence turns out to be a high correlate of both perceived accuracy and emotional gratification, and most of us can recognize it when we hear it."
I must confess that I would have been incredulous of this being true had I not taken the test. But I tell you, every word is correct. I could identify what I thought was real because of the absence of what was wrong in one loudspeaker versus another.
It seems we have this incredible cognitive ability to cut through the vast differences between the sounds of loudspeakers and find common distortions that most of us dislike similarly. I can't explain it. I can't rationalize it too well. But ultimately, I have to trust the cross section of my subjective assessment and the incredible body of research that says it is all true. My intuition be damned. Hopefully you would have concluded the same thing had you been in my shoes.
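Setting the psychology aside, the degree of agreement between listeners is easy to quantify. Here is a minimal sketch using a Spearman rank correlation; all scores are invented for illustration, not the actual blind-test data:

```python
# The scores below are invented for illustration; they are not the
# actual blind-test data.

def rank(scores):
    """Rank of each entry, 1 = highest score (assumes no ties)."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    ranks = [0] * len(scores)
    for r, i in enumerate(order, start=1):
        ranks[i] = r
    return ranks

def spearman(a, b):
    """Spearman rank correlation of two equal-length score lists."""
    ra, rb = rank(a), rank(b)
    n = len(a)
    d2 = sum((x - y) ** 2 for x, y in zip(ra, rb))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

listener_1 = [3.0, 7.5, 6.0, 8.0]  # scores for loudspeakers A-D
listener_2 = [2.5, 6.5, 7.0, 8.5]  # a second listener, same speakers

print(spearman(listener_1, listener_2))  # near +1 = strong agreement
```

A value near +1 means the two listeners ranked the loudspeakers in essentially the same order even if their absolute scores differed, which is the kind of agreement described above.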
So let us ignore what Amir has noted before, and go round the circle again? (the Bill Clinton tactic)
I don't think that the measurements posted were in an anechoic room.
No, let us accept everything like Moonies. Must be the difference between true scientists and engineers.
Harman ones absolutely are. Here is the text from the AES paper that graph came from: Some New Evidence That Teenagers May Prefer Accurate Sound Reproduction
Sean E. Olive, AES Fellow
"3.3.2. Correlations between Preference Ratings and Acoustical Measurements

Fig. 7 shows the comprehensive anechoic measurements of Loudspeakers A through D, which have been shown to correlate well with listeners’ preference ratings in controlled listening tests [19]-[23]. Each loudspeaker was measured at a 2 m distance. The original measurement had 2 Hz frequency resolution that was post-smoothed to 1/20th octave resolution [23]. Measurements were taken at 10-degree increments over both horizontal and vertical orbits of the loudspeaker, and then spatially-averaged to allow characterization of the direct, early and late reflected sound in a typical listening room. The curves in each graph represent from top to bottom: the on-axis response, the listening window, the first reflections, the sound power response, and the directivity indices of the first reflections and sound power."
In other words, he is describing the "spin data" I explained earlier.
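The spatial averaging step Dr. Olive describes can be sketched in a few lines. This is only an illustration of the averaging idea: the angles, the equal weighting, and the toy data are simplifying assumptions, whereas the real "spin" processing uses specific weighted subsets of the measured angles for each summary curve.

```python
import math

# Simplified sketch of the "spin data" idea: off-axis dB-magnitude
# responses at 10-degree increments are averaged in the power domain
# into a single summary curve. Real spinorama processing uses specific
# weighted angle subsets for each curve; this shows only the averaging.

def power_average(curves_db):
    """Average several dB-magnitude curves in the power domain."""
    n_points = len(curves_db[0])
    out = []
    for k in range(n_points):
        mean_power = sum(10 ** (c[k] / 10) for c in curves_db) / len(curves_db)
        out.append(10 * math.log10(mean_power))
    return out

# Fake data: one full orbit at 10-degree increments (36 angles),
# each response reduced to just 3 frequency points for brevity.
curves = [[-angle / 100.0, -1.0, -2.0] for angle in range(0, 360, 10)]
sound_power_like = power_average(curves)
print([round(x, 2) for x in sound_power_like])
```

Averaging in the power domain (rather than averaging the dB values directly) is what lets a single curve represent the total acoustic energy radiated across all the measured directions.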
JA tests them in his backyard, I think, to remove the effect of everything but the floor.
Dr. Olive goes on to say:
"There are some clear visual correlations between the subjective preference ratings of the loudspeakers, and shape and smoothness of their measured curves. The most preferred loudspeaker (Loudspeaker A) has the flattest and smoothest on-axis and listening window curves, which is well maintained in its off-axis curves. In contrast to this, the less preferred Loudspeakers B, C and D all show various degrees of misbehavior in their magnitude response both on and off-axis. Loudspeaker B has a “boom-and-tizz” character from the overemphasis in the low and high frequency ranges, combined with an uneven midrange response. Loudspeaker C has a similar mismatch in level between the bass and midrange/treble, in addition to a series of resonances above 300 Hz that appear in all of the spatially averaged curves. Loudspeaker D has a relatively smooth response across all of its curves but there is a large mismatch in level at 400 Hz between the bass and the midrange/treble regions.

Together these irregularities in the on and off-axis curves are indicative of sound quality problems that were reflected in the lower preference ratings given to Loudspeakers B, C and D."
Here lies the ultimate problem with your approach. Full sets of measurements are facts. It's the subjective, poetic nonsense that is fiction.
Tim
They are only facts if the measurements were done correctly by trained professionals.
According to the ABX DBT crowd and Harman's own speaker testing, observations also count as "facts" if done in a controlled environment and repeated with the same results.

Actually, they're only facts if they were done by trained professionals in a controlled environment and repeated with the same results. And they were.
Sorry, but it is not easy to take valid and reliable information from these graphs. It is much more than just Ohm's law! You have to consider the speaker efficiency, which is given in dB/W (a logarithmic unit), its dispersion, the voltage and current limits of the amplifier, your room gain, and your musical and listening preferences.
Amateurs will probably oversimplify and make erroneous assumptions.
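As a rough illustration of how those quantities interact, here is a back-of-envelope SPL estimate. The formula assumes a point source under inverse-square spreading; the `room_gain_db` term is a crude fudge factor, and every number below is invented for the example:

```python
import math

def spl_at_listener(sensitivity_db_1w_1m, watts, distance_m, room_gain_db=3.0):
    """Rough SPL at the listening seat: free-field plus a room-gain fudge."""
    gain_from_power = 10 * math.log10(watts)          # dB above 1 W
    loss_from_distance = 20 * math.log10(distance_m)  # inverse-square law
    return sensitivity_db_1w_1m + gain_from_power - loss_from_distance + room_gain_db

# A hypothetical 87 dB/W/m speaker driven with 50 W, heard from 3 m:
print(round(spl_at_listener(87, 50, 3), 1))
```

Even this toy version shows why amateurs oversimplify: it already mixes three logarithmic terms, and it still ignores dispersion, impedance behavior, and amplifier voltage/current limits mentioned above.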
That is precisely the problem. In the last magazine I worked on, we had to actively deflate the whole measured performance section, because our published measurements were being used (more accurately, abused) like 'Top Trumps' cards. Before we came to this decision, we regularly devoted pages (every month) explaining why the measurements existed, what they were there for, how to read them, what they related to, and so on.
We came to this decision because feedback from reader surveys, focus groups, interpreting questions from readers, and reports from manufacturers and dealers all pointed to a readership singularly and repeatedly failing to correctly interpret what we published. No matter how you tried to explain and educate the readers, the technologically advanced readers would lock all of this down to three basic figures: power handling, impedance, and frequency response. A few really bright sparks would replace 'power handling' with 'sensitivity' and one or two (who knew their way around the topic) would ask whether we should be measuring 'sensitivity' or 'efficiency'. Most, however, would boil this down to just power handling - 'how many watts in those speakers?'
The problem you will face here is that someone will buy a loudspeaker with a power handling rating of 200W (but with an amp-crushing sub-two-ohm impedance dip in the upper bass) because it is 'better' than one with a power handling figure of 100W (despite that loudspeaker having a completely benign impedance plot). They then wonder why their 50W amplifier starts behaving like a toaster. If you warn the reader about this in the measured performance section, you often 'bury' that warning.
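A quick bit of Ohm's-law arithmetic (with made-up numbers) shows why the impedance dip matters more than the watts figure: at the same output voltage, halving the load impedance doubles the current and the power the amplifier must deliver.

```python
def demand_at(voltage_rms, impedance_ohms):
    """Current and power a load draws at a given output voltage (Ohm's law)."""
    current = voltage_rms / impedance_ohms
    power = voltage_rms ** 2 / impedance_ohms
    return current, power

v = 20.0  # 20 Vrms is 50 W into a nominal 8-ohm load
for z in (8.0, 4.0, 1.8):
    i, p = demand_at(v, z)
    print(f"{z:4.1f} ohm: {i:4.1f} A, {p:6.1f} W")
```

At the 1.8-ohm dip, the "50 W" signal asks the amplifier for well over 200 W and more than four times the current, which is exactly how a 50 W amplifier ends up behaving like a toaster.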
Yes, that means pandering to a lowest common denominator, and it's ultimately an education issue, but education requires a willingness to learn the subject matter, and going that deep into audio is not what everyone signed up for. They just want something that sounds good.
Forums create strong opinions, but I think the handful of people who think an audio magazine should be the JAES with more pictures need a fairly serious reality check.
Amir wants to bring everyone in this hobby up to a very high level in terms of technical knowledge which isn't going to happen. Further, Amir wants all reviewers to acquire test equipment and take measurements in addition to writing their reviews. This has become the theater of the absurd.
and it's ultimately an education issue, but education requires a willingness to learn the subject matter, and going that deep into audio is not what everyone signed up for. They just want something that sounds good.
A noble aspiration. Stereophile seems to manage both a subjective and an objective review.
Keith.
Stereophile's reviewers aren't taking measurements, and the reviews don't mention the measurements that JA takes, so you are mixing apples and hubcaps.
Alan, as expected from you, another thoughtful and insightful post that succinctly said in a few paragraphs what I have been trying to express in many. Amir wants to bring everyone in this hobby up to a very high level in terms of technical knowledge, which isn't going to happen.

Back to the defeatist attitude. I must say, a remarkable thing has happened in just a couple of days in this forum that counters that. Members are learning the topic. I am pretty sure you have learned something too, no? Most everyone now knows what the research says regarding the importance of frequency response, the way to test loudspeakers in a controlled manner, etc. They are not all accepting of the conclusions, but the education has happened. Just like it did for me the moment I experienced it and believed.
Further, Amir wants all reviewers to acquire test equipment and take measurements in addition to writing their reviews. This has become the theater of the absurd.

The only thing absurd is to claim that as a reviewer you need to remain a total non-expert, with no abilities, tools, or knowledge above the readers of said review. You are trying to agree with Alan, but he just got done saying they ran measurements. Until such time that you too learn to do that, and understand it as well as he does, you are not situated as he is. Also, his readership is far broader than yours, I imagine.
According to the ABX DBT crowd and Harman's own speaker testing, observations also count as "facts" if done in a controlled environment and repeated with the same results.
The larger problems are what to measure and then what the measurements mean. I don't think "we" have enough data to realistically address those problems. To use another (disliked) analogy to car-enthusiast testing: few people would choose a street car based on measured performance data alone; there are just too many unquantifiable, subjective areas of car preference. Although we're looking at an entirely different type of choice here, I do think similar principles apply: as many appropriate, useful measurements as possible should be made, combined with subjective impressions described in as much useful detail as possible, without veering off into a reviewer's ego-tripping to prove himself worthy of the job.