What you say is true, Jack, but it answers a different question: namely, the routinely stated claim that audiophiles have different tastes than the general public. As you say, research shows that, just like other specialized groups of listeners, their *overall* preference is no different. A speaker that does well with trained listeners also does well with them.
Michael asked a different question. He asked about *listening skills*, not loudspeaker preference. For that, we can go directly to the source of the research:
Differences in Performance and Preference of Trained versus Untrained Listeners in Loudspeaker Tests: A Case Study
Sean E. Olive, AES Fellow
First, this graph:

So clearly nothing here is a matter of scale: trained listeners have far better skills. And so that there is no implication of me changing the research again, here are the original words in the paper:
"The performance of the trained panel is significantly
better than the performance of any other category of listener.
They are about three times better than the best group
of audio retailers, five times better than the reviewers, and
27 times better than the students. The combination of
training and experience in controlled listening tests clearly
has a positive effect on a listener’s performance. The students’
poor performance is likely due to the student’s lack
of training and professional experience in the field of
audio. The reviewers’ performance is somewhat of a surprise
given that they are all paid to audition and review
products for various audiophile magazines. In terms of listening
performance, they are about equal to the marketing
and sales people, who are well below the performance of
the audio retailers and trained listeners."
So there is no manipulation of the research: it is very clear they did poorly. But how did this data come about? The answer is in the paragraph just before that one, where a statistical analysis is performed on each group's results. The analysis measures the consistency with which each group rates the same product. The same loudspeaker is presented multiple times in the study; an objective instrument would rate it the same every time. Humans are not that consistent, but ideally they would come close. The research shows that without training, groups such as audio reviewers are highly inconsistent:
"To examine listener performance in view of occupation
more clearly, the mean listener FL values were plotted as
a function of occupation for both tests (see Fig. 8).
In the four-way tests the listener performance of the different
categories based on the mean FL values from highest
to lowest was trained listeners (94.36), audio retailers
(34.57), and audio reviewers (18.16)."
In other words, they cannot be counted on to tell the "truth" in a single trial the way trained listeners can. (Note that 94.36 / 34.57 ≈ 3 and 94.36 / 18.16 ≈ 5, which is where the "three times better" and "five times better" figures in the first quote come from.) You have to test them over and over again and then look at the aggregate. As a group, they simply lack the ability to spot a problem and consistently point it out in every comparison with other loudspeakers.
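For those who want to see the mechanics, the FL value is, as I read the paper, essentially a per-listener F statistic from an analysis of variance: the variance a listener's ratings attribute to the loudspeakers themselves, divided by the variance left over as trial-to-trial noise. Here is a minimal sketch of that idea in Python; the function name, the toy ratings, and the exact ANOVA layout are my illustrations, not the paper's actual code or data:

```python
import numpy as np

def listener_FL(ratings):
    """Per-listener F statistic: how reliably a listener separates
    loudspeakers given repeated ratings of the same speakers.

    ratings: array of shape (n_speakers, n_repeats); each row is one
    listener's repeated ratings of a single loudspeaker.
    """
    ratings = np.asarray(ratings, dtype=float)
    k, n = ratings.shape                       # k speakers, n repeats each
    grand_mean = ratings.mean()
    speaker_means = ratings.mean(axis=1)

    # Variance the ratings attribute to the loudspeakers themselves
    ms_between = n * np.sum((speaker_means - grand_mean) ** 2) / (k - 1)
    # Leftover trial-to-trial inconsistency (noise)
    ms_within = np.sum((ratings - speaker_means[:, None]) ** 2) / (k * (n - 1))
    return ms_between / ms_within

# Made-up ratings (0-10 preference scale): 4 speakers heard 3 times each.
consistent   = [[8, 8, 7], [3, 3, 4], [6, 6, 6], [2, 2, 3]]
inconsistent = [[8, 3, 6], [3, 7, 4], [6, 2, 8], [2, 6, 3]]

print(round(listener_FL(consistent), 1))    # ~71.6: ratings track the speaker
print(round(listener_FL(inconsistent), 1))  # ~0.4: ratings are mostly noise
```

A consistent listener's ratings move with the speaker being played, so the F value is large; an inconsistent listener's ratings are dominated by noise, so the F value collapses. That is the same mechanism that separates the trained panel (94.36) from the reviewers (18.16) in Fig. 8.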
This is the data, Jack. And unfortunately it is not so easy to understand and parse out of a sea of research spanning a 30-year period. One has to read every bit of it, over and over again, to get a consistent view. Or you can trust that I am not trying to screw you when I summarize it.