Hi Tim. First, let me say that members should not keep addressing each other to say they don't understand this and that. Let's just express our own opinions, as Tim is doing above, or stick to the technical topic.
Back to Tim's post: we do research in a specific way, based on the collective experience we have had in the field. It might seem odd at first, but you have to trust that we know what we are doing, Tim. Here is an example of how we have learned these lessons the hard way, from JJ's presentation that Arny linked to:
JJ recounts the first lossy music codec he designed: how it sounded good, and sounded good, and then it didn't. Notice that it sounded good on far more tracks than the ones on which it didn't. That doesn't matter. The codec must have even performance across ALL content and ALL listeners. Testing that exhaustively is of course not remotely practical. So what to do? We test difficult tracks using trained listeners for the core R&D. We do occasionally run larger-scale tests with a broader set of music and listeners, but in general, the science is made from the former.
As JJ explains, we aim to find the threshold of hearing for these artifacts. That is the science we want to discover. It goes without saying that there are people who are not that good at hearing them, and content that is not as revealing. We don't need to keep testing for that. I could test a million people who think MP3 at 128 kbps sounds the same as CD. When developing a new codec, why would I want to bother testing those people?
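To make the statistics behind finding that threshold concrete, here is a minimal sketch (my own illustration, not from JJ's talk) of how a single listener's ABX result is commonly scored: count the correct identifications and compute the probability of doing at least that well by pure guessing. The trial count and the 0.05 criterion are assumptions for the example.

```python
from math import comb

def abx_p_value(correct: int, trials: int) -> float:
    """Probability of getting at least `correct` right out of `trials`
    ABX trials by pure guessing (chance = 1/2 per trial)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2**trials

# Hypothetical example: a listener scores 14 out of 16.
p = abx_p_value(14, 16)
print(f"p = {p:.4f}")  # ~0.0021, very unlikely to be guessing
# A common (assumed) criterion: treat p < 0.05 as a positive detection.
```

With results like that from trained listeners on difficult content, each test carries real information; the same 16 trials from a listener at chance tell us nothing we didn't already know.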
Also, a point of correction, on a misunderstanding which hopefully was not created by me. My training is in finding audio compression artifacts. Those artifacts I know forward and backward. What I used here is not that training. Going into Arny's test, or that of Scott/Mark, I did not know what to look for, in contrast to the clear picture I have of compression artifacts. What did help me is being a critical listener, not someone experienced in hearing these specific artifacts (again, JJ's video explains the same).
Think of me as a detective at a murder scene. Every case is new, so he doesn't walk in knowing whodunit. He knows how to gather evidence and, hopefully, solve the case. The other people who found differences without formal training either have critical listening abilities already or developed the first stage of them through this experience. If they passed these tests, they are above ordinary listeners.
One of the "proven" values of these tests/this thread is just that: that it is important, as is done routinely in the industry, to deploy difficult content and critical listeners. For small differences, data gathered any other way is of no value. We know most people can't hear those artifacts, or don't care to hear them.
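A quick back-of-the-envelope sketch (my own illustration, with assumed numbers) of why pooling ordinary listeners buries a real detection: suppose one trained listener genuinely hears the artifact 90% of the time while 99 ordinary listeners are at chance. The pooled hit rate barely moves off 50%, so averaging everyone together washes out the one result that mattered.

```python
# Assumed scenario: 1 sensitive listener (90% correct) pooled with
# 99 chance-level listeners (50% correct), 16 ABX trials each.
trials_each = 16
sensitive_correct = 1 * trials_each * 0.90     # expected hits from the trained listener
insensitive_correct = 99 * trials_each * 0.50  # expected hits from everyone else

pooled_rate = (sensitive_correct + insensitive_correct) / (100 * trials_each)
print(f"pooled hit rate = {pooled_rate:.3f}")  # 0.504, indistinguishable from chance
# The trained listener alone is at 0.900, clearly above chance.
```

This is why the individual result from a critical listener on revealing content is the unit of evidence, not the average over a crowd.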
So your wish for more music samples and more ordinary listeners is simply something that researchers in this field don't pursue, for the reasons I have explained.