Conclusive "Proof" that higher resolution audio sounds different

IMO the worst thing that Meyer and Moran did was to get blinded by the hype of the segment of the recording industry that was attempting to profiteer by retreading existing recordings.
That is a spin Arny. The record industry did not commission such a test. The authors did. It is their job to make sure they learn and follow proper protocols.

They were not alone as the whole high end audio industry was just as blind.
Once again, the "whole high end audio industry" did not commission such a test. The Boston Audio Society did. Well before they did that, they could have looked at MPEG and how it had selected specific tracks out of the millions of pieces of music to be revealing of lossy codecs. They knew their job was to find difficult and appropriate tracks for the task, not to expect the content-creating industry to wake up one morning and create test tracks for them.

That excuse is like blaming the pencil for you doing poorly on a multiple choice quiz. :D

I wasn't consulted, either formally or informally, on any phase of the Meyer and Moran tests or the article, so I absolutely refuse to take any responsibility for it. I have also been very clear when asked, as I am now, that the current value of that work to the ongoing controversy about high resolution, high sample rate audio is pretty limited.
I didn't say you were responsible for the test. I asked if you share any responsibility for carrying the banner of their results for years and years and across many countless posts/forums. This is an example: http://newsgroups.derkeiler.com/Archive/Rec/rec.audio.opinion/2007-10/msg01666.html

The AES Repudiates SACD, DVD-A, and the high resolution audio myth

From: "Arny Krueger" <arnyk@xxxxxxxxxx>
Date: Mon, 29 Oct 2007 09:44:12 -0400
The AES Repudiates SACD, DVD-A, and the high resolution audio myth

2007 September, Volume 55 Number 9

Audibility of a CD-Standard A/D/A Loop Inserted into High-Resolution Audio
Playback
E. Brad Meyer and David R. Moran 775

"Conventional wisdom asserts that the wider bandwidth and dynamic range of
SACD and DVD-A make them of audibly higher quality than the CD format. A
carefully controlled double-blind test with many experienced listeners
showed no ability to hear any differences between formats. High-resolution
audio discs were still judged to be of superior quality because sound
engineers have more freedom to make them that way. There is no evidence that
perceived quality has anything to do with additional resolution or
bandwidth."

Because the AES likes to behave in public like gentlemen, they just didn't
come right out and call the high end audiophile press and many of the
audiophile suppliers, charlatans and liars. But there it is, right between
the lines!


John Atkinson, read it and weep! ;-)

Well, it isn't just John who reads and weeps after seeing such remarks from you, Arny. Some 7 years earlier you posted ITU-R BS.1116 on your now defunct web site. A year or so later you added your "commandments" for the right way to do such a test. Some six years later you read this report and waved it in front of people's noses when it did not remotely follow those documents and the advice you yourself had given people.

Now, when "asked," you say it had "limited" value? Just 7 years ago you were full of praise, drawing conclusions beyond the paper itself and attributing the DIY test to the AES itself.

I don't take any responsibility for things I had nothing to do with (e.g. Meyer and Moran), and I won't take any responsibility for or support any attempts to inflate the global importance of minuscule, questionable, and fragmentary results.
You should take responsibility for championing Meyer and Moran and trying to paper over the test results you yourself created.
 
So why do you insist that the ultrasonic IM tone measurements JA did reflect the jangling keys, proving IM is a possibility? :)
In reality it took the existing 19+20 kHz, 0 dBFS IM stress test and pushed it into the ultrasonic frequency range.
As I mentioned, and so has JA, the really good hardware still handles even this severe ultrasonic test. The two products that had difficulty at 0 dBFS (producing IMD in the audio band only 50 dB below the signal) had no problems when the level was backed off from 0 dBFS and closer to what the jangling keys were.

So we can agree, then, that IMD is not an issue and was not heard by anyone doing this test unless their hardware was faulty (and further testing proved that was not the case for them).

BTW, several times you have insisted JA's measurements show IMD being a possibility, which is the opposite of JA's conclusion, mine, and those of several others who also used other equipment and did further testing.
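For readers following along, the arithmetic of a twin-tone IMD check can be sketched in a few lines of Python. This is a toy model only: the tone frequencies and the quadratic nonlinearity coefficient below are invented for illustration and do not reproduce JA's actual measurement setup.

```python
import numpy as np

fs = 192_000                    # sample rate high enough for ultrasonic tones
t = np.arange(fs) / fs          # one second of signal

# Two ultrasonic tones (frequencies chosen arbitrarily for this sketch)
f1, f2 = 30_000.0, 33_000.0
x = 0.5 * np.sin(2 * np.pi * f1 * t) + 0.5 * np.sin(2 * np.pi * f2 * t)

# Model a weakly nonlinear playback chain with a small quadratic term.
# A second-order nonlinearity produces a difference tone at f2 - f1,
# which lands squarely in the audio band.
y = x + 0.01 * x**2

win = np.hanning(len(y))
spec = np.abs(np.fft.rfft(y * win)) / (win.sum() / 2)   # amplitude-calibrated
freqs = np.fft.rfftfreq(len(y), 1 / fs)

def level_db(f0):
    """Peak level near frequency f0, in dB re full scale (amplitude 1.0)."""
    idx = int(np.argmin(np.abs(freqs - f0)))
    return 20 * np.log10(spec[max(idx - 3, 0):idx + 4].max() + 1e-300)

print("tone level at f1      :", round(level_db(f1), 1), "dBFS")       # about -6 dBFS
print("difference tone 3 kHz :", round(level_db(f2 - f1), 1), "dBFS")  # about -52 dBFS
```

With these made-up numbers the in-band product sits roughly 46 dB below each tone. Note also that backing the stimulus off by 10 dB drops a second-order product by 20 dB, which is one reason a 0 dBFS twin-tone stress test is far more punishing than jangling keys.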

Because it is my best judgement to do so given the relevant facts, no matter who made the measurements that constitute the relevant facts in the matter. I really think that JA knows how to use his test gear as far as he goes with it. We've always differed about where to go and how to interpret what is uncovered once we are there. JA has generally seemed to be from the school of "If it measures different, it sounds different" with some exceptions. This appears to be one of the more mystifying exceptions.

I have always been from the school of thought that tries to unify the results of standard perceptual tests with corresponding test equipment results.
 
That is a spin Arny. The record industry did not commission such a test.

Everybody who is surprised by that given the widespread fraud and deception that has been uncovered since then might want to grab a pointed cap and sit in some corner! ;-)

The authors did. It is their job to make sure they learn and follow proper protocols.

I challenge you to point to where you have previously written or write anew, the known deviations between Meyer and Moran and proper protocols. Not the ones that you speculate existed based on the false philosophy of "Absence of evidence is clear evidence and proof of absence". Let's hear about the differences that you can document with quotes from standards group documents and quotes from documents that Meyer and Moran actually wrote themselves.


Once again, the "whole high end audio industry" did not commission such a test.

Again, everybody who is surprised by that given the widespread fraud and deception that has been uncovered since then might want to grab a pointed cap and sit in some corner! ;-)

The Boston Audio Society did.

Really? I thought that the BAS was more than two people.

Well before they did that they could have looked at MPEG and how it had selected specific tracks out of the millions of pieces of music to be revealing of lossy codecs.

That would be cherry picking test data to whitewash the deceptions by the high resolution segments of the record industry and consumer audio.

They knew their job was to find difficult and appropriate tracks for the task and not have the creating industry wake up one morning and create test tracks for them.


The high-res proponents implicitly made that claim with every album released in a high-rez format, and the high-end audio reviewers were complicit in that deception. I'm willing to allow that the high-end reviewers were innocent dupes.

The fallacy being pursued above appears to be the as yet unproven idea that selecting appropriate program material would change the outcome of any credible attempt to prove that high rez formats reliably make a difference.
 
This laptop has surprisingly good fidelity, given that I have been able to find many differences with it, such as 16 vs 24 bit files and 320 kbps MP3 vs CD.
Like you I always believed that higher fidelity systems allow better discrimination of artifacts, until I learned from lossy audio experts that unmasking is an important aspect of revealing artifacts. Some not so good or even broken systems might do a better job at unmasking than high fidelity systems.
Same for 16 vs 24 bit. Suppose your earphones have a 6 dB boost around 3 kHz. That would give them an advantage in detecting noise (e.g. dither) compared to "flat" headphones.
 
Like you I always believed that higher fidelity systems allow better discrimination of artifacts, until I learned from lossy audio experts that unmasking is an important aspect of revealing artifacts. Some not so good or even broken systems might do a better job at unmasking than high fidelity systems.
Same for 16 vs 24 bit. Suppose your earphones have a 6 dB boost around 3 kHz. That would give them an advantage in detecting noise (e.g. dither) compared to "flat" headphones.
I ran my tests with three headphones so that part is invariant. But yes, if you are a trained listener, you can hear artifacts readily without having the best system. I was commenting on the potential for IM distortion, early clipping, etc.
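The 16-bit noise numbers behind this exchange are easy to sanity-check. Here is a minimal sketch of non-subtractive TPDF-dithered 16-bit quantization; this is the classic textbook arithmetic, not a reproduction of the headphone tests discussed above.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
lsb = 2.0 / 2**16                      # 16-bit step across the [-1, 1) range

# A quiet test tone, then TPDF dither spanning +/- 1 LSB
x = 0.25 * np.sin(2 * np.pi * 997 * np.arange(n) / 48_000)
dither = (rng.random(n) - rng.random(n)) * lsb
q = np.round((x + dither) / lsb) * lsb # dithered 16-bit quantization

# Total error = quantization noise plus the dither itself
err = q - x
rms_dbfs = 20 * np.log10(np.sqrt(np.mean(err**2)))
print(round(rms_dbfs, 1))              # about -96.3 dBFS
```

The error floor comes out near -96.3 dBFS. A headphone with a +6 dB peak around 3 kHz lifts whatever part of that noise falls in the ear's most sensitive region by 6 dB, which is the unmasking effect described above; a 24-bit channel would put the same floor roughly 48 dB lower.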
 
I challenge you to point to where you have previously written or write anew, the known deviations between Meyer and Moran and proper protocols. Not the ones that you speculate existed based on the false philosophy of "Absence of evidence is clear evidence and proof of absence". Let's hear about the differences that you can document with quotes from standards group documents and quotes from documents that Meyer and Moran actually wrote themselves.

You can't manufacture such rules that have no foundation in science, Arny. If I said I had tested these files and found one of them to sound night-and-day better than the other, what would you ask me? The first question would be whether the test was blind or not, correct? And if I refused to answer, you would conclude that it was not and that the results were invalid. If you asked me whether the levels were matched and again I said nothing, you once more would declare that absence of evidence fatal and dismiss the results. Clearly, absence of data was probative there.

Now the tables are turned, and it seems the rules too. Am I supposed to read into a test all the controls that are missing from the paper?

Let's review what you have written on this front:

Here are some guidelines to follow:

Ten (10) Requirements For Sensitive and Reliable Listening Tests

(1) Program material must include critical passages that enable audible differences to be most easily heard.

(2) Listeners must be sensitized to audible differences, so that if an audible difference is generated by the equipment, the listener will notice it and have a useful reaction to it.

(3) Listeners must be trained to listen systematically so that audible problems are heard.

Is it your opinion that they have complied with this part of the list to generate "reliable" and "sensitive" results?
 
Everybody who is surprised by that given the widespread fraud and deception that has been uncovered since then might want to grab a pointed cap and sit in some corner! ;-).
The people who uncovered that "deception" were not your or my camp Arny. It was the opposition force. Our camp believed high res files were high res and ran off running tests and got them published in the AES Journal. We wanted to believe so we did. Had it not been for people we can't stand, we would not know the truth. So if I were you, I would not be so proud of this talking point.
 
The people who uncovered that "deception" were not your or my camp Arny.

AFAIK the first whistle blower was David Griesinger who wrote this paper:

"Perception of mid frequency and high frequency intermodulation distortion in loudspeakers, and its relationship to high-definition audio." and first presented it in 2003 at the AES 24th International Conference, Banff, Alberta, Canada June 26-28. http://www.davidgriesinger.com/intermod.ppt. I will take the liberty of putting him and myself in the same general camp of audio. Clearly an objectivist.
 
Is it your opinion that they have complied with this part of the list to generate "reliable" and "sensitive" results?

arny said:
Ten (10) Requirements For Sensitive and Reliable Listening Tests

(1) Program material must include critical passages that enable audible differences to be most easily heard.

This was the thesis of the test, so it could not be complied with. It was the independent variable.

arny said:
(2) Listeners must be sensitized to audible differences, so that if an audible difference is generated by the equipment, the listener will notice it and have a useful reaction to it.

Since the difference is entirely in the ultrasonic range, our best science says that it is not an audible difference. Again it is part and parcel of the thesis of the experiment.

arny said:
(3) Listeners must be trained to listen systematically so that audible problems are heard.

Again the difference is entirely in the ultrasonic range, our best science says that it is not an audible difference. Again it is part and parcel of the thesis of the experiment.

Is it your opinion that they have complied with this part of the list to generate "reliable" and "sensitive" results?

The question above is the very question that the experiment attempted to develop relevant evidence to address. Therefore trying to comply with these guidelines is itself a conundrum, an impossible mission.

It is not uncommon for people who are hypercritical and don't really understand experimental design or the problem at hand to unknowingly add impossible requirements to a difficult experiment.
 
The people who uncovered that "deception" were not your or my camp Arny. It was the opposition force. Our camp believed high res files were high res and ran off running tests and got them published in the AES Journal. We wanted to believe so we did. Had it not been for people we can't stand, we would not know the truth. So if I were you, I would not be so proud of this talking point.

In case it wasn't obvious, I could well be one of those people you can't stand. :eek: I do not consider myself to be in your camp, i.e. I do not consider myself to be an "objectivist". If I were forced to choose, I would place myself in the "subjectivist" camp, but I don't consider myself to be a typical "subjectivist". Indeed, I believe that a substantial fraction of these people fit into the category of "audiophools". (Unfortunately, it is not always easy to tell from Internet posts which category a particular poster fits into.) If I were to categorize myself, it would be as "music lover" and "unpaid part-time recording engineer". If pressed, I would admit to being a bit of a chameleon, since my personal metaphysics includes a monism with two sides of the same coin, one subjective and the other objective.
 
AFAIK the first whistle blower was David Griesinger who wrote this paper:

"Perception of mid frequency and high frequency intermodulation distortion in loudspeakers, and its relationship to high-definition audio." and first presented it in 2003 at the AES 24th International Conference, Banff, Alberta, Canada June 26-28. http://www.davidgriesinger.com/intermod.ppt. I will take the liberty of putting him and myself in the same general camp of audio. Clearly an objectivist.
Sorry, no. David's presentation was in 2003 as noted above. The Meyer and Moran test was conducted in 2007: http://www.aes.org/e-lib/browse.cfm?elib=14195

Authors: Meyer, E. Brad; Moran, David R.
Affiliation: Boston Audio Society, Lincoln, MA, USA
JAES Volume 55 Issue 9 pp. 775-779; September 2007
Publication Date:September 15, 2007

So clearly David did not analyze the poor choice of content by Meyer and Moran. Citing that paper is quite damaging to your case though Arny. It shows how early this knowledge was there that SACDs did not necessarily have ultrasonic content. Yet Meyer and Moran assumed they did and proceeded to run with their tests. Perils of being a hobbyist who doesn't attend industry conferences and read the papers from them.

How much deeper should we dig this hole, Arny?
 
It is not uncommon for people who are hypercritical and don't really understand experimental design or the problem at hand to unknowingly add impossible requirements to a difficult experiment.

If a set of requirements is necessary to have a valid experiment, it is irrelevant whether the available experimenters are incapable of meeting those requirements due to their own incapacity, the availability of the necessary experimental technology, or the availability of a suitable and generally accepted scientific paradigm. The requirements still have to be met for the experiment to be valid, i.e. to produce relevant results.

If the "impossible" requirements are a logical consequence of the experimental question and experimental method then it may be that the original experimental question was ill formed or the experimental approach inappropriate. In my opinion, it is not up to critics to prove that objections are valid -- it is up to the formulators of an experimental question/method to justify the validity of their approach. While this is just my opinion, in over half a century I have yet to find a person whom I would characterize as "first-rate" who did not agree with me on this point. The best people followed this approach as a matter of course and did not need convincing. The worst people were ones who applied a high standard to others while following a lower standard for their own work. One of my jobs was to take junior people and turn them into first-rate engineers through careful mentoring and to weed out the bottom category if all else failed.
 
Sorry, no. David's presentation was in 2003 as noted above. The Meyer and Moran test was conducted in 2007: http://www.aes.org/e-lib/browse.cfm?elib=14195

The above post is inscrutable. It starts out with "Sorry, No" and then appears to agree with facts that I obviously know about, believe, and mention whenever relevant including a discussion of who first detected the SACD/DVD-A High Resolution Fraud.

So clearly David did not analyze the poor choice of content by Meyer and Moran.

I have it on good authority that if you twist a post far enough beyond recognition it can appear to be self-contradictory. The above is a straw man argument.

If restated to ask the question, "Were Meyer and Moran aware of Griesinger's 2003 paper when they did the work for their 2007 JAES paper?" then that might be a proper question that you might ask them, Amir, since it seems to be of such great interest to you.


Citing that paper is quite damaging to your case though Arny.

What case might that be, Amir? As far as I'm concerned it is just a relevant fact that stands on its own merits.

It shows how early this knowledge was there that SACDs did not necessarily have ultrasonic content.

OK. So what?

The title of Meyer and Moran's paper is "Audibility of a CD-Standard A/D/A Loop Inserted into High-Resolution Audio Playback". Resolution and bandwidth are orthogonal attributes of an audio signal, so there is no requirement for a recording to have extended bandpass to qualify for investigation of alleged higher resolution.

Yet Meyer and Moran assumed they did and proceeded to run with their tests.

Seems reasonable given that they were testing resolution and not necessarily bandwidth.

How much deeper should we dig this hole, Arny?

The hole that has been dug appears to be one of excess concern about bandwidth when the paper being discussed refers to resolution. Or are you unaware of the fact that resolution and bandwidth are orthogonal properties of a signal?
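The orthogonality claim is easy to demonstrate numerically: requantizing a signal changes its amplitude resolution while leaving ultrasonic content untouched, and lowpass filtering does the reverse. The sketch below uses arbitrarily chosen frequencies and illustrates only the general signal-theory point, nothing from the Meyer and Moran paper itself.

```python
import numpy as np

fs = 96_000
t = np.arange(fs) / fs
# An audio-band tone plus a small ultrasonic tone
x = 0.4 * np.sin(2 * np.pi * 1_000 * t) + 0.1 * np.sin(2 * np.pi * 30_000 * t)

# Change "resolution" only: requantize to 16 bits, bandwidth untouched
lsb = 2.0 / 2**16
x_16bit = np.round(x / lsb) * lsb

# Change "bandwidth" only: brick-wall lowpass at 20 kHz, resolution untouched
X = np.fft.rfft(x)
f = np.fft.rfftfreq(len(x), 1 / fs)
X[f > 20_000] = 0.0
x_lp = np.fft.irfft(X, n=len(x))

def tone_amp(sig, f0_hz):
    """Amplitude of an exactly on-bin tone (bin spacing here is 1 Hz)."""
    return 2.0 * np.abs(np.fft.rfft(sig)[int(f0_hz)]) / len(sig)

print(tone_amp(x_16bit, 30_000))   # ~0.1  ultrasonic tone survives requantizing
print(tone_amp(x_lp,    30_000))   # ~0.0  removed by the lowpass
print(tone_amp(x_lp,     1_000))   # ~0.4  audio band untouched
```

Either operation can be applied without the other, which is the sense in which resolution and bandwidth are independent knobs on the same signal.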
 
If a set of requirements is necessary to have a valid experiment, it is irrelevant whether the available experimenters are incapable of meeting those requirements due to their own incapacity, the availability of the necessary experimental technology, or the availability of a suitable and generally accepted scientific paradigm. The requirements still have to be met for the experiment to be valid, i.e. to produce relevant results.

If the "impossible" requirements are a logical consequence of the experimental question and experimental method then it may be that the original experimental question was ill formed or the experimental approach inappropriate. In my opinion, it is not up to critics to prove that objections are valid -- it is up to the formulators of an experimental question/method to justify the validity of their approach. While this is just my opinion, in over half a century I have yet to find a person whom I would characterize as "first-rate" who did not agree with me on this point. The best people followed this approach as a matter of course and did not need convincing. The worst people were ones who applied a high standard to others while following a lower standard for their own work. One of my jobs was to take junior people and turn them into first-rate engineers through careful mentoring and to weed out the bottom category if all else failed.

Exactly correct & something I have been saying since the start of this thread - results from these flawed experiments should be treated as anecdotal evidence. The pretense of rigour is just pseudo-science & is now exposed as the sham that it has always been. This pretense is very damaging to uncovering truth & advancement in understanding. Failure to face up to this only further damages advancement. I had hoped that these results might act as a wake-up call to drive more rigour into such listening tests but my optimism was ill-founded & naive. Maybe it's too soon to expect such a discussion/response to improving such tests & maybe in time we might see some movement along these lines but so far, nada.
 
Exactly correct & something I have been saying since the start of this thread - results from these flawed experiments should be treated as anecdotal evidence. The pretense of rigour is just pseudo-science & is now exposed as the sham that it has always been. This pretense is very damaging to uncovering truth & advancement in understanding. Failure to face up to this only further damages advancement. I had hoped that these results might act as a wake-up call to drive more rigour into such listening tests but my optimism was ill-founded & naive. Maybe it's too soon to expect such a discussion/response to improving such tests & maybe in time we might see some movement along these lines but so far, nada.

The crux of the issue seems to be related to the question: Is there such a thing as ultrasound?

Testing for the audibility of ultrasound seems to be a contradiction in terms.

How does one test for the audibility of ultrasound?
 
The crux of the issue seems to be related to the question: Is there such a thing as ultrasound?
Eh, yes there is????

Testing for the audibility of ultrasound seems to be a contradiction in terms.

How does one test for the audibility of ultrasound?
So you set up a test and you don't know what you are testing for?
As Amir says, time to put that shovel away, Arny; that's an inescapable hole you have dug at this stage.
 
The crux of the issue seems to be related to the question: Is there such a thing as ultrasound?

Testing for the audibility of ultrasound seems to be a contradiction in terms.

How does one test for the audibility of ultrasound?

I made that mistake about this thread too, Arny. This thread was never about ultrasound, or even high rez. It was an attempt to critique how forum ABX tests are done and to note that they have some deficiencies when used as "proof". I agree with that premise, but regret taking part in what seems a cultivated misdirection to introduce one idea while pretending to talk about another.
 
The crux of the issue seems to be related to the question: Is there such a thing as ultrasound?

Testing for the audibility of ultrasound seems to be a contradiction in terms.

How does one test for the audibility of ultrasound?

You are chasing your semantic tail. I don't know if you are merely confused or deliberately trying to befuddle your opponents. In either case, you defeat your own credibility, just as you did in an earlier post by creating FUD around the term "resolution" vs. "bandwidth". If one wishes to conduct informed scientific discussions, one must be precise in one's terminology. One must either use commonly agreed-upon terminology or provide a complete definition of one's unique terminology.

There will be no confusion if one says "frequencies above NN kHz" rather than "Ultrasound".

There will be no confusion if one uses numerical bit depth when characterizing a PCM digital channel, or S/N ratio in dB when characterizing an analog channel. There will be no confusion if one uses sampling rate when describing a digital channel and bandwidth when describing an analog channel. Those familiar with the theory and practice of noise shaping and dither will understand that there are tradeoffs between the bit depth and sampling rate of PCM channels so as to achieve a desired signal-to-noise ratio in various frequency bands. There are relationships between all of these parameters that have a theoretical basis in information theory and various practical implementations in audio and telecommunications applications. Since you claim to have expertise in this area, surely you are familiar with the technical literature, which comes in the form of journal articles, Masters and PhD theses, monographs, textbooks, and more recently on-line versions of university courses.
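One of the tradeoffs named above can be shown concretely: first-order error-feedback noise shaping trades flat quantization noise for noise pushed toward high frequencies, lowering the floor in the audio band of a high-sample-rate channel. This is a minimal sketch of the textbook technique, not any specific product's implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
fs, n = 96_000, 1 << 17
lsb = 2.0 / 2**16
x = 0.5 * np.sin(2 * np.pi * 997 * np.arange(n) / fs)

# Reference: plain TPDF-dithered 16-bit quantization (flat noise)
d1 = (rng.random(n) - rng.random(n)) * lsb
flat = np.round((x + d1) / lsb) * lsb

# First-order error feedback: the quantizer error is subtracted from the
# next input sample, shaping the output noise by (1 - z^-1)
d2 = (rng.random(n) - rng.random(n)) * lsb
shaped = np.empty(n)
e = 0.0
for i in range(n):
    u = x[i] - e
    shaped[i] = np.round((u + d2[i]) / lsb) * lsb
    e = shaped[i] - u            # error fed back to the next sample

def band_noise_db(y, hi_hz):
    """Mean per-bin noise power below hi_hz, in dB (relative units)."""
    E = np.fft.rfft(y - x)
    f = np.fft.rfftfreq(n, 1 / fs)
    band = np.abs(E[f <= hi_hz]) ** 2
    return 10 * np.log10(np.mean(band) + 1e-300)

audible_gain = band_noise_db(flat, 20_000) - band_noise_db(shaped, 20_000)
print(round(audible_gain, 1))    # a few dB quieter below 20 kHz at fs = 96 kHz
```

With steeper shaping filters and higher sample rates the in-band gain grows dramatically; that is essentially how DSD's 1-bit stream achieves a low audio-band noise floor despite its tiny per-sample amplitude resolution.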
 
As to anyone planning additional SRC comparison tests: simply avoid using Sonic Solutions SRCs, or any others that do not provide correct gain. There is no point to a 0.1 dB reduction; it is unnecessary and serves no useful purpose, but it does complicate any SRC experiments. Recording engineers have no business shaving headroom as close as 0.1 dB unless they have already sold out to the loudness wars. While I'm knocking Sonic Solutions, let me put in a comment about iZotope's otherwise excellent "64 bit" SRC. It has subsample delays as a result of the filter lengths used. This prevents a good null even when linear-phase filter parameters have been selected. I agree with the designer's comment that this has no "sonic significance" (private email), but it is a pain in the ass when it comes to using this converter as a test/measurement tool in experiments such as these.
IMO you're a bit hard on Sonic. AFAIK the gain reduction is a leftover from the days that Sonic used fixed point processing. Any clipping in the process would be permanent, unlike floating point. I don't know if Sonic is still using fixed point though. Even the highly respected Weiss Saracon SRC has a user adjustable gain setting and the manual shows a -0.2 dB default setting (confirmed by a Gearslutz post). I wouldn't dismiss excellent SRCs just because of gain reduction.
I'm also a great fan of iZotope. Have you noticed that 352.8->44.1 SRC doesn't have subsample delay and nulls to about -150dBFS? Same for 96->48. It's reasonable to assume that 96->44.1 performs equally well, but you can't prove it with a null test. The developer told me that the subsample delay will be removed in a future version, without sacrificing SRC quality. Apparently the imminent RX4 should have this.
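The effect of a 0.1 dB trim (or a subsample delay) on a null test is easy to quantify. A sketch in Python with pure synthetic signals, not the actual SRC outputs discussed above:

```python
import numpy as np

fs = 48_000
t = np.arange(fs) / fs
ref = 0.5 * np.sin(2 * np.pi * 1_000 * t)   # half-scale 1 kHz tone

def null_depth_db(a, b):
    """Peak residual of the difference, in dB re full scale."""
    return 20 * np.log10(np.max(np.abs(a - b)) + 1e-300)

# Bit-identical data nulls essentially perfectly
print(null_depth_db(ref, ref.copy()))        # effectively a perfect null

# A 0.1 dB gain trim alone caps the achievable null
trimmed = 10 ** (-0.1 / 20) * ref
print(round(null_depth_db(ref, trimmed), 1))  # about -44.8 dBFS

# A half-sample delay (as from an SRC's subsample offset) also caps it
delayed = np.interp(t - 0.5 / fs, t, ref)
print(round(null_depth_db(ref, delayed), 1))  # roughly -30 dBFS at 1 kHz
```

Gain errors scale the whole signal, so the best null equals the signal level times (1 - 10^(-0.1/20)), about 39 dB below the tone here, no matter how good the converters are; this is why a defeatable gain setting matters before using an SRC in null experiments.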
 
IMO you're a bit hard on Sonic. AFAIK the gain reduction is a leftover from the days that Sonic used fixed point processing. Any clipping in the process would be permanent, unlike floating point. I don't know if Sonic is still using fixed point though. Even the highly respected Weiss Saracon SRC has a user adjustable gain setting and the manual shows a -0.2 dB default setting (confirmed by a Gearslutz post). I wouldn't dismiss excellent SRCs just because of gain reduction.
I'm also a great fan of iZotope. Have you noticed that 352.8->44.1 SRC doesn't have subsample delay and nulls to about -150dBFS? Same for 96->48. It's reasonable to assume that 96->44.1 performs equally well, but you can't prove it with a null test. The developer told me that the subsample delay will be removed in a future version, without sacrificing SRC quality. Apparently the imminent RX4 should have this.

If Sonic is still using relics of ancient fixed point technology then that's their problem. I have no problem with a default gain parameter being provided, provided it can be defeated.

Unfortunately, I am still using iZotope RX2 Advanced and haven't yet upgraded to RX3, let alone RX4, so I don't have DXD sample rate capability. If truth be told, I don't think I could use this speed for audio restoration without getting a new, faster computer; otherwise I would run out of patience. Most of the work I have done recently has been based on live recordings made by others at 44/24, or cassette tape transfers that I have made at 96/24 and 88/24, and I don't think the higher sampling rates are really needed for material that has already been band-limited to slightly more than 20 kHz by the cassette format (on my Nak CR-7). But some of the noise reduction features, particularly for vocal material, look like real time savers. I have a lot of lecture tapes that were recorded in the 1970s in India, and these have horrible noise, tape problems, etc., and require many hours of work per hour of program material. Thanks for the info; it may push me over the edge to upgrade my iZotope RX.
 
