Conclusive "Proof" that higher resolution audio sounds different

The words were 'weasel words' because there was no substance in the body of the paper to back them up. This is not science, this is BS. I am not going to rehash the arguments; they should be clear to anyone who has read and understood the thread. However, you are right, a lot of scientific papers put stuff in their conclusions that isn't justified by the meat of the work. Science is like any other human field: there are a lot of second-raters, BS artists and a few outright scam artists. (The problem is often worse in engineering fields, where there are likely to be commercial pressures involved as well.)

Congrats on completely misreading my meaning, as well as coming up with a novel meaning of the phrase 'weasel words', which I have always understood to mean, 'words that try to have it both ways' or words 'that purposely fail to be clear in their meaning' -- not 'words that aren't supported by what came before'.

The commonplaces in conclusions that I am thinking of are 1) language that admits the possibility of the present conclusions being incomplete, or even wrong, leading to 2) calls for more research to clarify/verify the conclusions presented herein.

While easily satirized or dismissively characterized -- possibly as being 'weasel words' -- both are worthy practices and actually a sign of *good science*.

Then again someone who thinks PEAR was actually science that demonstrated ESP might disagree.

One of several things that strikes me as funny about the 'Meyer & Moran stink' crowd is that before M&M, few were making a fuss about the plain fact that many praised-to-the-skies 'high rez' releases were just rereleases of analog (tape) recordings*. Reviewers would still go ga-ga over the new improved 'high rez' sound -- and attribute it to the format. But when M&M used those (as well as 'pure' high rez) recordings in their tests, suddenly they'd committed an invalidating fail.

(M&M actually allude to this in their paper, putting forth at the very end the proposition that when there is improved sound, it is more likely due to better mastering.)
 
Yes & have you read their two conclusions?
"Two main conclusions are offered: first, there exist audible signals that cannot be encoded transparently by a standard CD; and second, an audio chain used for such experiments must be capable of high-fidelity reproduction. "

Yes, if you understand the limits of Redbook, you know there 'can be' audible signals that cannot be encoded transparently by a 'standard CD'. I can think of one example of such a signal right off the bat. I consider it rather unimportant for home audio enjoyment, though. So I'm eager to see what Meridian claims those signals are and what it defines as a 'standard CD'.
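(If anyone wants the arithmetic behind those limits, here is a minimal Python sketch -- standard textbook figures only, and nothing to do with whatever Meridian will claim:)

```python
# Hard limits of Redbook (16-bit / 44.1 kHz), from first principles.

SAMPLE_RATE = 44_100   # Hz, Redbook sampling rate
BIT_DEPTH = 16         # bits per sample

# 1. Bandwidth: nothing above the Nyquist frequency can be encoded.
nyquist_hz = SAMPLE_RATE / 2
print(f"Nyquist limit: {nyquist_hz:.0f} Hz")          # 22050 Hz

# 2. Dynamic range: quantization SNR for a full-scale sine,
#    via the usual 6.02*N + 1.76 dB rule of thumb.
snr_db = 6.02 * BIT_DEPTH + 1.76
print(f"Theoretical 16-bit SNR: {snr_db:.1f} dB")     # ~98.1 dB

# A signal with content above ~22 kHz, or one buried ~100 dB below
# full scale, cannot pass through a 'standard CD' unchanged --
# whether such signals matter in home listening is the separate
# question raised above.
```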


So I'm not exactly sure why you are citing an upcoming paper that seems to contradict your argument? Maybe it's to ensure a balanced view?

You don't understand what I'm getting at. Maybe it's best for you to watch from the sidelines, or go back to arguing your incoherent positions on sighted listening versus 'flawed' DBTs with Tim.
 
One of several things that strikes me as funny about the 'Meyer&Moran stink' crowd is that before M&M, few were making a fuss about the plain fact that many praised-to-the-skies 'high rez' releases were just rereleases of analog (tape) recordings*. Reviewers would still go ga-ga over the new improved 'high rez' sound -- and attribute it to the format. But when M&M used those (as well as 'pure' high rez) recordings in their tests , suddenly they'd committed an invalidating fail.
Even funnier is the same pattern playing out over here. Constant challenges to run DBTs as presented. When all of a sudden positive results are generated, the very people who created the tests started to shoot holes in their own tests! So I say let's clean up our own house before we worry about others. Humans are humans. Pretending you are superior to the other camp just doesn't pass muster.

( M&M actually allude to this in their paper, putting forth at the very end, the proposition that when there is improved sound , it is more likely due to better mastering.)
Allude? I don't call this alluding:

Though our tests failed to substantiate the claimed advantages of high-resolution encoding for two-channel audio, one trend became obvious very quickly and held up throughout our testing: virtually all of the SACD and DVD-A recordings sounded better than most CDs—sometimes much better. Had we not “degraded” the sound to CD quality and blind-tested for audible differences, we would have been tempted to ascribe this sonic superiority to the recording processes used to make them.

[...] These recordings seem to have been made with great care and manifest affection, by engineers trying to please themselves and their peers. They sound like it, label after label. High-resolution audio discs do not have the overwhelming majority of the program material crammed into the top 20 (or even 10) dB of the available dynamic range, as so many CDs today do.


This says audiophiles had every right to think the new formats produced superior sound.

What is funny is that their assessment of better fidelity than the CD was no doubt a sighted one! It is not like they offer any double-blind test results for that. In other words, they were part-time vegetarians. Steak on Sundays and rice and beans for the rest of the week....
 
Yes, if you understand the limits of Redbook, you know there 'can be' audible signals that cannot be encoded transparently by a 'standard CD'. I can think of one example of such a signal right off the bat. I consider it rather unimportant for home audio enjoyment, though. So I'm eager to see what Meridian claims those signals are and what it defines as a 'standard CD'.
Forget Meridian for the moment; I'm interested in what you claim are those audible signals that can't be encoded transparently in the RBCD standard, and why they are unimportant for home audio listening?


You don't understand what I'm getting at.
Please tell us all!
 
Sasully, play if you will, but I appeal to your sense of decency to do so in a manner appropriate for a forum that has no age or gender restrictions.
 
Hey Amir, I've been avoiding this for a while because of the noise, but here goes.
Constant challenges to run DBTs as presented. When all of a sudden positive results are generated, the very people who created the tests started to shoot holes in their own tests!
Not just a positive DBT result, but an ABX DBT, no less. That's exactly what's been lacking all these years, and nobody has ever done it before IIRC (and I'm as guilty as anyone). And ABX tests are very good at confirming positive results - less so at confirming negative results.
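(The asymmetry is just binomial arithmetic. A quick sketch with made-up trial counts, not Amir's actual scores:)

```python
from math import comb

def abx_p_value(n_trials: int, n_correct: int) -> float:
    """One-sided probability of getting at least n_correct of
    n_trials right by pure guessing (chance = 0.5 per ABX trial)."""
    tail = sum(comb(n_trials, k) for k in range(n_correct, n_trials + 1))
    return tail / 2 ** n_trials

# Hypothetical numbers for illustration only:
print(abx_p_value(16, 13))  # ~0.011 -- hard to explain as guessing
print(abx_p_value(16, 9))   # ~0.40  -- tells you almost nothing
```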

The skeptics have been able to hide behind the Meyer & Moran results for a long time, and it's about time somebody overturned them. I think this sort of science has set HiFi back many years, rather like the THRESHOLD FOR DISTORTIONS DUE TO JITTER paper by Ashihara et al did for digital audio jitter. Since 2006, people have been able to boldly state that normal levels of jitter found in audio equipment are quite inaudible, without realising how wrong they were. Just because something looks scientific doesn't mean that it's right.
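(For scale, the usual narrowband phase-modulation approximation puts numbers on "normal levels of jitter" -- a sketch with assumed values, not measurements of any device, and saying nothing about where anyone's audibility threshold sits:)

```python
import math

def jitter_sideband_db(tone_hz: float, jitter_s: float) -> float:
    """Level of each first-order sideband relative to the carrier (dB)
    for small sinusoidal sampling jitter: beta = 2*pi*f*Tj radians of
    peak phase deviation, sideband amplitude ~ beta/2 of the carrier."""
    beta = 2 * math.pi * tone_hz * jitter_s
    return 20 * math.log10(beta / 2)

# Assumed, illustrative values:
print(jitter_sideband_db(10_000, 1e-9))     # 1 ns on a 10 kHz tone: ~ -90 dB
print(jitter_sideband_db(10_000, 250e-12))  # 250 ps:                ~ -102 dB
```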

I don't think Meyer & Moran were nearly as misguided as Ashihara, but there was something very odd about their procedure that many people don't pick up on. IIRC, they didn't simply compare 16-bit vs 24-bit audio; they actually converted analogue audio to digital, did the 16/44.1 limiting in the digital domain, then converted back to analogue. And then said that process was transparent - 16-bit audio is one thing, but I can't believe two conversion stages are really transparent.
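(For anyone who hasn't pictured the procedure, here is a rough numpy/scipy sketch of the purely digital part of such a 16/44.1 "bottleneck" -- a synthetic signal through resampling and dithered requantization. The two analogue conversion stages that bother me are exactly what a sketch like this cannot model:)

```python
import numpy as np
from scipy.signal import resample_poly

def redbook_bottleneck(x: np.ndarray, fs_in: int = 96_000) -> np.ndarray:
    """Downsample to 44.1 kHz, requantize to 16 bits with TPDF dither,
    and upsample back to fs_in. Digital stages only."""
    down = resample_poly(x, 44_100, fs_in)            # band-limit + resample
    rng = np.random.default_rng(0)
    dither = (rng.random(down.size) - rng.random(down.size)) / 2**15
    q = np.round((down + dither) * 2**15) / 2**15     # snap to 16-bit grid
    return resample_poly(q, fs_in, 44_100)            # back up to fs_in

# Illustrative 96 kHz input: a -60 dBFS 1 kHz tone, one second long.
fs = 96_000
t = np.arange(fs) / fs
x = 0.001 * np.sin(2 * np.pi * 1000 * t)
y = redbook_bottleneck(x, fs)
n = min(x.size, y.size)
print("residual RMS:", np.sqrt(np.mean((y[:n] - x[:n]) ** 2)))
```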
In summary, this is in my opinion a watershed event.
Anyway, I agree this is quite a momentous result, Amir, and congratulations for doing the test! You've made my year.

Nick
 
Congrats on completely misreading my meaning, as well as coming up with a novel meaning of the phrase 'weasel words', which I have always understood to mean, 'words that try to have it both ways' or words 'that purposely fail to be clear in their meaning' -- not 'words that aren't supported by what came before'.

The commonplaces in conclusions that I am thinking of are 1) language that admits the possibility of the present conclusions being incomplete, or even wrong, leading to 2) calls for more research to clarify/verify the conclusions presented herein.

While easily satirized or dismissively characterized -- possibly as being 'weasel words' -- both are worthy practices and actually a sign of *good science*.

Then again someone who thinks PEAR was actually science that demonstrated ESP might disagree.

One of several things that strikes me as funny about the 'Meyer & Moran stink' crowd is that before M&M, few were making a fuss about the plain fact that many praised-to-the-skies 'high rez' releases were just rereleases of analog (tape) recordings*. Reviewers would still go ga-ga over the new improved 'high rez' sound -- and attribute it to the format. But when M&M used those (as well as 'pure' high rez) recordings in their tests, suddenly they'd committed an invalidating fail.

(M&M actually allude to this in their paper, putting forth at the very end the proposition that when there is improved sound, it is more likely due to better mastering.)


There seems to be a fundamental difference between the two camps. One camp sees things as a debate, using terms such as "proven" and "burden of proof". The other camp is simply interested in learning how things are. This difference carries over to the question of what it means, or doesn't mean, for something to be "science". As I see it, this distinction makes little sense. There is the scientific method, which is a means of observing and learning about the natural world. This method is most effective when done as a social process, because a community gives progress more velocity and direction. This social activity is often referred to as Science. However, Science does not define truth; that is a function of individuals reaching their own conclusions. As I am interested in ascertaining truth and not in winning debates, I am not going to further discuss matters of semantics, such as what I meant by "weasel words". That would be a complete waste of time, because in doing so there would be additional misunderstood words that would also have to be debated.

If you go back to my comments on PEAR, you will see that I discussed only the statistical evidence behind the PEAR experiments. I did not say I believed in ESP. What I do believe these experiments show is that something unusual was going on that cannot be explained by random chance. It may be a subtle form of experimental error, it may be a new type of physics, or it may be a clever fraud. However, if someone starts with a prior belief that ESP is impossible, then no amount of statistical evidence will convince him otherwise.

As to those "tape recordings". They are what they are. I am interested in hearing them this way. One way I have found that works is to have one of these tapes and play it back, or possibly a second generation copy. This works well, but is very expensive. Another way is to have a high resolution digital copy, such as at 192/24 or DSD. This works well, but is far less expensive. In my experience, I have not found that 44/16 recordings work as well and there is essentially no difference in cost between 44/16 and 192/24 or DSD. (I am talking manufacturing and distribution cost, not pricing which involve marketing and business issues not under discussion in this thread.)
 
Hey Amir, I've been avoiding this for a while because of the noise, but here goes. Not just a positive DBT result, but an ABX DBT, no less. That's exactly what's been lacking all these years, and nobody has ever done it before IIRC (and I'm as guilty as anyone). And ABX tests are very good at confirming positive results - less so at confirming negative results.

The skeptics have been able to hide behind the Meyer & Moran results for a long time, and it's about time somebody overturned them. I think this sort of science has set HiFi back many years, rather like the THRESHOLD FOR DISTORTIONS DUE TO JITTER paper by Ashihara et al did for digital audio jitter. Since 2006, people have been able to boldly state that normal levels of jitter found in audio equipment are quite inaudible, without realising how wrong they were. Just because something looks scientific doesn't mean that it's right.

I don't think Meyer & Moran were nearly as misguided as Ashihara, but there was something very odd about their procedure that many people don't pick up on. IIRC, they didn't simply compare 16-bit vs 24-bit audio; they actually converted analogue audio to digital, did the 16/44.1 limiting in the digital domain, then converted back to analogue. And then said that process was transparent - 16-bit audio is one thing, but I can't believe two conversion stages are really transparent.

Anyway, I agree this is quite a momentous result, Amir, and congratulations for doing the test! You've made my year.

Nick
Hi Nick. First, it is great to hear from you again. I recall our first encounter talking about HDMI jitter.

The points you raise are right on. In private communication with a number of industry heavyweights, what you say is exactly what people were lamenting: how the DIY Meyer and Moran effort has been the only published test of this type, yet has gotten so much press. No other field of science would ever rely on one such test, let alone one run by amateurs lacking many of the good practices we have in audio testing.

And yes, this was not any old test, but an ABX DBT created by none other than the most vocal advocate of such tests, Arny. It is not like I cooked up such a test in my basement. The test was public. A challenge was made that it could not be beaten. But beaten it was.

It was no fluke either. As I noted in my first posts, other positive outcomes came about just the same with completely independent audio samples and scenarios.

Yes, we analyzed and analyzed the reasons. My question is: where was that analysis with respect to Meyer and Moran? We can't duplicate anything they did. We can't measure what their systems did. Nor did they present measurements of their own (making a mockery of anything called "audio science").

Folks who have 7+ years of carrying the M&M flag are naturally unhappy. What happened wasn't supposed to happen. Now, a "man of science" would be taken aback and rethink everything up to this point. Heck, even I was surprised at the outcome: that a 50+ year old person can still hear such small differences. Yet all we get from folks is outrage and anger. What the heck for? Why not celebrate that we have another data point in this discussion? And one that we can solidly examine?

And how do we continue to avoid the unavoidable? We have proven, beyond any doubt whatsoever, that we don't hear the same. Anyone still in doubt should go and run the tests. This is a major advancement in these discussions. No longer can you impose your listening abilities onto another, unless you can prove you have above average discrimination ability.

This is no time to defend the Meyer and Moran test. It really isn't. The more we do that, the more we show we are in denial about the new data. Something we keep accusing the other side of doing: ignoring data.

There is absolutely nothing wrong with a fuzzy view of audio fidelity. I don't understand the need to define and defend absolutes here. It is OK to not win a battle of absolutes here.

Ah, you got me started :). Again, very well put post Nick.
 
Hi

The discussion has taken many forms and twists during its life on WBF, and it remains quite lively. Let me see if I can draw some conclusions:

  • Amir tested a given set of files and found differences blind.
  • Other people also noticed a difference.
  • One of the progenitors of the files that should have produced negatives commented that his files were not entirely genuine (am I mistaken here?), which could thus account for the differences perceived. I must note that, from my understanding, said files were used as proof that there were no audible differences...
  • Some (JKenny being the poster child of this movement) have taken an interesting position: contending that DBTs of the kind most often performed or reported on audio forums are not valid. In their opinion sighted evaluations are more valid than those. In the meantime they seem to agree that amirm's results are correct and valid. If I am wrong on this, please correct me.

The thing to me, and I would not call it conclusive, is that there are perceivable differences between hi-rez and regular Redbook. It doesn't, however, explain the differences we hear with many hi-rez files when the mastering is different and/or superior (a subjective assessment) to the regular "lo-rez" fare. It leaves open the possibility that we could get better fidelity by using hi-rez, especially with the dropping price of storage and the zero difference between hardware prices. Whether this is deemed important by some is not relevant to the discussions, especially in an audiophile setting: we, as a constituency, are willing to pay considerable amounts of money and energy for differences that 99.9999% of the world population would deem insignificant. We share this with any enthusiasts, be they cyclists or philatelists.

I don't need or care to win debates. Learning is a fun endeavor to me, and so far I have learned quite a bit and expect to learn more. Especially if it can further the reproduction of music in my home settings (which so far consist of various DACs, headphones and amps, but the speakers and the rest of the system are coming ...)
 
Hi


• Some (JKenny being the poster child of this movement) have taken an interesting position: contending that DBTs of the kind most often performed or reported on audio forums are not valid. In their opinion sighted evaluations are more valid than those. In the meantime he seems to agree that amirm's results are correct and valid. If I am wrong on this, please correct me.

While the position is interesting, I don't believe that it contains a contradiction. There are two possible results of a DBT: positive or negative. A positive DBT is self-calibrating. It demonstrates that the subject heard a difference, and that implies that the experimental process was sufficiently powerful to detect the difference. A negative DBT is not self-calibrating. It is possible that the experimental process was not sufficiently powerful to detect the difference. The methods needed to provide the needed calibration are likely to be expensive and time-consuming and some people consider that other methods ("just listening") work better in practice.

The main advantage of DBT testing is that positive results provide strong evidence to third parties that a difference was heard. If one is working by oneself and doesn't need to convince anyone else, there may be more effective ways of reaching conclusions than DBTs.
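A worked example of that asymmetry, under assumptions made up purely for illustration (16 trials, a 12-correct pass mark, a listener who genuinely hears the difference 70% of the time):

```python
from math import comb

def binom_tail(n: int, k: int, p: float) -> float:
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k, n + 1))

n, k_pass = 16, 12
print(binom_tail(n, k_pass, 0.5))  # ~0.038: false-positive rate under guessing
print(binom_tail(n, k_pass, 0.7))  # ~0.45: a genuine 70% detector still
                                   # fails more often than not -- which is
                                   # why a null, by itself, calibrates nothing
```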
 
Seeing as I'm being referred to, I should reply & clarify my position.

My personal experience is that I use technical information & know-how, along with measurements & sighted listening, to evaluate whether I'm improving my designs. Generally, I'm not particularly interested in the type of negligible differences that require DBTs to evaluate, as they really don't differentiate my products sufficiently to be of much importance. ABX also doesn't suit my needs, which is to go beyond difference into preferences.

However, small differences can have a cumulative audible effect which is significant (in the audiophile's scheme of things) - taking them out of the category of "I'm not sure if I can hear a difference" into "yep, definitely prefer this". I have many instances of this in my personal experiences & the experiences of others. So this is a bit of a quandary - to ignore such small differences until enough evidence is seen of their cumulative worth? I have to say I have fallen into the mistake of ignoring some of these in the past. I also find that "small" is a relative definition which depends on the level of transparency in your playback system - what was once small can become worthwhile as your system improves - which is something that always has to be borne in mind & an open mind maintained about previous conclusions.

I, like many others, judge my success or failure in following this route by how successful the product is & what others report hearing when using the products. This feedback, yes from users' sighted listening, is of much more value to me & steers me towards improving my products. You would be surprised how much correlation there is in this feedback. I know some will say that this correlation is a result of users having read a review which prompts them towards similar impressions, & I don't discount that this accounts for some of it, but I'm also talking about the correlation in users' comments on other equipment that they are using in comparisons with mine.

So, yes, for my needs I find ABX testing a complete waste of time. These results of Amir's, however, are interesting, as are the other positive ABX results I linked to from some years back.

I've said all along that until both positive & negative controls are used in such DBTs, I consider negative results completely worthless to the wider audience (not just to me) - much like presenting measurements from an instrument that hasn't been calibrated: the measurements can be less than worthless - they can be very misleading. They really do nothing to further our understanding of audio perception or even to provide strong evidence of anything in particular.

Tony is exactly correct - a positive DBT result is self-calibrating - it proves that (in this instance) there is a difference which is audibly identifiable (for whatever reason). Some people can't seem to get their head around this. They seem to suggest that accepting the positive results of a DBT while criticising the negative results (that lack controls) is cherry-picking & therefore hypocritical.

It's unfortunate but true that a null result can be null for many, many reasons that have nothing to do with whether the difference is genuinely audible or not, & the way to deal with this issue is to include positive & negative controls in such tests. So, why is there not a big discussion about such controls? Why is there not a search for ways to make better ABX/DBT tests than what has been foisted on the audio public? Why is the focus instead on tearing down the people, like Amir, who are saying "get your house in order with some real testing & then some real progress in audio can be made"?
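To make the "controls" suggestion concrete, here is a hypothetical sketch of what a controlled session plan might look like -- the trial types and counts are an invented example, not any established protocol:

```python
import random

def build_session(n_real=16, n_pos=4, n_neg=4, seed=None):
    """Hypothetical ABX session with hidden controls.
    'positive' trials use a known-audible difference (say, a 1 dB level
    offset): missing these means the listener/setup is insensitive, so a
    null on the real trials is uninformative.
    'negative' trials present identical stimuli: 'hearing' a difference
    there flags a leaky or biased protocol."""
    trials = ["real"] * n_real + ["positive"] * n_pos + ["negative"] * n_neg
    random.Random(seed).shuffle(trials)
    return trials

print(build_session(seed=1))  # the listener never knows which trial is which
```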
 
(...) There are two possible results of a DBT: positive or negative. A positive DBT is self-calibrating. It demonstrates that the subject heard a difference, and that implies that the experimental process was sufficiently powerful to detect the difference. A negative DBT is not self-calibrating. It is possible that the experimental process was not sufficiently powerful to detect the difference. The methods needed to provide the needed calibration are likely to be expensive and time-consuming and some people consider that other methods ("just listening") work better in practice.

The main advantage of DBT testing is that positive results provide strong evidence to third parties that a difference was heard. If one is working by oneself and doesn't need to convince anyone else, there may be more effective ways of reaching conclusions than DBTs.

Nice to read such a brief and clear statement. I have posted in similar directions several times, but never thought about this analogy with self-calibration.
 
(...)
• Some (JKenny being the poster child of this movement) have taken an interesting position: contending that DBTs of the kind most often performed or reported on audio forums are not valid. In their opinion sighted evaluations are more valid than those. In the meantime they seem to agree that amirm's results are correct and valid. If I am wrong on this, please correct me.
(...)

As I have taken this "interesting position" long ago, I would like to comment. You have to separate Amir's results (which produced a positive) from this movement (which concerns tests that almost always produced negatives).

IMHO the only real support for the sighted tests is the products developed using them and the systems assembled in such conditions. The positive outcomes are the only evidence that these evaluations produced valid results and that these people managed to overcome their intrinsic bias.

Another way of validating sighted tests is using independent listeners and carefully analyzing their opinions - it is in some way what audiophiles do when they report that even their wife, cat and dog found the same characteristics in some equipment.

Oh, and we should not forget the valuable help of the usual friend, a great music lover, so ignorant about audio that he cannot distinguish an amplifier from a microwave oven without his reading glasses. :)
 
Seeing as I'm being referred to, I should reply & clarify my position.

My personal experience is that I use technical information & know-how, along with measurements & sighted listening, to evaluate whether I'm improving my designs. Generally, I'm not particularly interested in the type of negligible differences that require DBTs to evaluate, as they really don't differentiate my products sufficiently to be of much importance. ABX also doesn't suit my needs, which is to go beyond difference into preferences.

However, small differences can have a cumulative audible effect which is significant (in the audiophile's scheme of things) - taking them out of the category of "I'm not sure if I can hear a difference" into "yep, definitely prefer this". I have many instances of this in my personal experiences & the experiences of others. So this is a bit of a quandary - to ignore such small differences until enough evidence is seen of their cumulative worth? I have to say I have fallen into the mistake of ignoring some of these in the past. I also find that "small" is a relative definition which depends on the level of transparency in your playback system - what was once small can become worthwhile as your system improves - which is something that always has to be borne in mind & an open mind maintained about previous conclusions.

I, like many others, judge my success or failure in following this route by how successful the product is & what others report hearing when using the products. This feedback, yes from users' sighted listening, is of much more value to me & steers me towards improving my products. You would be surprised how much correlation there is in this feedback. I know some will say that this correlation is a result of users having read a review which prompts them towards similar impressions, & I don't discount that this accounts for some of it, but I'm also talking about the correlation in users' comments on other equipment that they are using in comparisons with mine.

So, yes, for my needs I find ABX testing a complete waste of time. These results of Amir's, however, are interesting, as are the other positive ABX results I linked to from some years back.

I've said all along that until both positive & negative controls are used in such DBTs, I consider negative results completely worthless to the wider audience (not just to me) - much like presenting measurements from an instrument that hasn't been calibrated: the measurements can be less than worthless - they can be very misleading. They really do nothing to further our understanding of audio perception or even to provide strong evidence of anything in particular.

Tony is exactly correct - a positive DBT result is self-calibrating - it proves that (in this instance) there is a difference which is audibly identifiable (for whatever reason). Some people can't seem to get their head around this. They seem to suggest that accepting the positive results of a DBT while criticising the negative results (that lack controls) is cherry-picking & therefore hypocritical.

It's unfortunate but true that a null result can be null for many, many reasons that have nothing to do with whether the difference is genuinely audible or not, & the way to deal with this issue is to include positive & negative controls in such tests. So, why is there not a big discussion about such controls? Why is there not a search for ways to make better ABX/DBT tests than what has been foisted on the audio public? Why is the focus instead on tearing down the people, like Amir, who are saying "get your house in order with some real testing & then some real progress in audio can be made"?


Well stated, my friend.
 
BTW, I've never been a poster child before - I'm blushing :)
 
Hi Nick. First, it is great to hear from you again. I recall our first encounter talking about HDMI jitter.
There is absolutely nothing wrong with a fuzzy view of audio fidelity. I don't understand the need to define and defend absolutes here. It is OK to not win a battle of absolutes here.
Ah, you got me started :). Again, very well put post Nick.
:) Actually our first encounter was at PJ HiFi in Guildford, Surrey, UK, when you were on the HDDVD World Tour. I'm afraid I was a Blu man even then (very simple decision), but we've had much more in common since.

I remember the thread in question (Advanced Concepts in Audio..?), which talked about what affected digital audio quality apart from jitter in the signal itself, and I was hooked on the question of how player decoding could sound worse than processor decoding. I never got to the bottom of that.

There was a fateful day when I heard processor decoding for the first time. I had wound myself up over a long period of time (ignoring the highly vocal bits-are-bits brigade), and was convinced that it would sound better. But when I finally tried it at home, I couldn't tell the difference, and admitted that to everyone who would listen. It was a big let-down and rather embarrassing - cloth ears and all that. It was an extreme instance of expectation bias.

However, I had goofed, and had slipped myself a blind dummy control without realizing it. I played a Blu-ray of Happy Feet, forgetting that it had an LPCM soundtrack rather than a lossless-compressed one. Therefore there was no processor decoding, so it should indeed have sounded the same. The next day I played The Golden Compass, with a DTS-HD MA soundtrack - which sounded distinctly better with processor decoding. What a bolt of lightning that was. It convinced me that I had a positive result, because I got a negative result when I thought I was listening to processor decoding (but wasn't). I got a lot more confidence in my hearing after that.

Sorry for rambling, but it's funny the little things you remember after all this time, yet I forgot to take my daughter to a school meeting this evening. :(

Nick
 
This says audiophiles had every right to think the new formats produced superior sound.


Post edited to remove personal insults

What it says is that mastering engineers (sometimes) used more 'audiophile' practices for these 'new formats'*. The rest of the paper provides evidence for the idea (which is thoroughly supported by the actual technical capacities of Redbook vs SACD and DVD-A) that those 'audiophile' practices could be used on CD too, with virtually identical perceptual results. Which of course is true - 'audiophile' practices HAVE been used on CDs, to give wide dynamic range and broad bandwidth, sourced from good digital or original master analog recordings. Then along came 'loudness wars' mastering.....

...which, btw, the 'new formats' were NOT immune to -- as demonstrated by several curious people, including *me*.

(In fact, at least one of the discs M&M used in their test, the Steely Dan Two Against Nature all-digital DVD-A, is pretty compressed too... though it sounds marvelous.)

And btw, there's at least one infamous case where the mastering engineer left the SACD layer 'audiophile' but compressed the CD layer -- a bit *dancey* in itself, that move. The album was Dark Side of the Moon... you might have heard of it.
 
Hey Amir, I've been avoiding this for a while because of the noise, but here goes. Not just a positive DBT result, but an ABX DBT, no less. That's exactly what's been lacking all these years, and nobody has ever done it before IIRC (and I'm as guilty as anyone). And ABX tests are very good at confirming positive results - less so at confirming negative results.

The skeptics have been able to hide behind the Meyer & Moran results for a long time, and it's about time somebody overturned them. I think this sort of science has set HiFi back many years, rather like the THRESHOLD FOR DISTORTIONS DUE TO JITTER paper by Ashihara et al did for digital audio jitter. Since 2006, people have been able to boldly state that normal levels of jitter found in audio equipment are quite inaudible, without realising how wrong they were. Just because something looks scientific doesn't mean that it's right.


We're actually still waiting for DBTs that demonstrate when and how 'normal levels of jitter' are audible, contra Ashihara. You got 'em? IIRC Amir's been asked but he won't tell.

(And actually, Ashihara et al. is not the only piece of published artillery brandished in the jitter wars. You're doing the brave fighters an injustice to imply that.)


I don't think Meyer & Moran were nearly as misguided as Ashihara, but there was something very odd about their procedure that many people don't pick up on. IIRC, they didn't simply compare 16-bit vs 24-bit audio; they actually converted analogue audio to digital, did the 16/44.1 limiting in the digital domain, then converted back to analogue. And then said that process was transparent...


Nope. They even documented situations where it was NOT TRANSPARENT. Hell, it's even right there in the *abstract*. Did you read what they wrote?

(It's just astounding how often you guys misrepresent what was actually written by M&M.)


- 16-bit audio is one thing, but I can't believe two conversion stages are really transparent.

Well, belief is a funny thing, innit?
 
As to those "tape recordings". They are what they are. I am interested in hearing them this way. One way I have found that works is to have one of these tapes and play it back, or possibly a second generation copy. This works well, but is very expensive. Another way is to have a high resolution digital copy, such as at 192/24 or DSD. This works well, but is far less expensive. In my experience, I have not found that 44/16 recordings work as well and there is essentially no difference in cost between 44/16 and 192/24 or DSD. (I am talking manufacturing and distribution cost, not pricing which involve marketing and business issues not under discussion in this thread.)

If Redbook mastering 'doesn't work well' in your hands, maybe you're doing it wrong?

Unless your claim is that there have been no CDs that sounded 'well', ever. In that case, well....:eek:

Which is another funny thing about the hi rez cheerleaders. Their claims range from 'CD just sounds awful' to 'some CDs sound extremely good...*very close to* hi rez'. Though that's as close as they'll get to admitting that maybe, just maybe, sometimes, CDs can sound *just as good as* their 'hi rez' counterparts. It's like the vinyl crowd, really.

And then there's that ten-ton gorilla I mentioned in response to Atkinson's post. So far no answer to my query there from any of you....
 
(...)
And then there's that ten-ton gorilla I mentioned in response to Atkinson's post. So far no answer to my query there from any of you....

Yeah but that was no Silverback so no-one could be bothered :D
I jest - just saying, before you respond, because you seem more wound up than usual with this debate.
Anyway I will look back to see what that was about.
BTW, as mentioned, the context is interesting; are you talking about recording and playback at 16-bit, or 24-bit recording downsampled/decimated to a lower sampling rate and 16-bit?
Both technically have different considerations and possible implications; think about why 20-bit is technically recommended as an ideal minimum for music when taking the whole chain and its processes into consideration (meaning studio chain-process-to-consumer).
A lot of discussions on 16-bit vs 24-bit are too generalised; they end up as a generic comparison/argument between 16-bit and 24-bit with overextended conclusions/statements.
Case in point: some of the papers already mentioned, or other arguments focusing primarily on the consumer DAC output end, when the technical aspects actually apply to the whole studio ADC-DAW-chain-process-to-consumer-DAC chain.

I can give two aspects where CD/Redbook does have issues more generally: the challenge and problems of implementing minimum-phase/slow-rolloff filters, and the fact that native 16-bit is not enough when considering studio processes for editing-mixing-mastering, although this is only proven technically, by many papers and acknowledged, credible engineers.
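The arithmetic usually given for that 20-bit recommendation, sketched under standard assumptions (ideal dithered quantizers, independent noise per stage; the stage count below is an invented example):

```python
import math

def snr_db(bits: int) -> float:
    """Ideal quantization SNR for a full-scale sine: 6.02*N + 1.76 dB."""
    return 6.02 * bits + 1.76

def requantize_penalty_db(n_stages: int) -> float:
    """n equal, independent quantization noise sources add in power,
    i.e. 10*log10(n) dB worse than a single stage."""
    return 10 * math.log10(n_stages)

print(snr_db(16))                            # ~98 dB: fine as a delivery format
print(snr_db(20))                            # ~122 dB: headroom for processing
print(snr_db(16) - requantize_penalty_db(4)) # ~92 dB after four 16-bit stages
```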
However, I do need to say there are excellent CD/Redbook recordings out there; constraints and context are pretty important.
It's late, so apologies if the post does not make much sense.
Cheers
Orb
 
