Anyone heard about Meridian's new project called MQA?

Since you have the AES paper, Amir, perhaps you could fill in the details.
Here is a useful chart from the paper:

[Chart from the AES paper: spectral content and noise floor of a sample recording]


Normally, PCM encoding is a rectangle: bit depth in one direction and sampling rate in the other. Meridian's thesis is that such encoding is wasteful, and that by examining the source content we can allocate just enough bits and bandwidth to track the content, as opposed to random noise. You can see an example of that in the chart, and where the 10:1 compression figure comes from.

We see that the spectrum doesn't extend much past 48 kHz, and that the amplitude of the useful data drops exponentially (the graph is on a log scale, so exponentials show up as straight lines). As the amplitude drops, we can reduce the number of bits allocated to the signal, as its dynamic range requires. Close to 48 kHz, for example, the difference between the signal and its noise floor is less than 10 dB; at roughly 6 dB of dynamic range per bit, two or three bits would be sufficient to represent it, and allocating 24 bits would be wasteful.

Of course, different tracks will have different noise floors and peaks, which is why the encoding has to be tailored to the content.
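To make the allocation idea concrete, here is a minimal sketch, assuming a simple rule of roughly 6.02 dB of dynamic range per bit. The band edges and signal-to-noise figures are invented for illustration; this is not MQA's actual algorithm.

    # Minimal sketch of per-band bit allocation (illustrative, not MQA's scheme).
    import math

    def bits_needed(snr_db: float) -> int:
        """Bits needed for a band whose signal sits snr_db above its
        noise floor, at ~6.02 dB of dynamic range per bit."""
        if snr_db <= 0:
            return 0  # band is all noise: allocate nothing
        return math.ceil(snr_db / 6.02)

    # Hypothetical per-band signal-to-noise margins for a track like the chart:
    bands_khz = [(0, 12), (12, 24), (24, 36), (36, 48)]
    snr_db = [120.0, 60.0, 30.0, 9.0]

    for (lo, hi), s in zip(bands_khz, snr_db):
        print(f"{lo:2d}-{hi:2d} kHz: {bits_needed(s):2d} bits (vs. a flat 24)")

Summing allocations like these, instead of spending 24 bits everywhere, is where a large fraction of the claimed saving would come from.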
 

Thanks Amir, that is one of the other things I thought they were doing from reading the info about it. It's good you made that clear, as I wasn't sure.

Would it also be correct to think they don't believe the ultrasonic content is important, other than that the small amount of info recorded there can affect the time-domain performance of the filtering process? Some of their statements seem to imply that. I guess another way to put it, from looking at the block diagrams in the patent, is that they encode the normal 20 kHz bandwidth and record the smaller amount of data that represents the difference between the 20 kHz band and the bandwidth above it. Done correctly, you save data, but can add the difference info back to the 20 kHz info to fully recover the wide-bandwidth result on playback.
 
(...) - no matter how good it is.

Being a curious audiophile, this is the question that keeps my interest: does an MQA-encoded recording really sound better than even 192 kHz hi-res PCM? Yes, I know, curiosity killed the cat ...
 

Seeing as you can't create info that wasn't in the original in some form, I would say the best it could do is match the source it was encoded from. That it may do so at a reduced data rate is an accomplishment.
 
Sounds completely unnecessary. Storage is cheap and getting cheaper all the time. There is no need for this technology, unless you want to make money off of it.
 
I think what they are aiming at with this is streaming, and use cases where you store your music in the cloud.

Your points about cheap storage are quite valid, and one reason I don't think this is likely to be a big splash that lasts when all is said and done.
 

We already have lossless codecs. Unless MQA improves on these by a big margin, I'm not sure why we should bother. Internet speeds are also going up all the time; in another 5 to 10 years, I would be surprised if the average download speed were less than 100 Mbps.
 

Yes, but not lossless codecs that can deliver equivalency with 192/24 at data rates of only about 1 Mbps.

Not that I'm disagreeing with you; I already pointed out that FLAC at 48/24 would provide about the same data rate without needing additional equipment.
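For reference, here is a quick back-of-the-envelope sketch of the rates being compared; the ~60% FLAC compression ratio is a typical figure for music, not a guarantee.

    # Back-of-the-envelope data rates behind the comparison above.
    def pcm_mbps(rate_hz: int, bits: int, channels: int = 2) -> float:
        """Raw stereo PCM rate: sample rate x bit depth x channels."""
        return rate_hz * bits * channels / 1e6

    print(f"192/24 raw:        {pcm_mbps(192_000, 24):.2f} Mbps")       # 9.22
    print(f"48/24 raw:         {pcm_mbps(48_000, 24):.2f} Mbps")        # 2.30
    print(f"48/24 FLAC (~60%): {pcm_mbps(48_000, 24) * 0.6:.2f} Mbps")  # 1.38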
 
Would it also be correct to think they don't believe the ultrasonic content is important, other than that the small amount of info recorded there can affect the time-domain performance of the filtering process?
Very much so. From the paper:

Higher-than-CD data rate doesn't guarantee improved sound quality, but doubling or quadrupling sample rate from 44.1 or 48 kHz can show incremental improvements in exchange for a rapidly increasing file size [77] [32].

It is now widely accepted that one key benefit of higher sample rates isn't conveying spectral information beyond human hearing, but the opportunity to tackle the dispersive properties of brick-wall filtering. Wider transition anti-alias and reconstruction filters directly shorten (proportionately) the impulse response and there is also more opportunity to apodize to remove extended pre- and post-rings [16].
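As an aside, here is a small numerical illustration of that transition-band point, using a generic windowed-sinc FIR design from scipy (nothing to do with MQA's own filters): widening the transition band shortens the impulse response proportionately.

    # Generic FIR design (illustration only): a wider transition band
    # directly shortens the filter's impulse response.
    from scipy.signal import firwin, kaiserord

    fs = 96_000.0      # sample rate, Hz
    ripple_db = 100.0  # stopband attenuation target

    for transition_hz in (1_000.0, 4_000.0, 16_000.0):
        # Kaiser-window design: taps needed for this attenuation and width.
        numtaps, beta = kaiserord(ripple_db, transition_hz / (fs / 2))
        taps = firwin(numtaps, 20_000.0, window=("kaiser", beta), fs=fs)
        print(f"transition {transition_hz / 1000:5.1f} kHz -> {numtaps:4d} taps, "
              f"impulse ~{numtaps / fs * 1e3:5.2f} ms")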


I guess another way to put it, from looking at the block diagrams in the patent, is that they encode the normal 20 kHz bandwidth and record the smaller amount of data that represents the difference between the 20 kHz band and the bandwidth above it. Done correctly, you save data, but can add the difference info back to the 20 kHz info to fully recover the wide-bandwidth result on playback.
This part is not in the AES paper, but your guess seems very much right to me, as the advantage of their coding shows up strongly in the ultrasonic range. And as it turns out, coding errors there are not going to have the consequences that in-band (20 Hz to 20 kHz) errors would.
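To see the shape of that split-and-recombine idea, here is a toy numpy sketch (my illustration, not the patent's actual scheme): the signal is split at a crossover into a baseband and a residual, the residual is the small "difference" payload, and summing the two restores the original exactly.

    # Toy baseband + "difference" coding (illustration, not the patent's scheme).
    import numpy as np

    rng = np.random.default_rng(0)
    fs = 96_000
    t = np.arange(fs) / fs
    # Mostly-baseband test signal: a 1 kHz tone plus a little wideband noise.
    x = np.sin(2 * np.pi * 1000 * t) + 0.01 * rng.standard_normal(fs)

    # Split at 24 kHz with an ideal FFT-domain lowpass (illustration only).
    X = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    low = np.fft.irfft(np.where(freqs <= 24_000, X, 0), n=len(x))

    residual = x - low         # the small "difference" payload to store
    restored = low + residual  # decoder: add the difference back

    print(np.allclose(restored, x))                                # True: exact recovery
    print(f"residual/signal RMS: {residual.std() / x.std():.4f}")  # small payload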
 
Seeing as you can't create info that wasn't in the original in some form, I would say the best it could do is match the source it was encoded from. That it may do so at a reduced data rate is an accomplishment.

The jazz demo reported in TAS was created from analog tapes and, according to Robert Harley, had better treble quality than any existing digital system. It seems it is not just a new digital format; Meridian claims it is better than existing ones. Bob Stuart, the pioneer behind MQA technology, said: "Music lovers need no longer be shortchanged; finally we can all hear exactly what the musicians recorded. MQA gives a clear, accurate, and authentic path from the recording studio all the way to any listening environment—at home, in the car or on the go. And we didn't sacrifice convenience."
 
Being a curious audiophile, this is the question that keeps my interest: does an MQA-encoded recording really sound better than even 192 kHz hi-res PCM? Yes, I know, curiosity killed the cat ...
The AES paper is somewhat vague on this, but it implies that the best capture rate is 384 kHz, and implicit in that, that 192 kHz is a lossy transformation:

To potentiate archives we recommend that modern digital recordings should employ a wideband coding system which places specific emphasis on time and frequency and sampling at no less than 384 kHz.


This is the last paragraph in the paper.

What they are saying is that since they can capture only what is there, one should sample at the highest rate possible, and then use their encoding to discard the noise and unused spectrum.
 
Seeing as you can't create info that wasn't in the original in some form, I would say the best it could do is match the source it was encoded from. That it may do so at a reduced data rate is an accomplishment.

Correct. It accomplishes the same general aim as FLAC: it compresses a high-res input so that it takes up less space when stored or streamed, and expands to the original resolution when decoded on playback. The difference with MQA is that you can run the "compressed" data stream into a DAC and it will play a reduced-resolution version of the input. You can think of it as a bit like a hybrid SACD/CD: two resolutions of the same audio in one carrier.
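Here is a crude sketch of that "two resolutions in one carrier" idea (my own illustration of the general folding concept, not MQA's actual encoding): bury a payload in the least-significant bits of an otherwise ordinary 24-bit PCM stream. A legacy DAC plays the stream as-is, while a decoder can extract the payload and reconstruct the higher resolution.

    # Crude LSB-folding sketch (illustration, not MQA's actual encoding).
    import numpy as np

    def fold(base24: np.ndarray, payload8: np.ndarray) -> np.ndarray:
        """Replace the bottom 8 bits of each 24-bit sample with payload data.
        A legacy DAC just plays the result; the bottom bits look like noise."""
        return (base24 & ~0xFF) | (payload8 & 0xFF)

    def unfold(stream: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
        """Decoder side: split the stream back into base and payload."""
        return stream & ~0xFF, stream & 0xFF

    base = np.array([0x123456, 0x7FAB00], dtype=np.int64)  # 24-bit samples
    payload = np.array([0x9C, 0x01], dtype=np.int64)       # hidden payload bytes

    stream = fold(base, payload)
    got_base, got_payload = unfold(stream)
    print(bool((got_payload == payload).all()))  # True: payload recovered exactly

The cost of any scheme like this is that the buried payload displaces the carrier's own least-significant bits, so the undecoded stream carries a somewhat higher noise floor.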
 
[Image: Shannon information plot of a 192/24 channel, with an inset comparing impulse responses]


Bob Stuart (with permission) said:
The graph in the background is an information plot (Shannon diagram). One thing it shows is that the peak information content of the music is about 1/6 of the 4.6 Mbps single-channel capacity at 192/24.

That fact speaks to efficiency, not to quality improvement. It illustrates that the channel moves much more 'data' than '(relevant) information'. Resolving that conundrum is a matter of lossless compression, which MQA achieves between encoder and decoder.

The headline to emphasise the result is not about efficiency, it's about the system end-to-end (i.e. analogue-through-digital-through-analogue) temporal blur or time-smear.

The inset upper right shows the impulse response of the entire chain (not just a converter), comparing MQA to a high-performance studio ADC/DAC at 192/24 delivering the output of a microphone feed. We can quantify this in a number of ways:
  • Uncertainty of leading edge: MQA = 4 us compared to 250 us
  • Total impulse duration: MQA = 50 us compared to 500 us
  • MQA has no post-ring
  • Perceptual smear (relating to the perceived envelope and loudness): MQA at under 10 us is at least 10x better.
This is a quality improvement in temporal resolution; the headline of 10x is conservative and we hear the result.

And it can be transmitted at a lower data rate, but that's an efficiency gain, in part from the end-to-end nature of the coding and the other innovations.

The problem comes if the graph is taken out of context, without the words I was using at the time.

Bob
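For reference, the capacity figure in that quote checks out: 192 kHz times 24 bits is about 4.6 Mbps for a single channel, and one sixth of that is roughly 0.77 Mbps of peak information content.

    # The capacity figure quoted above, worked out.
    rate_hz, bits = 192_000, 24
    capacity_mbps = rate_hz * bits / 1e6
    print(f"single-channel capacity at 192/24: {capacity_mbps:.3f} Mbps")     # 4.608
    print(f"peak information content (~1/6):   {capacity_mbps / 6:.2f} Mbps")  # 0.77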
 
I have heard these claims before, but if it is as good as JA claims, I am all for it. I know that Mark Waldrep at AIX Records has already given Bob Stuart access to some of his 24/96 recordings for the MQA treatment. While Mark is skeptical that it can make his stuff sound better, he is game to find out. I look forward to hearing more about it and experiencing it for myself.
 
I'm not sure there isn't more to this than a clever new compression technique. Stuart hints at some psychoacoustic principles, by which I hope he means more than just high-res sample rates (although I'm with him on that, anyway).
 
I have heard these claims before, but if it is as good as JA claims, I am all for it. I know that Mark Waldrep at AIX Records has already given Bob Stuart access to some of his 24/96 recordings for the MQA treatment. While Mark is skeptical that it can make his stuff sound better, he is game to find out. I look forward to hearing more about it and experiencing it for myself.

It depends on how you define "make stuff sound better". They aren't promising that it will sound better than the original file; rather, it is supposed to sound better than what has been delivered to the end customer in the past.
 
While many fixate on the encoding model and the debate over just how "lossy" it may or may not be, I believe the breakthrough innovation here is getting buried. It is the ability of an MQA stream plus an MQA-enabled DAC to fully recover timing at high resolution, giving impulse-response behaviour much better than previously available, all the way down to modeling the originating ADC and incorporating that timing information in the stream.

So I would summarize MQA as being TWO things:
  1. An elegant data folding scheme to allow high-rez audio to be transmitted at low bandwidth
  2. Addressing the temporal accuracy of digital audio by including timing data modeling the source and giving the DAC the ability to deliver much improved (10x) impulse response behaviour, now accurate within the thresholds of human detection.


The second one is the big item, and I'd love to hear others' views on that aspect, both technical and perceptual.
 
