Trinity DAC

They are delayed. Please read the patent if interested. Dietmar has supplied the patent #.

Thanks. From what I understand from the patent, the system is basically a linear-interpolation upsampling DAC, but with the linear interpolation being done by summation of multiple phase-shifted copies of the oversampled signal. I am not sure what benefit this method offers over linear interpolation done in the digital domain.
 
Apparently it sounds better ......

Sure - but I am still interested in what the theory behind it is. Simple linear interpolation is simple linear interpolation, independent of whether it is done in the analog or digital domain. Basically you add the samples together and scale down. Adding is a lossless operation - there are no rounding errors or loss of precision possible. I can possibly understand wanting to do the scaling (division/shift) operation in the analog domain, but addition is a totally benign operation.
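To make concrete what I mean, here is a minimal sketch of plain 2x linear-interpolation upsampling done digitally (NumPy assumed, names purely illustrative): the addition is exact integer arithmetic, and the only step where precision is even in question is the final scaling.

```
import numpy as np

def linear_interp_2x(samples):
    """2x upsampling by linear interpolation: each new sample is the
    average of its two neighbours, i.e. an add followed by a scale."""
    x = np.asarray(samples, dtype=np.int64)   # integer headroom for the add
    out = np.empty(2 * len(x) - 1, dtype=np.int64)
    out[0::2] = 2 * x                         # keep everything at a common scale
    out[1::2] = x[:-1] + x[1:]                # lossless integer addition
    return out / 2                            # scaling is the only potentially lossy step

print(linear_interp_2x([0, 4, 8, 4]))         # [0. 2. 4. 6. 8. 6. 4.]
```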
 
Thanks. From what I understand from the patent, the system is basically a linear-interpolation upsampling DAC, but with the linear interpolation being done by summation of multiple phase-shifted copies of the oversampled signal. I am not sure what benefit this method offers over linear interpolation done in the digital domain.
One difference is that analog filtering markets better. It can also be patented. Whether there are other benefits I don't know.
 
There are at least a couple of reasons:

- the digital filter is never transparent to the signal;
- 1704 DACs are not fast enough to receive 44.1 kHz data oversampled 64x (44.1 kHz x 64 = 2.8224 MHz, while the PCM1704 maxes out at 768 kHz; see the quick arithmetic after this list).
- LIANOTEC is less sensitive to jitter.
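A quick back-of-the-envelope check of the second point (illustrative Python, figures from the post above):

```
base_rate = 44_100              # Hz, CD sample rate
virtual_rate = base_rate * 64   # the 64x "virtual" rate of the interleaved array
pcm1704_max = 768_000           # Hz, input rate limit of a single PCM1704

print(virtual_rate)                   # 2_822_400 Hz, i.e. ~2.82 MHz
print(virtual_rate / pcm1704_max)     # ~3.7x beyond what one PCM1704 accepts
```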
 
the digital filter is never transparent to the signal

Why wouldn't a simple linear interpolation be transparent?

1704 DACs are not fast enough to receive 44.1 kHz data oversampled 64x (and this is the virtual sample rate of the LIANOTEC)

Valid point - but that doesn't make analog summation better - it just allows you to use an older DAC chip.

LIANOTEC is less sensitive to jitter.

Why would that be? How does analog summation reduce jitter?
 
Analog summation does not reduce jitter (as far as I know). But digital oversampling increases its effect. Oversample the signal 8x, and suddenly the DAC becomes 8x more sensitive to jitter. This is the reason many NOS DACs (like the Audio Note, for example) can have high jitter levels and still sound pretty good.

PCM 1704 is regarded by many DAC designers as the best sounding DAC currently available. If you settle on this DAC for sound quality reasons, then the Trinity solution is the only workaround that gives you the equivalent of sample rates higher than the DAC would normally accept (768 kHz limit).
 
Analog summation does not reduce jitter (as far as I know). But digital oversampling increases its effect. Oversample the signal 8x, and suddenly the DAC becomes 8x more sensitive to jitter. This is the reason many NOS DACs (like the Audio Note, for example) can have high jitter levels and still sound pretty good.

Interesting.
 
Don't most DACs upsample to help with jitter? I know you can turn the feature off on most. And with my MSB stack I keep it off.

Al
 
Don't most DACs upsample to help with jitter? I know you can turn the feature off on most. And with my MSB stack I keep it off.

Al

Most DACs do upsample, but it isn't to help with jitter. Among other things, it is done to make digital filtering easier and to allow the use of very simple analog filtering. Upsampling makes the effects of jitter worse in proportion to the upsampling ratio: upsample 8x, and jitter is 8x the problem, in simple terms.

The Trinity solution would step around that somewhat. The delayed-multiple-DAC method of interpolating means each individual DAC runs without upsampling and is therefore no more sensitive to jitter than a single NOS DAC. You get the effect of 8x sample rates, and with higher-sample-rate material you can get away with no filtering of the output at all. An elegant way to accomplish that, though at the considerable expense of multiple DACs and the extra hardware needed to keep their timing working properly.
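To show what I mean by the delayed-multiple-DAC method, here is a toy numeric sketch (my own reading of the idea, with made-up names; not Trinity's actual circuit): sum N zero-order-hold copies of the same base-rate data, each delayed by 1/N of a sample period, and the combined output ramps linearly between samples, just as if the signal had been linearly interpolated at N times the rate.

```
import numpy as np

def interleaved_zoh(samples, n_dacs=4):
    """Toy model: n_dacs 'DACs' each hold the same base-rate samples,
    with DAC k delayed by k/n_dacs of a sample period; their outputs
    are summed. Time is quantised to n_dacs fine steps per sample."""
    x = np.asarray(samples, dtype=float)
    hold = np.repeat(x, n_dacs)            # zero-order hold on the fine time grid
    total = np.zeros_like(hold)
    for k in range(n_dacs):
        total += np.roll(hold, k)          # DAC k switches k fine steps later
                                           # (np.roll wraps at the edges; fine for this toy)
    return total / n_dacs                  # scale back; interior points ramp linearly

# 0 -> 8 -> 0 at the base rate becomes a linear ramp up and back down:
print(interleaved_zoh([0, 8, 0], n_dacs=4))   # [0. 0. 0. 0. 2. 4. 6. 8. 6. 4. 2. 0.]
```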
 
Apparently it sounds better ......

Edorr,

Your simple answer is probably the only possible one in this debate. The sound of a DAC is made by the sum of many factors, most of them interacting with each other, and the main choices reflect the personal views and sound preferences of the designer. IMHO, if the reported good sound of the Trinity DAC were mainly due to the technical aspects we are debating, anyone with a data analyzer and a few DSPs could reproduce its technical performance in a short time without infringing the patents. But he would probably find that his copy sounds nasty ... :).

One detail, for example: what is being loosely called "analog summation" here is an old technique properly called "interleaving". Interleaving has long been used with DACs for signal synthesis and is really a very complex subject - see http://www.evaluationengineering.com/articles/200912/dac-interleaving-in-ultra-high-speed-arbs.php. Considering Dietmar's known curriculum vitae, I have no doubt that he is an expert in these matters, and that many choices in his DAC were determined by successfully dealing with the intrinsic technicalities of this mode of operation.

And I will become silent again on this thread, reading listening reports as they become available, until I listen to the Trinity DAC myself ...
 
Analog summation does not reduce jitter (as far as I know). But digital oversampling increases it. Oversample the signal 8x, and suddenly the DAC becomes 8x more sensitive to jitter...

I'm not so sure your point about oversampling and jitter with respect to the two approaches is correct. It seems to me that the increased integrated jitter power due to oversampling will apply regardless of whether the oversampling is performed digitally or via time-interleaved analog summation. The window for jitter entry is every data conversion instant. There are the same total number of such data conversion instants, per unit of time, in analog oversampling as in digital. So, while each individual D/A converter unit within an interleaved analog oversampling system does not convert as frequently as the single D/A unit in a conventional oversampling system, the total number of conversion instants themselves (the moments when jitter may enter) is the same.
 
I'm not so sure your point about oversampling and jitter with respect to the two approaches is correct. It seems to me that the increased integrated jitter power due to oversampling will apply regardless of whether the oversampling is performed digitally or via time-interleaved analog summation. The window for jitter entry is every data conversion instant. There are the same total number of such data conversion instants, per unit of time, in analog oversampling as in digital. So, while each individual D/A converter unit within an interleaved analog oversampling system does not convert as frequently as the single D/A unit in a conventional oversampling system, the total number of conversion instants themselves (the moments when jitter may enter) is the same.

Let us say a given clock is used, with some given level of jitter. With, say, an 8x oversampling DAC, a given departure from perfect timing is 8 times the percentage of a sample period. For instance (rounding off), the sample period at 48 kHz is 20 microseconds. A clock edge that is 1 nanosecond early is a smaller portion of the 20 microsecond sample period at 48 kHz than of the 2.5 microsecond sample period at the 8x-oversampled 384 kHz. So the effect on the reconstructed waveform is less at lower sample rates. If you have 8 DACs running at 48 kHz, they will be affected less by mis-timing than one running at the higher rate. Even though each DAC is interleaved with delayed sampling running off the same clock, each would see the lower level of jittered-clock effect.
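Putting those numbers side by side (illustrative only, using exact periods rather than the rounded ones above):

```
jitter = 1e-9                      # a 1 ns early clock edge
period_48k  = 1 / 48_000           # ~20.8 us sample period at 48 kHz
period_384k = 1 / (48_000 * 8)     # ~2.6 us sample period at 384 kHz (8x)

print(jitter / period_48k)         # ~4.8e-05 of a sample period at 48 kHz
print(jitter / period_384k)        # ~3.8e-04, an 8x larger fraction at 384 kHz
```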

But the delays between such DACs will have to be handled in a way that does not cause additional timing problems. I don't know how Trinity does that, and that will make all the difference.
 
... But the delays between such DACs will have to be handled in a way that does not cause additional timing problems. I don't know how Trinity does that, and that will make all the difference.

Ideally, the 7 clock times for the delayed DACs should be equally spaced between one "master" clock time and the next. Because of jitter, you don't know how long it will be until the next master clock. So you have to delay the conversion process until you do know. Easily solved, but as you say, the devil is in the details.
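Something like this, purely as a hypothetical illustration of "delay until you do know" (my own sketch, not how Trinity actually does it): measure the master period that just ended and use it to space the delayed conversions across the following one.

```
def sub_clock_times(master_edges, n_dacs=8):
    """For each measured master-clock period, schedule n_dacs equally
    spaced conversion instants in the period that follows it. Using the
    *previous* period is what makes this workable despite jitter, at the
    cost of the whole array running one sample period late."""
    schedule = []
    for prev, cur in zip(master_edges, master_edges[1:]):
        step = (cur - prev) / n_dacs           # measured period, jitter included
        schedule.append([cur + k * step for k in range(n_dacs)])
    return schedule

# a slightly jittery 48 kHz clock, times in microseconds
edges = [0.0, 20.8, 41.7, 62.5]
for row in sub_clock_times(edges, n_dacs=4):
    print([round(t, 2) for t in row])
```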
 
...If you have 8 DACs running at 48 kHz, they will be affected less by mis-timing than one running at the higher rate. Even though each DAC is interleaved with delayed sampling running off the same clock, each would see the lower level of jittered-clock effect.

Hi, Esldude,

I agree with your comment right up until the quoted part above. That part seems to me to be a bit of a leap, not quite following from the part before it. Yes, I agree that each individual D/A unit would not produce any more jitter than the same D/A unit operated non-oversampled (at the native sample rate). Indeed, each D/A unit in analog oversampling is operated at the native sample rate. However, the summed array of them will produce the same number of total conversion instants, and hence the same amount of integrated jitter power.

Using your example of a clock with 1 ns of jitter driving a single 48 kHz D/A unit oversampling at an 8x rate, there are, as you said, 8 conversion instants every 20 us where that 1 ns of clock jitter can manifest at the analog output. If, instead, there were 8 D/A units, each clocked at only 48 kHz, there would be only one 1 ns jitter instant per 20 us from each unit, but the total number of 1 ns jitter instants per 20 us is still 8 - the same as for a single D/A unit operating at 8x.

One other generic point about oversampling and jitter increase probably bears mentioning. While the opportunities for jitter entry increase by a factor equal to the oversampling ratio, the integrated power of that jitter will also be a function of the analog output step size. So the integrated power from a greater number of conversion instants is mitigated by lower jitter power at each conversion instant. A properly designed oversampled system will produce smaller step sizes than those supported by the native data being oversampled.
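A rough numeric sketch of the counting argument (my own toy model, illustrative only; it treats each fine-rate slot as belonging to one of the interleaved units and ignores reconstruction filtering and step-size effects):

```
import numpy as np

rng = np.random.default_rng(0)
fs, osr, n = 48_000, 8, 1 << 14               # base rate, oversampling ratio, base samples
t = np.arange(n * osr) / (fs * osr)           # ideal conversion instants at the 8x rate
sigma = 1e-9                                  # 1 ns rms clock jitter in both cases
tone = lambda x: np.sin(2 * np.pi * 10_000 * x)   # 10 kHz test tone

# Case A: one D/A unit converting at 8x -- every fine-rate instant is jittered.
err_single = tone(t + rng.normal(0, sigma, t.size)) - tone(t)

# Case B: 8 D/A units at the base rate, interleaved -- each unit converts once
# per base period, but the array still yields one jittered instant per fine slot.
err_array = np.zeros_like(t)
for k in range(osr):
    idx = np.arange(k, t.size, osr)           # fine-rate slots owned by unit k
    err_array[idx] = tone(t[idx] + rng.normal(0, sigma, idx.size)) - tone(t[idx])

rms = lambda e: np.sqrt(np.mean(e ** 2))
print(rms(err_single), rms(err_array))        # comparable rms error: the number of
                                              # jittered conversion instants per second is the same
```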
 
And where do you get source material with more than 96 dB dynamic range?

I just started a thread in "General Audio Discussions" on the question of whether CD has sufficient dynamic range.
 
