Here are some interesting FAQs on the Merging NADAC and HAPI:
https://confluence.merging.com/plugins/servlet/mobile#content/view/9338933
A much cheaper option, though without "best of the best" parts, is the miniDSP 4x10 Hd, which can handle 3-way actives plus subs.
Digital and analog in; digital out on 8 channels (L/R = 2 channels), so it has both an ADC and a DAC.
Crossover, limiting, delay, 12 channels of PEQ, 24/96 operation, a SHARC DSP, and infinite mapping per channel,
all for $599... Add a raft of power amps and you are done.
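The PEQ bands on a box like this are standard second-order ("biquad") sections. As a rough illustration of what each of those 12 PEQ channels computes, here is a sketch of the well-known RBJ Audio EQ Cookbook peaking-filter coefficients (the function names are mine, not miniDSP's, and this says nothing about how the 4x10 Hd implements its own EQ internally):

```python
import math

def peaking_eq(fs_hz, f0_hz, gain_db, q):
    # RBJ cookbook peaking-EQ biquad, normalized so a0 == 1.
    # Returns (b, a) coefficient lists for a direct-form filter.
    big_a = 10 ** (gain_db / 40)
    w0 = 2 * math.pi * f0_hz / fs_hz
    alpha = math.sin(w0) / (2 * q)
    b0 = 1 + alpha * big_a
    b1 = -2 * math.cos(w0)
    b2 = 1 - alpha * big_a
    a0 = 1 + alpha / big_a
    a1 = -2 * math.cos(w0)
    a2 = 1 - alpha / big_a
    return [b0 / a0, b1 / a0, b2 / a0], [1.0, a1 / a0, a2 / a0]

def magnitude_db(b, a, fs_hz, f_hz):
    # Evaluate |H(e^{-jw})| of the biquad at frequency f, in dB.
    w = 2 * math.pi * f_hz / fs_hz
    z = complex(math.cos(w), -math.sin(w))
    num = b[0] + b[1] * z + b[2] * z * z
    den = a[0] + a[1] * z + a[2] * z * z
    return 20 * math.log10(abs(num / den))
```

A +6 dB band at 1 kHz, Q = 2, measures +6 dB at its center and back near 0 dB well away from it, which is exactly the behaviour a parametric EQ band should have.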
I have done that: digital in to the SHARC boxen (OpenDRC-DI and my DDRC-22) from a Squeezebox Touch into my Devialet's digital in, and, to compare, optical out of my Squeezebox straight into the amp...
It sounded more or less the same to me when the box was in bypass... what was I supposed to hear?
I was talking specifically about the DSP, where it matters not one jot if the calculations are done on a PC or dedicated DSP box (or boxes) - or an iPhone. However, the capacity for FLOPS is an issue. The more FLOPS the unit can perform, the larger the filters you can use, which has some implications for accuracy of response.
It is relevant if it involves putting extra sound-degrading components in the signal chain. Also, part of the hardware is the DACs, A/D, I/V stages, power supplies, low-phase-noise clocks, board layouts, etc.
A lot of people say there's a haze and a genericness to the sound. But maybe you can't notice it because you're comparing it to the optical out from the Squeezebox.
I was talking specifically about the DSP, where it matters not one jot if the calculations are done on a PC or dedicated DSP box (or boxes) - or an iPhone. However, the capacity for FLOPS is an issue. The more FLOPS the unit can perform, the larger the filters you can use, which has some implications for accuracy of response.
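To put rough numbers on the FLOPS-versus-filter-size point: a direct-form FIR costs one multiply and one add per tap, per sample, per channel, so the required throughput grows linearly with tap count. A hypothetical back-of-envelope helper (my own, purely illustrative; real convolution engines use partitioned FFT methods that cost far less):

```python
def fir_gflops(taps, fs_hz, channels):
    # Direct-form (naive) FIR convolution cost:
    # ~2 operations (multiply + accumulate) per tap,
    # per sample, per channel, expressed in GFLOPS.
    return 2.0 * taps * fs_hz * channels / 1e9
```

For example, six channels of 65,536-tap filtering at 96 kHz comes to about 75 GFLOPS done naively, which is exactly why practical convolvers switch to FFT-based overlap-add or overlap-save processing rather than brute force.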
I have compared optical to S/PDIF out on the SB Touch and they sound identical to me.
I will try it another way: cue up two Squeezeboxes (I have a few), sync them, play one through the DSP box and the other straight into the amp; both would use S/PDIF...
I don't think this anymore. I totally agree that to fully realize the ultimate potential for a digitally active system, one needs a speaker designed from the ground up to be digitally active. The reason for this is the measurement process. IMO, the best measurements can only be taken in very large quasi-anechoic spaces or real anechoic spaces.
An excellent example would be a speaker like the JBL M2.
But the problem is finding a dedicated DSP box solution that has the same grade of components as a high-end standalone DAC such as the NADAC. I don't know of any DSP chips with power like Intel processors. Also, going by Moore's law, Intel chip power grows much faster than that of the SHARC chips. I've talked to a lot of pros about this. I'm going to be using a real-time operating system running on 12-core Xeon processors to handle my front end. SHARC chips might be at this level by 2025 if we are lucky.
I run my system on Linux and a small, fanless Intel-based PC. My speakers are three-way, and I find 32-bit floating-point maths and huge filters are comfortably handled (an unfortunate consequence of the large filters is latency, about a second). I haven't tried 64-bit maths, but I'm fairly sure I could convert the code if I thought it would make much difference. However, I think I'd rather have a slightly cooler PC.
It all depends on what kind of algorithms you are processing. 64-bit floating point and this kind of horsepower are required for upsampling 8 channels to DSD256 and applying the FIR and IIR filtering for Xovers and room correction. 4 cores are going to run the real-time OS, and each channel will have its own dedicated core for the DSP. And latency is around 1 ms with the real-time OS.
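For scale, here are the sample rates involved in DSD256 (just the arithmetic; the sigma-delta modulator design itself is the genuinely hard part, and nothing here speaks to that):

```python
def dsd_sample_rate_hz(multiple=256, base_hz=44100):
    # DSD rates are multiples of 44.1 kHz: DSD64 = 2.8224 MHz,
    # DSD256 = 11.2896 MHz, at 1 bit per sample per channel.
    return base_hz * multiple

def upsample_ratio(pcm_rate_hz, multiple=256):
    # How many 1-bit DSD samples are produced per input PCM sample.
    return dsd_sample_rate_hz(multiple) / pcm_rate_hz
```

From 96 kHz PCM to DSD256 that is a ratio of 117.6 per channel; eight channels means eight modulators each running at 11.2896 MHz, which is where the horsepower goes.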
(...) The best speaker I have heard (Lotus Granada) was active but used pretty basic DSP (parametric EQ, delays, etc) and A/D, D/A converters. I guess the message is that the digital side is much lower priority than the electro-acoustical speaker engineering part.
I'm going to be using a real time operating system running on 12 core Xeon processors to handle my front end.
4 cores are going to run the real time OS, and each channel will have its own dedicated core for the DSP. And latency is around 1ms with the real time OS.
Is it worth upsampling from PCM to DSD? If DSD has any merit (I'm not convinced!) has it not been lost as soon as you convert (and process) it as PCM? Starting with PCM as a source, a DSD-ised version won't be true 'DSD'.
The latency of the raw hardware may be 1ms, but if you are going to be using FIR linear phase filters or similar to flatten the phase (this, I would suggest, is the outstanding benefit of this technology), you will need to allow latency which scales with the size of the filters you are using. I don't find it a problem at all, but if I were using my system with video I would need to delay the video to match.
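The group-delay arithmetic here is simple: a symmetric (linear-phase) FIR delays the signal by half its length in samples, no matter how fast the hardware is. A small illustrative helper (my own naming):

```python
def linear_phase_latency_ms(taps, fs_hz):
    # A symmetric FIR has a constant group delay of
    # (taps - 1) / 2 samples, independent of CPU speed.
    return (taps - 1) / 2 / fs_hz * 1000.0
```

At 96 kHz, a 192,001-tap filter gives exactly one second of delay, consistent with the roughly one-second figure mentioned earlier in the thread; shrinking the filter is the only way to shrink this latency.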
That's absurd. 1 ms latency? And since when does a real-time OS require 1/3 of the available CPU capacity?
In my experience, you get better performance when you let the OS manage its resources dynamically rather than trying to out-guess it and manually assign specific processes to specific cores.
The JBL's Achilles' heel is that it is ported, a defect that cannot be corrected even with DSP.
If that's the only defect, I'd be a happy camper. I would never run them full range anyway.
Are you referring to the speakers using the ultra-expensive artisanally built Feastrex drivers? A good friend of mine dreams about building a speaker around these drivers!