Is digital audio intrinsically "broken"?

I don't think the discussion of an architecture needs to have a proof point that way, Tim. The example I used in my Widescreen Review article on jitter that just came out gives an analogy. If you take a 1080p display and sit far enough away, it will look the same as a 720p display. I don't think anyone then advocates that we should ignore the difference in pixels between the two displays (2:1). Nor can we say with certainty that the difference does not matter.

As for the sound of jitter: it has no sound of its own. It is a data-dependent distortion. What sound it creates is jitter + source, which makes it highly non-linear and challenging to spot due to extreme variations in character. Further, in digital systems jitter can cause aliasing, because the requirement that the system maintain a fixed bandwidth can be violated by jitter distortions. For an eye-opening demonstration of this, go and listen to the clips in this thread: http://www.whatsbestforum.com/showthread.php?3808-The-sound-of-Jitter. There, we have a source frequency > 20 kHz and a jitter frequency > 20 kHz. The jitter level is at the very low levels that occur in current equipment. I am pretty sure you will hear jitter there as an audible tone, even though none of the contributing frequencies were audible!

This is why, when people ask me what jitter sounds like, I tell them that question has no answer. Jitter is one of the least quantifiable things as far as what it sounds like.

Going back to what started this thread: because digital audio is so broken, with no way to measure fidelity against the music heard in production, we have no easy way of pointing out the contributions it is making. Hence your comment.
 
(...) I have only seen one system do as you say and it was a DAC. It assumed an audio-only solution and had a detection mechanism to guess the clock speed of the source, then used one of a number of frequencies. It seemed like a clever kludge but again, it would break if fed anything that was not a pure audio source.

Amir,
Are you referring to the Mark Levinson 30.5? :) During many years it was one of my never fulfilled dreams - owning the ML 31.5- ML30.5 combo ...

"Madrigal engineers have developed a proprietary buffer management scheme
which reduces reproduced jitter to less than 20 picoseconds while maintaining
the synchronization of sound and picture in movies. It employs a buffer large
enough to absorb the jitter found in transports of reasonable quality, yet small
enough to have imperceptible delay. The rate at which data is released from the
FIFO buffer is controlled by software to track the long-term data rate of the incoming
signal, allowing the buffer to absorb all the short-term variations which
cause sonic degradation. This approach yields a “smart” FIFO buffering scheme
which rejects virtually all incoming jitter without requiring an enormous buffer
and the consequent audible delay. It also avoids the sonic penalties associated
with the usual strategies used when a buffer overflows or empties.
The “smart” FIFO operates at both 44.1 kHz and 48 kHz sampling rates. The
Nº30.5 reverts to non-FIFO (recovered clock) operation for 32 kHz sampling rates
(a proposed but rarely used standard for digital satellite transmission). It also
reverts to the recovered clock when the long-term data rate from the transport is
extremely inaccurate. (Sorry—the digital output of your CD portable will not
sound as good as a fine CD transport such as the Mark Levinson Nº31.)"
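Out of curiosity I tried sketching the control idea from that quote in code. This is purely my own toy model for illustration (the buffer depth, gain and period values are invented; it is certainly not Madrigal's actual algorithm): the output pace is nudged very slowly toward the long-term input rate, so short-term arrival jitter only moves the buffer fill, not the output clock.

```python
from collections import deque

class SmartFIFO:
    """Toy model of a jitter-absorbing FIFO with a rate-tracking output."""

    def __init__(self, nominal_period_us=22.68, gain=1e-4):
        self.buf = deque()
        self.period = nominal_period_us   # current output sample period
        self.target_fill = 512            # desired buffer depth, in samples
        self.gain = gain                  # tiny gain = heavy jitter filtering

    def push(self, sample):
        """Called at the (jittery) incoming transport rate."""
        self.buf.append(sample)

    def pop(self):
        """Called by the local output clock; also trims the clock rate."""
        # If the buffer is slowly filling, the source is faster than us:
        # shorten the output period a little. If draining, lengthen it.
        error = len(self.buf) - self.target_fill
        self.period -= self.gain * error
        return self.buf.popleft() if self.buf else 0   # underrun: silence
```

With a small enough gain, the output period only follows the long-term average input rate, which is exactly the claim in the quoted text.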
 
Amir,
Are you referring to the Mark Levinson 30.5? :) During many years it was one of my never fulfilled dreams - owning the ML 31.5- ML30.5 combo ...
No, I was thinking of the Naim DAC, which got a rather poor performance rating otherwise. Shame on me for forgetting the ML, as my Nº36S is supposed to use the same scheme! :)
 
> OK, answering the original question in the title of the thread, yes, digital audio is broken!!! Who knows why?


OK, here it is.

In standard consumer systems, the source is the master. This means that it determines the pace at which the DAC consumes the data. Worse yet, the data does not arrive with a timestamp saying "play this then"; rather, the rate of delivery of the digital samples determines that. The DAC then has to sample that analog timing and track its rate. This results in increased jitter.

So we introduce jitter into the system when we didn't need to. We call this type of architecture a "push" system, where the source is pushing data out. The other model is a pull system, where the target (or the "sink") asks for data at the rate it wants to play. Had we done that, jitter upstream of the target would not have mattered.
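To make the push/pull distinction concrete, here is a deliberately idealized sketch in Python (my own toy model, not any real product; a real push-mode DAC recovers a smoothed clock with a PLL rather than converting on arrival, but the asymmetry is the same):

```python
import random

random.seed(0)
N, period = 1000, 1.0 / 48_000       # nominal sample period at 48 kHz

# "Push": the source's (jittery) clock decides when each sample arrives.
arrivals = [n * period + random.gauss(0, 2e-9) for n in range(N)]

# A naive push-mode sink converts when data arrives, so its conversion
# instants inherit the source timing, jitter and all.
push_out = arrivals

# A pull-mode sink buffers the data and converts on its own local clock;
# arrival jitter only changes buffer occupancy, never conversion timing.
pull_out = [n * period for n in range(N)]

def pk_jitter(times):
    deltas = [b - a for a, b in zip(times, times[1:])]
    return max(deltas) - min(deltas)

print(pk_jitter(push_out))   # nanoseconds of jitter reach the output
print(pk_jitter(pull_out))   # ~0 (float rounding only): local clock rules
```

The point of the sketch is architectural: in the pull model, the only thing upstream jitter can affect is how full the buffer is.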

This is probably an artifact of an older architecture designed for cost: 30 years ago memory was expensive, and the pull method means reading and storing data until you need it. The amount that needs to be stored can be potentially large if one accommodates a situation where the CD drive would need to seek and read data from a slow optical mechanism. Today this kind of memory costs next to nothing, but we continue to build devices the old-fashioned way.

Now, there are ways around this, and computer-based systems can approximate it (e.g. using asynchronous USB bridges) or outright implement it in the form of a dedicated player. PS Audio type devices are an example.

Alas, HDMI follows the same path. In its default mode it runs using the same push method. It does have a provision for the target to set the data rate, but it is rarely implemented, and even there one doesn't know how well it works.

Amir,

I found your posts in this thread to be way over generalized. You mention async to SPDIF adapters but do not admit the existence of async mode USB DACs, which address the "push" method issues you mention. Your references to "it" in the post I quoted and others are not clear. You seem to have lumped together all methods of transferring digital data, and you didn't make a distinction between an ADC application and a DAC application.

SPDIF is indeed an ancient interface definition. Its flaws tell you a lot about how sterile the high-end audio industry has been for decades. However, since SPDIF is here, we might ask a couple of questions:

- How audible are the flaws with a certain kind and a certain level of jitter?
- What level of jitter-related artifacts do we see in the analog output of a transport/DAC combination?

(We might ask similar questions of other interfaces as well.) I see a lot of chatter about the shortcomings of SPDIF, but it is usually in the context of the need to spend thousands of dollars to get better sound.

One more point about SPDIF. When you are using SPDIF to transfer digital data from an ADC device to a storage device, the clock is in the right place.

Async mode USB DACs are now available from ~$150 on up. With async mode transfers, the clock is in the right place for ADC or DAC applications. Async mode is an elegant solution for USB transfers and it need not add much to the manufacturing cost of an ADC or DAC device. There was an async mode pro-audio DAC on the market before Gordon Rankin brought his implementation of async mode firmware to the market. It is no big deal if you are writing the driver as well as the firmware.
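For readers curious how async mode avoids the push problem: the DAC's own clock sets the pace, and the device periodically tells the host how many samples to send per frame so its buffer neither fills nor drains. Here is a rough model of the full-speed feedback value (my reading of the USB audio class convention, commonly described as 10.14 fixed point; treat the format details as an assumption to be checked against the spec):

```python
# Toy model of USB async-mode rate feedback (full-speed, 1 ms frames).
# The DAC measures samples consumed per frame by its own local clock and
# reports it as a fixed-point number; the host adjusts its send rate.
def feedback_value(sample_rate_hz):
    samples_per_frame = sample_rate_hz / 1000.0   # one 1 ms USB frame
    return round(samples_per_frame * (1 << 14))   # 10.14 fixed point

print(feedback_value(44_100))   # 44.1 samples/frame, scaled by 2**14
```

The design point is that timing information flows from the DAC back to the host, the opposite of S/PDIF.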

Async mode is also applicable to Firewire DACs.

PCI and PCIe soundcards can also have a clock local to the ADC/DAC chip that controls data flow. The issues with such soundcards are the possible effects of EMI inside the computer case, the soundcard's ability to get clean power to the ADC/DAC chip and the quality of implementation of the soundcard's circuitry.

An LAN (ethernet or WiFi) connected ADC or DAC can also have a purely local clock. However, since such a device almost always has a microprocessor and memory handling the layers of protocol, the question of how data is transferred from memory to the DAC is still relevant. That is the same issue faced by PCI and PCIe soundcard ADC/DAC devices.

You mentioned HDMI as an interface for transferring digital audio. The fact that it wasn't designed for low jitter says a lot about the importance of audio in home theater these days.

---
If your comments about digital audio were specific to a video + audio application, you have not said so in this thread.

---
In an earlier post, you talked about video having standards while audio didn't. The SPDIF standard clearly should have been tightened a long time ago. Both USB and Firewire have pretty functional interface definitions.

Your mention of standards made me think of calibration and of a huge difference between video and audio. Sending photons to your eyes doesn't have the issues of in-room differences that speakers and the room add to audio playback. The important calibration for audio playback is tuning the speaker/room combination. Digital audio fits much better with that calibration than does a purely analog playback path.

Bill
 
I found your posts in this thread to be way over generalized.
I intended them that way. This is a lighthearted thread and I am not trying to turn it into a master's thesis :).

You mention async to SPDIF adapters but do not admit the existence of async mode USB DACs, which address the "push" method issues you mention.
In the interest of brevity, I did not try to be all-inclusive. That's true. What I was describing was the general architecture of our systems as designed some 30 years ago and as followed by 99% of the world today.

Your references to "it" in the post I quoted and others are not clear. You seem to have lumped together all methods of transferring digital data and you didn't make a distinction between an ADC application and a DAC application.
I am not following you. As consumers, we don't deal with an ADC when we play music. That is the implied focus of what we discuss in this forum unless someone specifically asks about recording music.

- How audible are the flaws with a certain kind and a certain level of jitter?
Tools to generate jitter on demand, so as to hear its effect in isolation, are hard to come by. Indirect ways to hear it exist, though. For example, I used to have a pro sound card with multiple outputs, all active at once. I would hook them up to my DAC, which likewise had multiple input types, and switch between them as music played. It was very surprising to hear the difference the first time. All digital interfaces, all push mode, but the nature of the sound varied.

Lately, I have had a better tool in the form of the Audiophilleo which has a switch selectable clock that changes between low and high jitter. The difference again is surprising. I encourage people to experiment with these methods and judge for themselves.

- What level of jitter-related artifacts do we see in the analog output of a transport/DAC combination?
It is all over the place. Measurements exist on the Stereophile web site and, if you register, on Paul Miller's. Distortion products can be as high as -75 dB or as low as -105 dB due to jitter. HDMI is routinely close to the former. Well designed, and sometimes pretty cheap, devices are in the latter.

One has to be careful of devices that use asynchronous sample rate conversion (not to be confused with the above USB devices). These DACs are totally immune to clock jitter in the classic sense, but may have resampling artifacts that are not captured by the above measurements.
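To see why, here is a deliberately crude sample rate converter, just linear interpolation onto the output clock's grid, so the resampling error is easy to expose (my illustration only; real ASRC chips use long polyphase filters and do enormously better than this):

```python
import numpy as np

fs_in, fs_out, f0 = 44_100, 48_000, 1_000    # input/output rates, test tone
t_in = np.arange(4410) / fs_in               # 100 ms of input samples
x = np.sin(2 * np.pi * f0 * t_in)

# Crude ASRC: linear interpolation onto the output clock's time grid.
t_out = np.arange(int(len(t_in) * fs_out / fs_in)) / fs_out
t_out = t_out[t_out <= t_in[-1]]             # stay inside the input span
y = np.interp(t_out, t_in, x)

# Error vs. the ideal tone at the new instants: this is distortion added
# by the conversion itself, even with a perfect output clock.
err = y - np.sin(2 * np.pi * f0 * t_out)
print(20 * np.log10(np.max(np.abs(err))))    # roughly -52 dB here
```

The point is that the jitter problem has been swapped for a resampling-accuracy problem, which standard jitter measurements don't see.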

(We might ask similar questions of other interfaces as well.) I see a lot of chatter about the shortcomings of SPDIF, but it is usually in the context of the need to spend thousands of dollars to get better sound.
It shouldn't be that way. Indeed, most hardware players have low jitter. The problem has become more acute all of a sudden as people move to computer servers, which can have very high jitter due to all the activity in the computer and little regard for such things as low jitter in the stock Toslink and S/PDIF connections. And of course, the advent of HDMI. What we had worked to solve over 30 years, and had done cheaply, has come back to haunt us again. :)

But even there, there are solutions as cheap as $300 that get us to where we need to be. To me, that is the point at which jitter distortion is less than one least significant bit of your audio samples. For 16-bit audio samples, this is -96 dB, or a jitter value of 500 picoseconds. In reality you probably want something better, as that figure is computed using sinusoidal jitter, but it is good enough for government work :).
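For anyone who wants to sanity-check such figures, the usual worst-case estimate treats the jitter error as slew rate times timing error, so the dB number depends on the reference tone frequency one assumes (which is why a single picosecond figure maps to different dB figures for different tones; the convention below is my assumption):

```python
import math

def jitter_error_db(f_signal_hz, t_jitter_s):
    # Worst-case error of a full-scale sine with timing error t_jitter:
    # error ~ slew rate * dt, and max slew of sin(2*pi*f*t) is 2*pi*f.
    return 20 * math.log10(2 * math.pi * f_signal_hz * t_jitter_s)

print(jitter_error_db(20_000, 500e-12))            # about -84 dB

# Jitter needed to keep the worst-case error below -96 dB at 20 kHz:
print(10 ** (-96 / 20) / (2 * math.pi * 20_000))   # about 126 ps
```

Lower reference frequencies relax the picosecond requirement proportionally, so always ask which tone a quoted jitter spec assumes.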

Async mode USB DACs are now available from ~$150 on up.
I had not seen one that cheap. Are there measurements available for them? Without measurements, one has to be cautious, because an async adapter does not eliminate or fix jitter; rather, it eliminates the jitter from the upstream PC. The adapter still has to have a robust clock and electrical isolation. Without those, you have traded one set of jitter for another.

PCI and PCIe soundcards can also have a clock local to the ADC/DAC chip that controls data flow. The issues with such soundcards are the possible effects of EMI inside the computer case, the soundcard's ability to get clean power to the ADC/DAC chip and the quality of implementation of the soundcard's circuitry.
And proprietary drivers. I have expensive pro cards I bought 10+ years ago that I don't dare plug into a modern machine and then go hunting around for their drivers, as they are discontinued. USB devices for the most part use in-box PC/Mac drivers, which means that as long as the hardware keeps working, you don't have to worry about the manufacturer not supporting it in the future or going out of business.

An LAN (ethernet or WiFi) connected ADC or DAC can also have a purely local clock. However, since such a device almost always has a microprocessor and memory handling the layers of protocol, the question of how data is transferred from memory to the DAC is still relevant. That is the same issue faced by PCI and PCIe soundcard ADC/DAC devices.
That's right.

You mentioned HDMI as an interface for transferring digital audio. The fact that it wasn't designed for low jitter says a lot about the importance of audio in home theater these days.
Incredible attention goes into good audio for home theater when done right. Indeed, I would say we are doing more for it than the vast majority of people do for 2-channel listening, but I realize we may be the exception. You can read more about it in my recent presentation on Video for Audiophiles. Here is a sample of the simulations I showed. I have yet to see this type of analysis applied to 2-channel music listening:

Slide19.JPG


Slide20.JPG


Slide21.JPG


There is text that goes with those slides in the above link. I am also working on a detailed write up on just this one topic.

If your comments about digital audio were specific to a video + audio application , you have not said so in this thread.
Digital audio has to work for both. The DAC doesn't know whether the source of the audio samples is a movie or music. And many people use HDMI as the only connection to their source. There, blank video is sent even when you are listening to audio only!


Your mention of standards made me think of calibration and of a huge difference between video and audio. Sending photons to your eyes doesn't have the issues of in-room differences that speakers and the room add to audio playback.
They don't to the same degree as audio, but the effect is certainly there and very important. That is why a proper theater has neutral gray colors, to keep reflections back onto the screen from causing a color shift. Good projection material has uniform color balance, but other materials do not and can cause color shifts. Next to the monitor where I type this, I have a special light designed for color evaluation of pictures. It shows a radically different view than if I tried to use the room lights. The problem is a complex one, as we can't control the viewer's lighting environment for pictures.

Good news again is that we have standards so we can produce what is "right."

The important calibration for audio playback is tuning the speaker/room combination. Digital audio fits much better with that calibration than does a purely analog playback path.

Bill
The first sentence is certainly true. I don't follow the second. The room and speaker interactions are in the analog domain.
 
I don't think the discussion of an architecture needs to have a proof point that way Tim.

I don't think any proof is called for either, Amir, I just think it's always good to keep things in perspective.

Tim
 
Amir,

The optimized solution you show on the slide was obtained with two subs on the floor at the front and one sub on the ceiling at the back?
 
I intended them that way. This is a lighthearted thread and I am not trying to turn it into a master's thesis :).

In the interest of brevity, I did not try to be all-inclusive. That's true. What I was describing was the general architecture of our systems as designed some 30 years ago and as followed by 99% of the world today.

You billed your statement as the answer to the question "Is digital audio intrinsically broken?" I answered it as such.

Some comments:

push mode transfer - It is no longer necessary, so if you are terrified of it, just use a different interface. I pointed out that push mode works fine for an ADC-to-computer connection. Context is important. Some DACs are far less sensitive to jitter on an SPDIF input than others.

$150 async USB DAC - HRT Music Streamer II - Stereophile reviewed the previous version, without async mode, and has not followed up.

Clock quality in an inexpensive async mode DAC - We need measurements on the performance of DACs at various price levels. The lack of good measurements for inexpensive DACs seems a weak argument for spending thousands of dollars on a DAC.

async to SPDIF adapters - On one hand, you discuss the problems of a push transfer method. Then you suggest that a USB-SPDIF adapter (which uses a push method to transfer data to the DAC) is superior to an async mode USB DAC, which avoids the push method entirely. Given the same clock implementation, the clock signal in an async USB DAC has only an inch to travel to the DAC chip, compared to going through an SPDIF transmitter, feet of cable and an SPDIF receiver.

motherboard toslink and SPDIF outputs - I doubt that many readers on this forum would consider using motherboard coax or toslink outputs. Motherboard Toslink output from my Intel HB67 based motherboard produces very similar sound to a coax connection from an ESI Juli@ card to the same DAC. Motherboard audio can be quite reasonable now.

speaker/room correction - Applying DSP style algorithms to digital data seems the way to go. If the audio stream was analog when it gets to the point where you are applying correction, you'll convert it to digital audio to do the correction.

audibility of jitter - the recordings demonstrating jitter artifacts that I have seen involved a much higher level of jitter than we are talking about in reasonable DACs.

I've done comparisons between DACs, Toslink and coax interfaces from PCI soundcards and motherboard audio, on the computers I own. The differences are not significant for the two SPDIF DACs I own right now.

proprietary drivers - I agree that hardware makers may stop updating drivers. When you were at Microsoft, did you supervise the replacement of the WDM-style audio driver interface with the Vista audio stack based interface? If so, you are partly to blame. The shift to laptop computers and slotless compact computers shrank the market for PCI / PCIe soundcards. Investing in driver development for Vista was not attractive for old products.

Some pro-audio audio gear is well supported. RME appears to be a very good bet. They provide solid Win 7 drivers for 10+ year old products and current products are getting good reviews. Some other vendors that seem low risk now: Lynx, Echo, TC Electronic. A number of pro-audio vendors are releasing USB based multi-channel interfaces now.

If Microsoft does not drastically change the driver model again in a few years, a product with solid Win 7 drivers now should be good for years.

home theater done right - what portion of home theater is that?

One has to be careful of devices that use asynchronous sample rate conversion (not to be confused with the above USB devices). These DACs are totally immune to clock jitter in the classic sense, but may have resampling artifacts that are not captured by the above measurements.

Yes, when you sweep the artifacts under the rug, where do they appear during playback?

But even there, there are solutions as cheap as $300 that get us to where we need to be. To me, that is the point at which jitter distortion is less than one least significant bit of your audio samples. For 16-bit audio samples, this is -96 dB, or a jitter value of 500 picoseconds.

That isn't too demanding.

I have yet to see this type of analysis applied to 2-channel music listening:

The material in the first slide and the top part of the second slide has been around a long time. The simulation goes beyond the usual discussions. It would be very useful if you had not yet built the listening room, or if you were selecting speakers for an existing room and could get some data on alternative choices.

Digital audio has to work for both. The DAC doesn't know that the source of audio samples is a movie or music. And many people use HDMI as the only connection to their source. There, there is blank video even when you are listening to audio only!

Your focus seems to be on your market as a retailer / installer of both 2 channel and home theater gear. In this post you talked about the context of this forum. Is a combined audio/home theater system using HDMI as the only connection relevant to readers of this forum?

I see home theater as a mature market. I doubt that audio playback through a home theater system is going to be central to the iPod generation and even less likely for the iPhone / iPad generation.

Bill
 
Off topic, but I wonder if the home theater market is growing? Do you have any industry figures, Amir? It doesn't seem nearly as common as stereo systems were in the 70s and 80s, and it seems that the majority of sales go to low-end HTIBs. In the stereo heyday, it seemed like every young guy or young couple had a big receiver and a pair of "bookshelf" speakers as big as or bigger than today's mid-sized floor standers. I don't think HT ever caught on that strongly, and I wonder if it is going to fade further. And in Europe, where living spaces are generally much smaller than in the US, I expect it's even less of a force. It's very cool. I don't think it will go away. But it's a big niche market, at best, and seems likely to get smaller. Back when I was selling, I had at least as many customers interested in whole-house architectural audio as in HT. They wanted the TV system hooked in, but the big surround sound thing was unimportant to many.

Tim
 
push mode transfer - It is no longer necessary, so if you are terrified of it, just use a different interface.
I don't know how we keep talking past each other. The purpose of my commentary here was that digital audio, as envisioned to be used by us, is broken as an architecture. We have workarounds, but that is what they are: workarounds. If the systems had been designed right from day one, we would not have to resort to async adapters/DACs. If you disagree with this statement, please say so and we will discuss it more.

Some DACs are far less sensitive to jitter on an SPDIF input than others.
Which would be a non-issue if the digital link was a data link, not a real-time synchronous push system.

The lack of good measurements for inexpensive DACs seems a weak argument for spending thousands of dollars on a DAC.
There is nothing in my post advocating expensive DACs. Indeed, I was clear that the cost of doing S/PDIF right is very low, given that mass market products do it right. I went on to say that with computers, we are back to high jitter again.

async to SPDIF adapters - On one hand, you discuss the problems of a push transfer method. Then you suggest that a USB-SPDIF adapter (which uses a push method to transfer data to the DAC) is superior to an async mode USB DAC...
I did not say this at all. You asked me why I didn't mention async DACs, and I said: "In the interest of brevity, I did not try to be all-inclusive. That's true." Where are you reading in there that I am saying one is superior to the other?

Given the same clock implementation, the clock signal in an async USB DAC has only an inch to travel to the DAC chip, compared to going through an SPDIF transmitter, feet of cable and an SPDIF receiver.
You can accomplish the same with some bridges, like the Audiophilleo, where the box can hook up directly to the back of the DAC without a cable. One school of thought says that keeping this logic in a box external to the PC has an advantage over pulling the same logic inside the DAC, as in the async DACs you are talking about. How true that is, I have no data to share. My personal preference is to have it inside the DAC.

motherboard toslink and SPDIF outputs - I doubt that many readers on this forum would consider using motherboard coax or toslink outputs.
Your doubt is misplaced :). I speak to countless forum posters here and there who use that interface. "Digital is digital" to them.

Motherboard Toslink output from my Intel HB67 based motherboard produces very similar sound to a coax connection from an ESI Juli@ card to the same DAC. Motherboard audio can be quite reasonable now.
Is there a measurement I can see on that comparison?

speaker/room correction - Applying DSP style algorithms to digital data seems the way to go. If the audio stream was analog when it gets to the point where you are applying correction, you'll convert it to digital audio to do the correction.
I did not talk about DSP in the slides that I posted here, although the full presentation covers it as the third and last optimization step. The first is computer modeling of multiple subs to smooth the frequency response. Without that, you can easily get fluctuations of 20 dB at low frequencies. No EQ is needed to achieve the smoothing that occurs with CFD optimization of multiple subs.

But yes, if you apply EQ to an analog system, you need to digitize it. The trade-off is that the optimization will create better in-room response, so linear distortions are sharply reduced, but some amount of non-linear distortion gets added due to going in and out of the analog domain. For movies, that is absolutely justified. For music, you have to make a choice.

audibility of jitter - the recordings demonstrating jitter artifacts that I have seen involved a much higher level of jitter than we are talking about in reasonable DACs.
I don't follow this. If you are saying there is more jitter in recording than in playback so we should ignore the latter, I disagree. Distortion is cumulative. Here are the Stereophile measurements of an RME professional sound card feeding a low-cost DAC:

1210Halfig2.jpg

That level of jitter is alarming to me, especially coming from a pro sound card. Here is the same with the Halide async USB to S/PDIF bridge:

1210Halfig3.jpg


If digital architecture for consumer reproduction of music was right, the two pictures would look the same.

I've done comparisons between DACs, Toslink and coax interfaces from PCI soundcards and motherboard audio, on the computers I own. The differences are not significant for the two SPDIF DACs I own right now.
They were for me. :) Hence the reason we have this forum: to exchange experiences :) :). I have tested async adapters with DACs, and in all but one case it made an improvement. In that one unit, it made no difference at all. So clearly there is variability there.

home theater done right - what portion of home theater is that?
Portion of the market? Next to zero. But that doesn't mean the domain is broken and no one is innovating, as I thought you were saying.

Yes, when you sweep the artifacts under the rug, where do they appear during playback?
It depends on how the sample rate conversion is occurring. You need to look at the filtering used and how it adapts to changing input rates.

That isn't too demanding.
Unfortunately it is. With very rare exceptions, no HDMI interconnect achieves that.

The material in first slide and the top part of the second slide have been around a long time. The simulation goes beyond usual discussions. It would be very useful if you had not built the listening room or if you were selecting speakers for an existing room and you could get some data on alternative choices.
I am not following you. The simulation shows multiple subs exciting the room in that drawing. There were no speakers selected in that modeling. The room was built after the simulation was done, which determined the precise location of the subs and the number to deploy. Yes, the innovation is a quarter million dollars of custom and off-the-shelf software and days of run time to find the optimal location and number of subs (40,000 iterations in the case of that simulation). I didn't say every line item in that slide was new. It is the sum total of the approach that is new.

Your focus seems to be on your market as a retailer / installer of both 2 channel and home theater gear. In this post you talked about the context of this forum. Is a combined audio/home theater system using HDMI as the only connection relevant to readers of this forum?
Of course. This forum has broad coverage of all things audio/video and even beyond. Members are of course welcome to ignore what doesn't interest them.

I see home theater as a mature market. I doubt that audio playback through a home theater system is going to be central to the iPod generation and even less likely for the iPhone / iPad generation.

Bill
Well, that is quite mistaken. Just today we sold a $70,000 theater. There are a lot of innovations in projection technology, and I noted the advances in audio processing. Are you going somewhere with this bit about home theater being mature, uninteresting and such? I am not following your motivation there.
 
The purpose of my commentary here was that digital audio, as envisioned to be used by us, is broken as an architecture. We have workarounds, but that is what they are: workarounds. If the systems had been designed right from day one, we would not have to resort to async adapters/DACs. If you disagree with this statement, please say so and we will discuss it more.
Well, I disagree, for a start. When you attach the term "broken" to a processing architecture, the import is that there is something fundamentally wrong or incapable in the design which stops it functioning. That is obviously not the case with digital audio; it does work. Now, in less than optimum implementations there may be more distortion than there should be, but that's an example of faulty engineering, not broken design.

Jitter arriving at a DAC in push mode can always be audibly eliminated if the engineering of the capture circuit is done well enough. The fact that it often isn't merely says that the engineers are not working cleverly enough, or that the wrong people are trying to do the job. The implication in what you're saying is that in any engineering situation where people find it hard to get something working straight off the mark, where they can't just grab an off-the-shelf solution, the fundamental concepts are somehow faulty.
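To illustrate the idea (a toy first-order model I knocked up, with invented jitter and loop-gain numbers; a real clock-recovery circuit is analog and far more subtle): a capture loop with a narrow bandwidth tracks the source's long-term rate while filtering its short-term jitter heavily.

```python
import random
random.seed(1)

period, n = 1.0 / 48_000, 5000
g = 0.001                        # loop gain: small = narrow loop bandwidth

# Incoming S/PDIF-style edges: nominal period plus random timing jitter.
edges = [i * period + random.gauss(0, 1e-9) for i in range(n)]

# First-order PLL model: a local clock free-runs at the nominal rate and
# is nudged a small fraction of the way toward each incoming edge.
local, out = -period, []         # start one period early so edge 0 aligns
for e in edges:
    local += period              # free-run by one nominal period
    local += g * (e - local)     # small correction toward the input
    out.append(local)

def rms_jitter(times):
    res = [t - i * period for i, t in enumerate(times)]
    mean = sum(res) / len(res)
    return (sum((r - mean) ** 2 for r in res) / len(res)) ** 0.5

print(rms_jitter(edges), rms_jitter(out))   # output jitter is far lower
```

The residual jitter shrinks roughly with the square root of the loop gain, which is exactly the "engineering well enough" knob.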

What you call workarounds are fundamental to any engineering exercise; they are "proper" technical solutions for achieving a correct and cost-effective outcome ...

Frank
 
Well, I disagree, for a start. When you attach the term "broken" to a processing architecture, the import is that there is something fundamentally wrong or incapable in the design which stops it functioning.
Frank, you created the thread title, I didn't :). I just ran with it, realizing that no one would think that when I say it is broken, I mean it doesn't produce sound.

In computer science at least, we use the term "broken" to also apply to things that work but are not designed well. Not sure if it is used that way outside the US or outside of that field. But it is pretty common terminology, meant to get people to pay attention to something that could be a lot better. Someone would say "IE is broken." Clearly hundreds of millions of people browse the web with IE. Yet folks use that term.

But sure, let's consider it a design issue.

Now, in less than optimum implementations there may be more distortion there than there should be but that's an example of faulty engineering, not broken design.
No it is not an implementation thing. It is a design thing. The design puts the source in charge of timing. That design makes implementations much harder.

Here is another example. Jitter is a fact of life in any communication channel. Everything inside your computer has to deal with jitter. Yet you don't hear a thing about it there because its manifestation there is harmless for the most part. Once the signal is captured, we don't care how much jitter existed. In audio reproduction, even though we capture the data, we are stuck with the ramifications of jitter. It could have been completely like the data example in your computer, but it is not. I call this a design problem.

Jitter arriving at a DAC in push mode can always be audibly eliminated if the engineering of the capture circuit is done well enough.
Please demonstrate this. I'd like to see a detailed design explanation here and/or measurements.

What you call work arounds are fundamental to any engineering exercise, they are "proper" technical solutions to achieving a correct, and cost effective outcome ...

Frank
If we make the engineering problem hard, vs another solution that wouldn't have, then the design is not good. Just because engineers can work through tough problems doesn't excuse the less than optimal architecture.
 
Please demonstrate this. I'd like to see a detailed design explanation here and/or measurements.
Okay. Start with something simple and straightforward like the TI DIR9001, a one chip solution, which claims a jitter performance of 50 psecs RMS. Is this not good enough, or are they not telling us something which is relevant, or are they outright lying? Why won't this do the job if all the guidelines are followed, and the engineers take due care?

Frank
 
Okay. Start with something simple and straightforward like the TI DIR9001, a one chip solution, which claims a jitter performance of 50 psecs RMS. Is this not good enough, or are they not telling us something which is relevant, or are they outright lying? Why won't this do the job if all the guidelines are followed, and the engineers take due care?

Frank
First, let me apologize for not seeing an important word in your post: "audible." For the purposes of this discussion, let's not dig into that but rather, let's stay with the guidelines I mentioned (i.e. preserving the full 16 bits).
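As an aside, the "preserving the full 16 bits" guideline can be made concrete with a commonly cited back-of-envelope bound (my sketch, not a figure from this thread): the maximum slope of a full-scale sine A*sin(2*pi*f*t) is 2*pi*f*A, so a peak timing error dt causes an amplitude error of up to 2*pi*f*A*dt. Keeping that error under half an LSB of an N-bit system requires dt < 1/(2*pi*f*2^N).

```python
import math

def max_peak_jitter(bits: int, f_hz: float) -> float:
    """Peak timing error (seconds) that keeps the slew-induced error
    of a full-scale sine at f_hz below half an LSB of a `bits`-bit
    system: 2*pi*f*A*dt < A/2**bits  ->  dt < 1/(2*pi*f*2**bits)."""
    return 1.0 / (2.0 * math.pi * f_hz * 2.0 ** bits)

# 16-bit audio, 20 kHz full-scale tone: roughly 120 ps peak
print(max_peak_jitter(16, 20_000.0))  # ~1.21e-10 seconds
```

By that yardstick, peak jitter in the hundreds of picoseconds is already in the range where a full-scale 20 kHz tone can no longer be rendered to full 16-bit accuracy.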

As you note, the spec for that TI receiver is in RMS. You will note that I used peak-to-peak for my metric. In looking at audibility issues, we need that number converted to peak-to-peak. Conversion between RMS and peak-to-peak requires knowing the waveform, which unfortunately we do not know. Jitter can be sinusoidal, an impulse, periodic, data-dependent, random, you name it.

Jitter is typically spec'ed in RMS when dealing with communication issues. There, the assumption is usually that the jitter is random with a Gaussian distribution. The mathematics here can still be rather complex, but using online calculators and assuming a BER of 1e-12, we arrive at a peak-to-peak value of 636 picoseconds! Quite a big jump from that 50 ps RMS value.

But we are not done yet. The 50 psec value is "typical"; the max value is spec'ed at 100 psec RMS. That doubles our peak-to-peak jitter to 1,273 psec.

BTW, reading figure 3 in the spec shows that RMS value can actually exceed 100 psec RMS.

I am too lazy to read through the full spec :). But for now, the jitter reduction here may not be very good at low frequencies. Assumptions may have been made about the nature of the jitter frequencies the device filters.
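The RMS-to-peak-to-peak conversion used above can be sketched under the standard Gaussian random-jitter assumption: peak-to-peak jitter bounded at a given bit error rate is 2*Q*sigma, where Q is the standard normal quantile for that BER. Note that the multiplier depends on the BER chosen (about 12.7 at 1e-10, about 14.1 at 1e-12), which is why different online calculators give somewhat different figures. This is my own sketch, not the specific calculator used above.

```python
from statistics import NormalDist

def rms_to_pp_jitter(rms: float, ber: float) -> float:
    """Convert Gaussian random jitter from RMS (sigma) to a
    peak-to-peak value bounded at the given bit error rate."""
    q = NormalDist().inv_cdf(1.0 - ber)  # one-sided tail quantile
    return 2.0 * q * rms                 # both tails contribute

print(rms_to_pp_jitter(50.0, 1e-12))  # ~703 ps peak-to-peak
print(rms_to_pp_jitter(50.0, 1e-10))  # ~636 ps peak-to-peak
```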
 
The purpose of my commentary here was that digital audio, as envisioned to be used by us, is broken as an architecture. We have work-arounds, but that is what they are: work-arounds. If the systems had been designed right from day one, we would not have to resort to asynch adapters/DACs.

Your post enlightening us about why digital audio is "intrinsically broken" would have been much clearer and more accurate with that qualifier.

> Which would be a non-issue if the digital link was a data link,
> not a real-time synchronous push system.

You wrote off digital audio based on an ancient interface (SPDIF) and one that isn't very good for audio (HDMI) while ignoring methods that are much better.


RME soundcard - I found the review that the graph you included came from, in which the RME card was used. Here is what John Atkinson said about the DAC used in the jitter comparisons:

"I chose the Assemblage because it appears to have the worst rejection of incoming datastream jitter of the DACs I had to hand."

Do you believe that any digital output source can produce high quality results with any SPDIF-based DAC, however cheap and poorly implemented?

Here is a link to Stereophile's review of the RME soundcard from 2000:

http://forum.stereophile.com/computeraudio/299/index.html

Overall jitter for toslink output to a Musical Fidelity X-24K D/A converter was described as "a low 248 picoseconds peak-peak". With a coax connection, "The jitter level has increased to a still-good 686ps." (I think that the Miller instrument Stereophile used in 2000 has been replaced by a higher-precision model in the intervening years.)

For analog output from the RME card, "The absolute level of the jitter dropped from 3.97 nanoseconds [for the first sample] to a superbly low 136.5 picoseconds [for the second sample], with almost no data-related components (red numeric markers) visible in this graph."

You said "That level of jitter is alarming to me especially coming from a pro sound card." I think your statement is not warranted given the context.

> Portion of market? Then next to zero.
> But that doesn't mean the domain is broken and no one is innovating
> as I thought you were saying.

My point is that the bulk of home theater is not the high end you target. My guess is that high end home theater companies have to accept what the mass market companies define as interfaces (as high end audio companies have.)

> I am not following you. The simulation shows multiple subs exciting a room with that drawing.
> There were no speakers selected in that modeling.

I made straightforward comments about possible applications of such a simulation. I believe that the sentences I wrote were clear.

> This forum has broad coverage of all things audio/video and even beyond.
> Members are of course welcome to ignore what doesn't interest them.

You were not clear in your post proclaiming that digital audio is broken that you were referring only to your very narrow application.




> There is nothing in my post advocating expensive DACs. Indeed, I was clear that the cost of
> doing S/PDIF right is very low given how mass market products do it right. I went on to say
> that with computers, we are back to high jitter again.

You seem to have a narrow context in mind in saying that we are back to high jitter. Are you referring to motherboard SPDIF output and motherboard HDMI output, but excluding PCI-card-based SPDIF output? Are you excluding async USB DACs?


> But yes, if you apply EQ to an analog system, you need to digitize it.

I made the point that digital audio was necessary. You first failed to see any connection. Now you do. Thanks.


> If you are saying there is more jitter in recording than in playback so we should ignore
> what is in the latter, I disagree. Distortion is accumulative.

You missed the point. Recordings of material with jitter added usually have a higher level of jitter added than the levels we might see in reasonably designed audio gear.


This has nothing to do with whether jitter is additive.


> Is there a measurement [on Intel H67 motherboard audio digital output]
> I can see on that comparison?

I have no measurements. You mentioned hearing differences between DACs and I'm reporting a lack of differences in my listening.

> Well, that is quite mistaken. Just today we sold a $70,000 theater.

I don't see how selling a $70,000 theater says anything about home theater being a mature market or not.

> Are you going some place with this bit about home theater being mature, uninteresting and such?
> I am not following your motivation there.

In your preceding post, you said "Digital audio has to work for both [audio only and audio+video]. The DAC doesn't know that the source of audio samples is a movie or music. And many people use HDMI as the only connection to their source."

I'm making an argument that a home theater system will not be the most common environment for audio playback at home in the future. Note that Tim made a related comment.

Bill
 
Amir, I've done a bit more research and would now tend to see the DIR9001 as a bit of a dud. And the reason follows from the second point I made in my previous post ("are they not telling us something which is relevant?"), which is how sensitive the chip is to input jitter, which, after all, is what the device should be all about. People mention getting hundreds of psecs of jitter from this chip in a real circuit.

So I'll bring in my 2nd contender: the Wolfson WM8804. This key aspect has been addressed in this chip, to the degree that it can be used purely to de-jitter an S/PDIF stream: it becomes a simple pass-through device that "fixes" the S/PDIF signal. So for extreme cleaning you could daisy-chain these chips, to get intrinsic jitter down to 50 ps as well. A document on their website, http://www.wolfsonmicro.com/documen...e_of_spdif_digital_interface_transceivers.pdf, details their advantages, with some interesting measurement screen captures ...

Frank
 
.....

You wrote off digital audio based on an ancient interface (SPDIF) and one that isn't very good for audio (HDMI) while ignoring methods that are much better.

......
I'm making an argument that a home theater system will not be the most common environment for audio playback at home in the future. Note that Tim made a related comment.

Bill

I think it is interesting the way Amir has stated the position that it is broken relative to what the ideal transmission architecture should be, and he has a very good point.
Just wanted to come back to the architecture/interface aspect. Now, I appreciate that in a discussion many of us can be looking at this from different perspectives, because we may have a PC-download-DAC system or the more traditional CD-DAC, and this can give subtly different takes.

Coming back to the CD-DAC, I have to say, Bill, that from the point the CD is read (from the mech-servo point) the data is transmitted internally as either SPDIF or I2S, so even with an alternative interface it started out as one of those.
It could be argued that the most ideal would be I2S from the mech all the way to the DAC, but this solution is rare and still does not meet the requirements outlined in Amir's ideal architecture regarding bit degeneration, IMO.
The point to bear in mind is that one needs to consider both the protocol used (such as SPDIF or AES3, etc.) and the physical medium (balanced/unbalanced coax, optical fibre, etc.); either or both can have an effect or a specific performance window/trait.

Bill, so I can get a feel for your perspective: what do you feel does much better? It may match Amir's if you are thinking of PC-download-DAC using async USB or ethernet network streaming.
Still, as mentioned, one needs to split this into protocol and physical medium when looking at it from the statement made by Amir.
As an example, Toslink is great for electrical isolation but has other issues, and modern gear may well still be using SPDIF transmitter/receiver chips for the protocol anyway, or something very similar.

Regarding jitter and its performance, again we need to understand better how this relates to the technologies used and how it subtly changes.
I have not linked this before but I strongly recommend anyone really interested in the subject of Jitter to read the following primer from Tektronix;
This document cannot be skimmed and needs careful reading; otherwise it is possible to take aspects out of context.
So please take time to read it:
Understanding and Characterizing Timing Jitter Primer
http://www.tek.com/Measurement/scopes/jitter/55W_16146_1.pdf

The examples in there match real world measurements seen with Paul Miller's measurement tools at Hifinews and Stereophile, and also touches on the great info provided by both Don and Amir in other tech threads.

Cheers
Orb
 
First, let me apologize for not seeing an important word in your post: "audible." For the purposes of this discussion, let's not dig into that but rather, let's stay with the guidelines I mentioned (i.e. preserving the full 16 bits).

I enjoy an academic exercise as much as the next guy :), but once you're using the system, listening to music, audibility is the only thing that matters. And for that, there is a whole generation of chips out there that, when implemented properly, seem to inexpensively eliminate jitter for all practical purposes. Can I prove that? Millions of listeners provide evidence enough, but the only proof would begin past the receiver chip, past the DAC, through the same simple output stage (two that measure the same would be good enough for me) and the same playback system, and into the ears of listeners: listening testing. Blind, of course, as we're all aware enough to know that a billet-aluminum front and an impressive logo will always sound better than a small plastic box otherwise.

Has anyone conducted such tests? Have other labs and scientists come behind them to verify? Of course not. Before that kind of effort and money is spent, there needs to be a substantive issue. Here, there is not.

As far as "broken" is concerned, in the sense that you're using the word, yes, digital audio is broken. Of course in that sense so is the speaker.

Tim
 
