I had a look at the lowest-amplitude sample in Audacity, the jitter-80-20.wav, and you can easily see what it is: a 24 kHz or so tone that cycles in amplitude between silence and -20 dB. In other words, a beat frequency is imposed on a still-high-amplitude ultrasonic tone, which stresses the PC DAC and makes it spit out distortion -- in my world this has nothing to do with jitter ...Of course it is jitter. I created the tones using frequency modulation, which is exactly what jitter does: it modulates the clock. That is also how FM synthesis creates musical tones, but the fact that the same technique can make music doesn't mean it is not jitter. Since you are convinced that you can hear such modulations, then maybe you will doubt less that you can also hear jitter.
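For anyone who wants to see what this kind of test tone looks like, here is a minimal sketch of frequency-modulating an ultrasonic carrier, the same mechanism by which jitter modulates a clock. The sample rate, carrier frequency, modulation rate, deviation and output filename are all illustrative assumptions, not the settings used for the actual jitter-80-20.wav file.

```python
# Minimal sketch: a ~24 kHz carrier frequency-modulated by a slow tone,
# analogous to clock jitter modulating sample timing. All values are
# assumptions for illustration only.
import numpy as np
from scipy.io import wavfile

fs = 96000            # sample rate in Hz (assumed)
dur = 10.0            # duration in seconds
t = np.arange(int(fs * dur)) / fs

fc = 24000.0          # ultrasonic carrier frequency, Hz
fm = 2.0              # modulation rate, Hz (assumed)
dev = 50.0            # peak frequency deviation, Hz (assumed "jitter" depth)

# FM: instantaneous phase = 2*pi*fc*t + (dev/fm)*sin(2*pi*fm*t)
phase = 2 * np.pi * fc * t + (dev / fm) * np.sin(2 * np.pi * fm * t)
x = 0.5 * np.sin(phase)

# write a 16-bit WAV for inspection in Audacity
wavfile.write("jitter_sim.wav", fs, (x * 32767).astype(np.int16))
```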
That doesn't make sense to me ...I addressed that point. Indeed, I took what you said as the starting assumption and showed that the low level of the high-frequency spectrum actually makes it more susceptible to an increase in power when jitter sidebands are injected into that same region.
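As a back-of-envelope illustration of why those sidebands matter in a quiet part of the spectrum: for small sinusoidal jitter (narrowband-FM approximation), each sideband sits roughly 20·log10(π·f·Δt) below the signal, so its level rises with both signal frequency and jitter amplitude. The numbers below are assumed values, purely to show the arithmetic.

```python
# Back-of-envelope sketch: relative level of jitter sidebands for sinusoidal
# jitter, using the narrowband-FM approximation (sideband/carrier ~= beta/2,
# with beta = 2*pi*f_signal*jitter_peak). Example values are assumptions.
import math

def sideband_dbc(f_signal_hz: float, jitter_peak_s: float) -> float:
    beta = 2 * math.pi * f_signal_hz * jitter_peak_s
    return 20 * math.log10(beta / 2)

# e.g. a 20 kHz spectral component with 1 ns of peak sinusoidal jitter
print(round(sideband_dbc(20_000, 1e-9), 1), "dBc")  # ~ -84 dBc
```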
His original distortion test was insufficient; once the right measuring tool was applied, he could see it was defective. Sort of like how a lot of audio engineering is done, perhaps: a quick round of the obvious tests, none of the tricky ones (takes too much effort); okay, product must be OK, put it out there ...Nope. The man has a PhD in electrical engineering. He designed a chip that he thought worked perfectly. Its distortion was 0.03%, which I don't call "defective." It was the "golden ears" who first told him it didn't sound right. He then analyzed the design and discovered that digital wasn't digital after all. Had it not been for the listening tests, that product would have made it to market.
You're expecting miracles if you believe that one can continuously get better than -96 dB noise/distortion; this is pure specmanship now. It is like a normal car advertised as able to do 120 mph, and expecting it to do that continuously and without fuss; yes, in exactly the right circumstances it will get there for a short time, to prove a point, but for what purpose? ...My goal with digital is for it to do what it is advertised to do. If it says 16 bits, then I darn well expect the device to do 16 bits. I don't care if I can't hear past 15 bits. As you said, good engineering exists to do things right, and I expect them to get there. The point of this thread is that their job is hard, and as consumers you need to pay attention to whether they pull it off. If you want to argue for a system that nets out 80 dB, be my guest. It isn't for me. I want us to at least achieve what we advertise the CD can do.
Here is the important bit: if the distortion is below the least significant bit of 16 bits, then I don't have to worry about what is audible and what is not, because by definition my system noise is higher than my distortion. Once you get above -96 dB noise/distortion, you land in the middle of the mud you are trying to dance in, which is arguing over what is good enough. I want us to achieve what both sides of the debate can agree is inaudible, not what someone can try to justify as being inaudible. If we couldn't achieve that metric, then sure, we could talk about it. But we can, despite the broken architecture, if we stay away from HDMI for now....
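For reference, here is the arithmetic behind the -96 dB yardstick being argued over; a minimal sketch using the usual textbook formulas, not anything measured on the gear discussed in this thread.

```python
# Quick sketch of where the -96 dB figure for 16-bit audio comes from.
# 20*log10(1 / 2**16) is the size of one quantization step relative to the
# full peak-to-peak range; 6.02*N + 1.76 dB is the textbook SNR of an ideal
# full-scale sine quantized to N bits.
import math

bits = 16
step_db = 20 * math.log10(1 / 2**bits)   # ~ -96.3 dB
ideal_snr_db = 6.02 * bits + 1.76        # ~ 98.1 dB

print(f"1 LSB relative to full scale: {step_db:.1f} dB")
print(f"Ideal {bits}-bit sine SNR:     {ideal_snr_db:.1f} dB")
```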
The "middle of the mud" is a lot, lot higher than -96dB, otherwise R2R would sound pretty dreadful. 96dB rolls off the tongue easily, but it means something is accurate to close enough to 1 part in 100,000, which in electrical engineering terms is really pushing the envelope, and people's ears are a lot more tolerant than that. Sorry, this is really falling in the camp of those notorious Japanese amps with 0.001% distortion which sounded awful: focusing on getting one thing right at the expense of getting the overall balance of engineering right will not provide a winning formula.
Frank