WAV vs FLAC revisited

If you don’t hear a difference, you don’t.
That doesn’t prove there are no differences.
Absence of evidence is not evidence of absence.
It only proves that, in the setup used, you were not able to hear a difference.

To eliminate expectation bias it is indeed preferable to test unsighted.
In practice, the likelihood of hearing a difference (e.g. WAV sounds much better than FLAC) in a sighted test but not in an unsighted test is higher than the reverse: not hearing a difference sighted but hearing one unsighted.
 
I've never heard any differences and have done many comparisons. Perhaps it is limited to computers with poorly designed or implemented audio hardware? Perhaps audio hardware integrated into some motherboards?

Think about it. The audio clock is derived from its own crystal and is completely uninvolved with the PC clock. So the pacing of the output bits through the audio hardware has ONLY to do with the audio hardware clock. Bits going to the audio hardware are buffered to varying degrees by the audio hardware. As long as a bit makes it in time to be 'called' for output, it will be right in sync, with no more jitter than the audio clock itself. If, for some reason, incoming bits don't arrive as fast as the clock requires them and the buffer runs dry, you will experience obvious stutter.
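The buffering argument above can be sketched numerically. A minimal simulation, with hypothetical buffer and burst sizes, showing that bursty delivery from the PC doesn't affect output timing as long as the FIFO never empties:

```python
# Hypothetical numbers: a DAC FIFO drained at a constant rate set by the
# audio clock, filled in irregular bursts by the PC.  Output timing is
# untouched as long as the FIFO never empties.

SAMPLE_RATE = 44_100          # DAC consumption rate, samples/s
BUFFER_SIZE = 4_096           # FIFO depth in samples

def min_fill(burst_sizes, burst_interval_s, start_fill=BUFFER_SIZE):
    """Lowest FIFO fill level given bursty delivery.

    burst_sizes: samples delivered at the start of each interval.
    burst_interval_s: seconds between deliveries.
    A return value <= 0 means an underrun, i.e. audible stutter.
    """
    fill = start_fill
    lowest = fill
    drained = int(SAMPLE_RATE * burst_interval_s)   # consumed per interval
    for burst in burst_sizes:
        fill = min(fill + burst, BUFFER_SIZE)       # FIFO can't overfill
        fill -= drained
        lowest = min(lowest, fill)
    return lowest

# Delivery keeps pace on average despite 10 ms scheduling hiccups:
print(min_fill([441, 0, 882, 441], 0.01))   # → 3214, never near zero
```

As long as the minimum fill stays above zero, the bits leave the DAC paced purely by the audio clock, regardless of how unevenly they arrived.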

So unless the audio hardware is complete crap there couldn't be any difference.

With external audio hardware the clock can be derived from its own built-in crystal, which of course would be completely independent of the PC, or it could be derived from an external clock source such as a master clock in a studio. Assuming the word clock cabling and/or S/PDIF cabling is properly implemented and terminated, there is no issue.

I routinely play 176k and 192k sample rate FLAC files to my audio hardware, and they are identical in sound to their WAV equivalents. If these rates play with no problem, it's hard to imagine that the consumer comparisons with 44.1k source material are somehow relevant unless hardware design issues are at hand.
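Underlying the comparison is the fact that FLAC is lossless: decoding returns PCM that is bit-identical to the WAV source. As an analogy using a stdlib lossless codec (zlib standing in for FLAC), the round trip can be verified with a hash:

```python
# FLAC decoding is lossless: the PCM handed to the audio hardware is
# bit-identical to the WAV data.  zlib (stdlib) stands in for FLAC here;
# the property being demonstrated is the same lossless round trip.
import hashlib
import zlib

pcm = bytes(range(256)) * 1024               # stand-in for raw PCM samples

compressed = zlib.compress(pcm, level=9)     # the "FLAC file"
restored = zlib.decompress(compressed)       # what the player decodes

print(len(compressed) < len(pcm))            # True: smaller on disk
print(hashlib.sha256(restored).digest()
      == hashlib.sha256(pcm).digest())       # True: bit-identical audio
```

Any audible difference therefore has to come from how the bits are delivered and converted, not from the bits themselves.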

I have noticed that many of those reporting differences between WAV and FLAC are using (lower-powered?) Mac hardware, though that is not always the case. No, I'm not a Mac basher, as I have one myself.

--Bill
 
I've never heard any differences and have done many comparisons. (...)

Bblue,
Nice to have new people on this thread. Could you list the system you are using? Every opinion is valuable in these matters, but IMHO, as anything in audio is system dependent, some knowledge of the poster's setup is fundamental.
 
It only proves that, in the setup used, you were not able to hear a difference.

It really doesn't even prove that. Evidence is easy. Proof is demanding, time-consuming and repetitive. But I think if you're hearing a difference between WAV and lossless, you're hearing a flaw in your system, not a difference in the formats. I'm not sure that one even needs any further evidence.

Tim
 
(...) But I think if you're hearing a difference between WAV and lossless, you're hearing a flaw in your system, not a difference in the formats. I'm not sure that one even needs any further evidence.
Tim

Tim,
Surely. But currently my ears listen to systems, not to formats. As far as I know, humans do not have a digital input, and this debate is just about systems ... :D
 
The audio clock is derived by its own crystal and is completely uninvolved with the PC clock.

A bit too optimistic, I’m afraid.

You can’t rule out that the electrical activity going on inside a PC, be it EMI, RFI or ripple on the power rail, will disturb the D/A conversion.
If the crystal is a VCXO, any disturbance of the control voltage will result in a change of frequency.

With external audio hardware the clock can be derived from its own built-in crystal, which of course would be completely independent of the PC.
If you use SPDIF or adaptive mode USB, the DAC is slaved to the input.
If this input is jittery, it will affect the performance of the DAC.

With an asynchronous protocol such as asynchronous USB, there is no input jitter, as the DAC is now in control of the clock.
Even in this case, the protocol is often combined with galvanic isolation:
it is perfectly possible for stray signals to enter the DAC, e.g. through a common ground, USB power, etc.

That’s the reason why we talk “isolation” today.
Shield the DAC as much as possible from the source on both protocol and electrical level.
 
(...) If you use SPDIF or adaptive mode USB, the DAC is slaved to the input. If this input is jittery, it will affect the performance of the DAC.

With an asynchronous protocol such as asynchronous USB, there is no input jitter, as the DAC is now in control of the clock.
Even in this case, the protocol is often combined with galvanic isolation:
it is perfectly possible for stray signals to enter the DAC, e.g. through a common ground, USB power, etc.

That’s the reason why we talk “isolation” today.
Shield the DAC as much as possible from the source on both protocol and electrical level.

A very concise but not exhaustive view, IMHO. You can easily get perfect isolation using fast optically coupled devices running on batteries. I've read that people have tried it, and it did not make them independent of the computer source.
 
A bit too optimistic, I’m afraid.

You can’t rule out that the electrical activity going on inside a PC, be it EMI, RFI or ripple on the power rail, will disturb the D/A conversion.
If the crystal is a VCXO, any disturbance of the control voltage will result in a change of frequency.
Well, as I said above, that would be considered poor engineering, and of course it does happen, but it doesn't seem to me to be a good argument for a WAV/FLAC difference being audible. Lots of things are audible or not audible from poorly designed electronics, cables, speakers, you name it. That doesn't necessarily make the source defective.

If you use SPDIF or adaptive mode USB, the DAC is slaved to the input.
If this input is jittery, it will affect the performance of the DAC.
Only if the DAC is not buffered. Are we talking worst-case situations, or well-designed ones?

With an asynchronous protocol such as asynchronous USB, there is no input jitter, as the DAC is now in control of the clock.
Even in this case, the protocol is often combined with galvanic isolation:
it is perfectly possible for stray signals to enter the DAC, e.g. through a common ground, USB power, etc.

That’s the reason why we talk “isolation” today.
Shield the DAC as much as possible from the source on both protocol and electrical level.
If that argument were to be believed, then it would be impossible to listen to (or watch) digital audio/video files from the internet. There's no expectation of a steady stream there, but it works anyway. Why?

Regardless of the source, there should be sufficient buffering along the way to prevent the types of jitter you're describing.

--Bill
 
Bblue,
Nice to have new people on this thread. Could you list the system you are using? Every opinion is valuable in these matters, but IMHO as anything in audio is system dependent some knowledge of the poster preferences is fundamental.
Sure, no problem.

Most of my serious listening gear is in my home studio and associated with my DAW. The audio chain from any given digital or analog source (for playback) starts with a Crane Song Avocet studio monitor controller, driving differential balanced interconnects to a Pass Labs X250.5 amp, connected to B&W Nautilus N801 speaker monitors on Sound Anchor stands via 7.5-gauge bundles of fine-gauge litz wire. The interconnects are fine-gauge litz bundles as well. It's an extremely revealing system. The litz wiring is from BPT.

The DAW is a Win 7 PC (in a different room, so no noise) with a quad-core i7 processor running at 3.2 GHz and 12 GB of RAM. There's a Delta 1010LT audio interface card in it, as well as an M-Audio ProFire 2626 audio interface via FireWire. Neither device is used for audio D/A, just digital out to the Avocet. The 2626's A/D is for instrument and vocal tracking. Most of the time I use a master clock for timing (Apogee Big Ben or UA 2192), and for A/D recording/transcribing from an analog source (turntable or reel-to-reel) the UA does the honors and is the master clock, followed by the Big Ben as a watchdog stabilizer going to the DAW.

My primary DAW software is Reaper 4, but on occasion I use WaveLab 7, Adobe Audition CS5, or iZotope RX2 for additional digital editing depending on the need. While listening (or doing most anything in here) I also monitor the audio with a visual spectrum analyzer, XY scope and 1/3 to 1/6 octave graphing, all of which provide a lot of insight into what is being heard.

Outside the editing software and Reaper, most of my FLAC playback originates in Foobar2000, driving one of the two digital cards with S/PDIF out to the Avocet.

Everything in the studio receives conditioned, balanced power from Equi=Tech Son of Q units and is earth grounded.

That's pretty much it in a nutshell. I've been an audiophile and audio engineer for over 40 years, and am retired now. Mostly these days I'm doing analog->digital transfers from R to R masters and some vinyl. An occasional tracking/mixing or a pre-mastering (CD) project, but I don't encourage it much.

--Bill
 
Only if the DAC is not buffered. Are we talking worst-case situations, or well-designed ones?
Buffering does not help with this situation. It has to do with the architecture of our systems, where the *source* is the master. Assume the case where your DAC clock is running faster than the source. In what way would buffering help you? You would run out of data quickly and glitch. If the DAC clock is slower, you keep accumulating data. Think of me leaving my PC running with 10,000 tracks playing continuously. How much buffering do you need, and how far off are you from the source?
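The drift described here is easy to quantify. A back-of-envelope sketch with hypothetical numbers (a 100 ppm clock mismatch, a plausible crystal tolerance):

```python
# Back-of-envelope drift: hypothetical 100 ppm mismatch between a
# free-running DAC clock and the source, at 44.1 kHz.

sample_rate = 44_100
ppm_error = 100                        # assumed clock tolerance

samples_per_hour = sample_rate * 3600
excess = samples_per_hour * ppm_error / 1_000_000
print(excess)                          # → 15876.0 extra samples per hour
print(excess / sample_rate)            # → 0.36 seconds of A/V drift per hour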

Think of me playing a movie now using the same audio connection. The video is displayed on my computer monitor and the audio through said DAC. Now think through the above scenarios of the DAC clock running slower or faster. You will easily drift and lose audio/video sync.

Our devices in default mode therefore run synchronous to the source as the master. The DAC cannot make a different assumption and still have it work as the customer expects, per the above.

As Vincent mentioned, in the specific case of music, we can choose to go asynchronous using USB. In that case the DAC becomes the master and then, using buffering, is in charge of the consumption rate of bits. It would still break A/V sync, but in this scenario it is understood from the customer/application point of view that it would do that.

If that argument were to be believed, then it would be impossible to listen to (or watch) digital audio/video files from the internet. There's no expectation of a steady stream there, but it works anyway. Why?
Different problem. Buffering is indeed used to gather enough data to play. Once there, though, the same mechanism as above is used to maintain sync. If you run the DAC async from the PC, you will indeed lose A/V sync here too.

Regardless of the source, there should be sufficient buffering along the way to prevent the types of jitter you're describing.

--Bill
Buffering does solve network delivery issues. It does get rid of that type of jitter. The reason it works there is that your PC is the master, not the network server. It buffers enough and then plays. The server does not control A/V sync, as it delivers both data types simultaneously and behaves as a pure "data pump." In sharp contrast, in audio playback we use a synchronous system where the source is in control. The only way to fix this is as mentioned above: using async USB to turn this system into a mode similar to network delivery of streamed content, where the source is a data pump and nothing more.
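The two topologies can be sketched as push vs pull (function names hypothetical): in network streaming the player pulls from a buffer at its own rate, while the server is just a data pump:

```python
# Push vs pull (function names hypothetical).  Network streaming works
# because the player is the master: the server just pumps data into a
# buffer, and the player consumes it at its own clock rate.
from collections import deque

buffer = deque()

def server_push(chunks):
    """Pure data pump: deliver whenever the network allows."""
    buffer.extend(chunks)

def player_pull():
    """Master: consume at the player's own clock rate."""
    return buffer.popleft() if buffer else None

server_push([b"frame1", b"frame2"])
print(player_pull())   # → b'frame1', paced by the player, not the server
```

In the synchronous audio case the roles are reversed: the source pushes at its clock rate, and the DAC has no say in the pacing.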
 
Tim,
Surely. But currently my ears listen to systems, not to formats. As far as I know, humans do not have a digital input, and this debate is just about systems ... :D

Many people argue that there is an audible difference between the formats and vigorously deny that the issue could be in their systems, micro.

Tim
 
Many people argue that there is an audible difference between the formats and vigorously deny that the issue could be in their systems, micro.

Tim

Not in this forum anymore, I hope ... :)

Although we have to be very careful with the words, or everything will be only semantics. Reading single posts, with the answers ignoring the questions, can be misleading.
 
Many people argue that there is an audible difference between the formats and vigorously deny that the issue could be in their systems, micro.

Tim

This is indeed the problem, expressed at possibly its worst in The Absolute Sound's multi-part series (now on part 3 of 4).
 
Amir sums it up very well indeed.
You might have a buffer, but who does the buffer management?
If it is the DAC, this can only be done either by altering the speed or by setting up a control loop.
Maybe this link is of use; it demonstrates the difference between adaptive and asynchronous quite nicely:
http://www.audiomisc.co.uk/Linux/Sound3/TimeForChange.html
If you add some cleverness to the system there should be no problem in resynchronising two different clock rates, the buffer fill rate and the DAC rate. Music has this marvellous ingredient in it called silence, occurs quite a bit in fact, and all you have to do is wait till you get a bit, or something that is acoustically equivalent, and skip a few samples in whatever direction is required. No audible glitches, very straightforward to do, and achieves a genuine result with a minimum of fuss and expense.

Frank
 
If you add some cleverness to the system there should be no problem in resynchronising two different clock rates, the buffer fill rate and the DAC rate. Music has this marvellous ingredient in it called silence, occurs quite a bit in fact, and all you have to do is wait till you get a bit, or something that is acoustically equivalent, and skip a few samples in whatever direction is required. No audible glitches, very straightforward to do, and achieves a genuine result with a minimum of fuss and expense.

Frank
If the DAC clock is running faster, then what you propose doesn't solve the problem, as it needs to manufacture samples, not skip them. Even if it ran slower, you couldn't deploy what you describe. A system has to work for all input samples. It does today, even in a $10 music player. You can't, in the interest of better fidelity, hope and pray there is silence to skip over or else it breaks. There is no such thing as absolute silence anyway. There is likely to be a noise floor, and if you decimate those samples, you will create non-linear distortion. Systems that do what you say rely on resampling the signal. That solves the problem, but then your resampler had better be more precise than your bit depth.

I don't even know why we are discussing such hacks. The right solution is already invented and is called asynchronous USB.
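For illustration only, here is a toy rate-matcher based on linear interpolation. Real asynchronous sample-rate converters use high-order polyphase filters; a crude interpolator like this adds exactly the kind of distortion warned about above.

```python
# Toy rate-matcher using linear interpolation.  Real asynchronous
# sample-rate converters use high-order polyphase filters; this crude
# interpolator is only to show the mechanism, not a usable design.

def resample_linear(samples, ratio):
    """Stretch `samples` by `ratio` (output rate / input rate)."""
    n_out = round(len(samples) * ratio)
    out = []
    for i in range(n_out):
        pos = i / ratio                          # fractional input position
        j = int(pos)
        frac = pos - j
        a = samples[j]
        b = samples[min(j + 1, len(samples) - 1)]
        out.append(a + (b - a) * frac)           # interpolate neighbours
    return out

# A DAC clock 100 ppm fast needs ~1.0001x as many samples to stay in sync:
print(len(resample_linear([0.0] * 10_000, 1.0001)))   # → 10001
```

The resampler's own error budget is the catch: its distortion must sit below the least significant bit of the audio, or the "fix" is worse than the drift.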
 
(...) I don't even know why we are discussing such hacks. The right solution is already invented and is called asynchronous USB.

It seems so. But do you know of an existing implementation of this solution? One that does not depend on the activity, file type, type of music server, computer, operating system, cables or other gimmicks?
 
It seems so. But do you know of an existing implementation of this solution? One that does not depend on the activity, file type, type of music server, computer , operating system, cables or other gimmicks?
Let me put it this way. I don't believe in the inability of systems to deal with such issues :).
 
Let me put it this way. I don't believe in the inability of systems to deal with such issues :).

I cannot figure out why now, but I was taught that denying a negative is never quite the same as asserting a positive. I will have to think about your diplomatic answer! :)
 
If the DAC clock is running faster, then what you propose doesn't solve the problem, as it needs to manufacture samples, not skip them. Even if it ran slower, you couldn't deploy what you describe. A system has to work for all input samples. It does today, even in a $10 music player. You can't, in the interest of better fidelity, hope and pray there is silence to skip over or else it breaks. There is no such thing as absolute silence anyway. There is likely to be a noise floor, and if you decimate those samples, you will create non-linear distortion. Systems that do what you say rely on resampling the signal. That solves the problem, but then your resampler had better be more precise than your bit depth.

I don't even know why we are discussing such hacks. The right solution is already invented and is called asynchronous USB.
But you will always have the problem of the source coming from some device or situation where YOU won't be able to have control of the clocking. Always. If someone decides to stream a one-off audio event to a number of people to experience in real time, simultaneously, whose clock decides?

No, there will always need to be a buffering solution, and the better it is implemented, the less there will ever be a real problem. Make the buffer large enough, have a pointer into the buffer where the audio is coming in, and a pointer where it's going out to the DAC; and every now and again, when the output is acoustically inaudible, just shift that output pointer a few samples to maintain a healthy margin between the two. It's not a hack, it's an intelligent approach to a real "problem" or situation ...

Frank
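The pointer scheme Frank describes can be sketched as follows (all thresholds hypothetical). The read pointer is only nudged during near-silence; the earlier objection still stands, since even "silence" carries a noise floor that the nudge distorts.

```python
# Pointer-nudging sketch of Frank's scheme (all thresholds hypothetical).
# The read pointer is shifted only when the recent signal is near-silent
# and the buffer fill level has drifted outside a safe band.

SILENCE_THRESHOLD = 1e-4   # assumed "acoustically inaudible" peak level
TARGET_FILL = 2_048        # desired margin between write and read pointers
SLACK = 256                # allowed drift before correcting
NUDGE = 4                  # samples to skip (or repeat) per correction

def maybe_nudge(fill_level, recent_peak):
    """Pointer adjustment, in samples, for the current moment."""
    if recent_peak > SILENCE_THRESHOLD:
        return 0                         # audible signal: leave it alone
    if fill_level > TARGET_FILL + SLACK:
        return NUDGE                     # buffer too full: skip ahead
    if fill_level < TARGET_FILL - SLACK:
        return -NUDGE                    # buffer too empty: repeat samples
    return 0

print(maybe_nudge(2_500, 0.00005))   # → 4: quiet passage, buffer too full
print(maybe_nudge(2_500, 0.5))       # → 0: music playing, no correction
```

Note the failure mode the sketch exposes: if the music never drops below the threshold while the clocks keep drifting, the correction is starved and the buffer eventually underruns or overflows anyway.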
 
