Why is 24/192 a bad idea?

IIRC, it increases relative to the original signal, not in an absolute sense. The absolute value only goes down from what you see in the graph above. (Ethan, is this correct? I'd double-check now, but I'm about to get on a plane.)

I created those graphs in the simplest manner possible. Nothing changed between one file and the other except reducing the bit depth from 16 to 8, and I didn't change the FFT settings either. If this is not what you're asking, please clarify.

--Ethan
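For anyone who wants to reproduce that comparison, here is a minimal sketch of the procedure as described, assuming a full-scale 1 kHz sine in place of the actual program material and plain truncation for the 16-to-8-bit reduction (this is not Ethan's actual code or files):

import numpy as np

fs = 44100
t = np.arange(fs) / fs
x16 = np.round(np.sin(2 * np.pi * 1000 * t) * 32767) / 32768  # 16-bit grid
x8 = np.floor(x16 * 128) / 128                                # truncated to 8 bits

def spectrum_db(x):
    # Windowed FFT magnitude in dB; identical settings for both signals,
    # mirroring "I didn't change the FFT settings either."
    w = np.hanning(len(x))
    X = np.abs(np.fft.rfft(x * w)) / np.sum(w) * 2
    return 20 * np.log10(np.maximum(X, 1e-12))

s16, s8 = spectrum_db(x16), spectrum_db(x8)
print("16-bit median FFT bin: %.0f dB" % np.median(s16))
print(" 8-bit median FFT bin: %.0f dB" % np.median(s8))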
 
Agreed. It is quite tiresome (and some may say self-serving), to say the least.

Sorry, I won't do it again. But really, is this all you have to refute me with? :(

It seems to me that the people who would benefit the most from reading and understanding my book are the ones who are least likely to do that.

--Ethan
 
There are 4 files (each about 19 secs of audio); some of them are the same files. Listen to each whole file, but particularly to the rim shots, for naturalness. Best to PM me with results so as not to pollute this thread or coach others.

There's nothing to PM you about. The files sound the same to me, and for the ones that are different, those differences are about 48 dB down. What are these files supposed to show, and how do they prove that bit depth affects more than the noise floor?

--Ethan
 
I understand what you're stating. But to make a blanket statement that preservation of the entire recorded signal (in this case, specifically the ultrasonic content) is purely detrimental seems one-dimensional in approach to me.

So far, no relevant facts, no rationale, nothing. Just a personal opinion. Everybody seems to have those - even the fool on the hill.


There is a lot of literature describing ultrasonic hearing via bone conduction, etc.

There is a lot of literature describing just about every example of utterly discredited pseudoscience and quack medicine. Reliable evidence added by this statement = zero.


These effects can certainly be an integral part of the sensory experience of listening to a live instrument.

True, other than the lack of reliable evidence supporting this viewpoint. IOW, the diametrically opposing view is one of those negative hypotheses that are difficult or impossible to prove. However, the statement has little more evidence to support it than "These effects can certainly be an integral part of the sensory experience of listening to the melting of green cheese on the moon."

If we take a slightly different tack and make the statement more real-world, we have "These effects can certainly be an integral part of the sensory experience of listening to live music in a concert hall." This turns out to be a provably false statement, because of the rather dramatic loss of ultrasonic frequencies in air over relevant distances, and because concert halls are generally, by design, devoid of materials that do anything but absorb ultrasonic frequencies. If you carefully measure the actual ultrasonic output of most musical instruments, it is orders of magnitude down from the primary output of the instrument.
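A rough, order-of-magnitude check of that air-loss point; the dB/m absorption coefficients below are ballpark figures for about 20 degC and 50% relative humidity, recalled from published atmospheric-absorption tables, so treat them as assumptions rather than data:

# Approximate air absorption (dB per metre) vs. frequency; values assumed.
absorption_db_per_m = {1_000: 0.005, 10_000: 0.15, 20_000: 0.6, 40_000: 1.3}

for f, a in absorption_db_per_m.items():
    for d in (10, 30):  # plausible concert-hall listening distances, metres
        print(f"{f/1000:>4.0f} kHz over {d} m: {a*d:5.1f} dB of air absorption")
# At 40 kHz, 30 m costs ~40 dB before counting spreading loss or absorption
# at the hall's surfaces.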


In a simple standard test-tone audiometry world, ultrasonic frequencies have no place in human hearing. However, in a more encompassing view of auditory perception, ultrasonics play an important role in how we perceive the sounds around us.

<note edit to remove comment about subsonics, which is a completely different topic>

Mixing in a discussion of subsonics for which clear evidence exists looks like a bit of debating trade trickery.

With that edit, we have another statement for which a few arguments exist but for which little if any reliable evidence exists.
 
We can perform digital processing perfect to within any arbitrary specification.
Who is "we?" If you mean a PC software developer with Gigahertz processors, sure. You can do what you like. But if you are a DAC chip designer – the topic we are discussing – every gate costs money and you are not going for broke as far as digital processing resolution. More below.

It's incorrect to call oversampling or resampling 'interpolation', as no interpolation is happening.
Of course there is interpolation happening. Hell would break loose if it were not :). Even a simple Google search would show you tons of hits on the relevance of this term and the fundamental role it plays in many DACs: http://www.analog.com/static/imported-files/tutorials/MT-017.pdf

"OVESAMPLING INTERPOLATING DACS
The basic concept of an oversampling/interpolating DAC is shown in Figure 2. The N-bit words
of input data are received at a rate of fc. The digital interpolation filter is clocked at an
oversampling frequency of Kfc, and inserts the extra data points. The effects on the output
frequency spectrum are shown in Figure 2. In the Nyquist case (A), the requirements on the
analog anti-imaging filter can be quite severe. By oversampling and interpolating, the
requirements on the filter are greatly relaxed as shown in (B)."

http://en.wikipedia.org/wiki/Digital-to-analog_converter

"Oversampling DACs or interpolating DACs such as the delta-sigma DAC, use a pulse density conversion technique."

http://www.eetimes.com/electrical-e...erpolation-Filters-for-Oversampled-Audio-DACs

"Most audio DACs are oversampled devices requiring an interpolation filter prior to its noise shaped requantization. "

http://www.eetimes.com/design/signa...DSP-part-2-Interpolating-and-sigma-delta-DACs

“In a DAC-based system (such as DDS), the concept of interpolation can be used in a similar manner. This concept is common in digital audio CD players, where the basic update rate of the data from the CD is about 44 kSPS. "Zeros" are inserted into the parallel data, thereby increasing the effective update rate to four times, eight times, or 16 times the fundamental throughput rate. The 4×, 8×, or 16× data stream is passed through a digital interpolation filter, which generates the extra data points. "

http://www.essex.ac.uk/csee/research/audio_lab/malcolmspubdocs/C21 Noise shaping IOA.pdf

"The techniques describe in section 2 can also be used in digital to analog conversion. However, in this case the Nyquest samples must first be converted to a higher sampling rate using a process of interpolation, whereby noise shaping can then be used to reduce the sample amplitude resolution, hence removing redundancy, by relocating and requantisation of noise into the oversampled signal space."

So not only is the term correct, it is critical to the function of such oversampling DACs.
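For concreteness, here is a minimal sketch of the zero-stuff-then-filter structure those sources describe; K=4 and the 128-tap length mirror the AES example quoted below, while the test tone and the scipy filter design are arbitrary choices of mine:

import numpy as np
from scipy.signal import firwin, lfilter

fs, K = 48000, 4
t = np.arange(fs // 10) / fs
x = np.sin(2 * np.pi * 1000 * t)                # 1 kHz test tone at 48 kHz

up = np.zeros(len(x) * K)
up[::K] = x                                     # insert K-1 zeros between samples

h = firwin(128, cutoff=fs / 2, fs=fs * K) * K   # anti-image low-pass, gain K
y = lfilter(h, 1.0, up)                         # interpolation filter "fills in" the zeros
print(len(x), "samples in ->", len(y), "samples out at", fs * K, "Hz")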

Leaving the terminology debate aside, 'implementation constraints' itself implies that these constraints are somehow of damning practical concern. Once you're 160-180 dB down before even resorting to floating point, I should think anyone's requirements for a DAC have been met.
I think you are confusing a DAC *chip* with a DAC device or PC software implementations of resampling. They are not at all the same thing. We were discussing DAC chips. High-volume DACs sell for cents rather than dollars and face severe manufacturing cost constraints. They use hardwired multiply/add circuits purpose-built to have just the precision needed to meet the DAC's target spec and no more. Don't confuse them with what runs on a PC processor that sells for $30 to $300 just for the computational engine, or even a DSP that sells for $5. The interpolator needs to cost pennies. I can give you hundreds of references for this. Here is an example: http://www.aes.org/e-lib/browse.cfm?elib=6816

"A digital audio example will demonstrate this computational burden. Consider
interpolating 16-bit digital audio data from a sample rate of 48kHz to a rate 4x faster, or
192kHz. A digital audio quality FIR filter [interpolator] operating at 4x may have length N=128 with 14-
bit coefficient precision. The resulting computational load is 32 (16-bit x 14-bit)
multiplies/adds at 192kHz. Of course, the computation rate must double for a stereo
implementation; consequently, a digital audio quality filter interpolating by a modest 4x
needs a (16-bit x 14-bit) multiplier operating at 12.288MHz - a rate that requires a
dedicated, parallel, hardware multiplier in state-of-the-art CMOS technology."


As you see, its multiplier is fixed point, customized for 16-bit data and 14-bit coefficients. Another example: http://www.ee.cityu.edu.hk/~rcheung/papers/fpt03.pdf

” Figure 10 shows the basic implementation of the SDM. The a(i), b(i) and c(i) coefficients are all floating-point numbers. However, the several large floating-point multipliers which calculate the intermediate value between the input data and these coefficients would eventually use up all the resources of the FPGA chip. As a result, we extract the mantissa part of these coefficients and transform all the floating-point multiplication into fixed-point multiplication.”

Yes, there are DAC *devices* with external DSPs to achieve better filter response, but as a rule you can't assume super-high-resolution implementations, especially in run-of-the-mill DACs.
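As a sanity check on the arithmetic in the AES quote above (a polyphase interpolate-by-K filter computes only N/K taps per output sample; every figure here comes from the quote itself):

taps, K, f_out, channels = 128, 4, 192_000, 2
mults_per_output = taps // K                 # 32 multiply/adds per output sample
rate = mults_per_output * f_out * channels
print(rate)                                  # 12288000, i.e. the quoted 12.288 MHz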

Digital resampling is practically free of distortion (again, we're talking about 180dB down, not 60dB).
If I have a DAC that has 100 dB SNR, why would I build its interpolator to go down to 180 dB? Why wouldn’t I reduce the accuracy down to my final spec and save the gates/space on chip, together with reduced power consumption? Reality is that chip designers do exactly that.
No, it does not. Taking async-mode isochronous USB as an example, the DAC always clocks its samples with a high-accuracy sample clock. The samples are taken from a buffer (FIFO) that holds a few ms of audio. If the DAC clock is a little faster than the host clock, the fill level of the FIFO drops slowly and the host is requested to increase the number of bytes per packet to keep up. If the FIFO is slowly filling, the host is requested to slightly reduce the packet size. There is no distortion caused by this mechanism, as it's completely divorced from the conversion itself. It can go wrong if the FIFO underruns or overruns, but we all agree that is an unmistakable catastrophe.
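A toy model of that feedback loop may make it clearer; all numbers are illustrative (real USB audio feedback is signaled in fractional samples per frame, not in this simplified whole-sample form):

nominal = 48               # samples per 1 ms packet at a nominal 48 kHz
fifo, target = 480.0, 480  # ~10 ms of buffered audio
dac_rate = 48.002          # DAC clock slightly fast; samples consumed per ms

for ms in range(1000):
    # Host nudges the packet size up or down based on the fill level.
    request = nominal + (1 if fifo < target else -1 if fifo > target else 0)
    fifo += request        # host delivers the requested packet
    fifo -= dac_rate       # DAC consumes at its own fixed clock
    assert 0 < fifo < 2 * target  # under/overrun is the only failure mode
print("fill level after 1 s: %.1f samples (target %d)" % (fifo, target))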
Asynchronous USB and asynchronous sample rate converters are two different things. The latter is a signal processing device and the subject of my post there. My point was that just because said module is digital does not mean it generates perfect results.
 
M&M were using fantastically expensive equipment. There was no audible IMD. That's not a big surprise.
Say what? You can look at equipment spec and tell whether it has IMD or not? Don't look now, but folks on this forum are also into expensive equipment ;) :).

What you are advocating is being unable to sell better masters to an audiophile without the added expense of senseless recording overkill, because said audiophiles have been 'educated' that they need a gold plated Hummer with artillery mount to drive to the corner store.
I am not advocating anything. The recording world has already decided to use high sample rates and resolutions. Your beef therefore needs to be with them. Go and see if Mix magazine will take your article and "educate" that crew. Folks here have nothing to do with that. What they want is what the talent and engineer heard when the final mix was created. Nothing less, nothing more. Your pushing for 16/44.1 means getting the CD masters, which are products of the crazy loudness wars, as the Meyer and Moran paper clearly articulated with listening tests.

Nothing about the loudness wars is about gold-plated hammers. It is about a hammer with a metal head versus one with a plastic head. Folks are so wrapped around the axle on the bits and bytes that they forget the reality of how music is produced. Or worse yet, don't know how the music is produced.

Is this a benefit to the industry? to anybody?
Yes, there is benefit to going above 16/44.1. It frees the producer from having to study and understand complex signal processing topics. If the format were 20 bits and 48 kHz or better, I for one would be satisfied. But it isn't. 16 bits has too high a noise floor to be proven inaudible. Seeing how we all have systems that can do better, your insistence that we shouldn't fails on both technical and business grounds.

I actually agree with your business point to an extent, but it's due to you painting yourself into a corner (as a whole industry, not you personally Amir) and you're here continuing to paint yourself tighter and tighter into the same corner.

More than a few of your potential customers are on Apple Insider, Audio Asylum, Computer Audiophile, etc., stating they're holding out for 384 kHz because 192 kHz just has too many compromises. They're serious. These are loons of course, but this de-education is going to come back to bite somehow.
So? I am not here to fight everyone's battle. I am here to advocate what I can prove to be sufficient performance. I do that, as Bob does, with analysis of the technology at hand and of the level at which we can demonstrate transparency. I realize some people want better than that. I leave it to them to prove that point. In no way does someone wanting 384 kHz paint me into a corner. You seem to think that if someone wants a 12-cylinder car, it takes away my right to want a 6-cylinder over a 4.

So you've jumped in with a cat, a squirrel and six musical gerbils, to argue.... what exactly?
That this is not about the ultrasonics. Or sensational headlines like your IMD point. It is a simple point: high-resolution content is not subject to the loudness wars, does not need proper truncation to 16 bits, and does not require an understanding of noise shaping. And to the extent content owners are happy to send out the bits, we should take them if we so choose. If we don't, you can convert the high-res samples or buy them already converted. Everyone wins, save the person arguing otherwise :).

Might have something to do with running an educational foundation. I also really hate it when people are wrong on the internet.
Me too, but in this case you are missing the big picture: high-res adoption is occurring upstream of the consumers here. So if you want to change the world, go to the forums where those guys hang out and convince them.

That's what the Catholic Church told me, but the Buddhists got all huffy.

I started with experimental results. The experimental results say your statement is incorrect. The theory provides some explanation, but it's the experiments that talk.

You're not the only person who would love to show me up on these points--- after all, other industry insiders do have plenty of dogs in this race. Several PHOsters suggested the idea of renting out a suite or finding a local studio at SXSW next year to do informal but experimentally sound listening tests. It would be off-record, no official results, no 'lording it' over one side or the other. Personal enrichment only. Interested?

Monty
Xiph.Org
How many different ways should I say I have no dog in this hunt? I claim no benefit from ultrasonics. Or harm, for that matter. My goal in discussing formats is not just what is good enough but what can be shown to be transparent. A listening test at a show can't be conclusive, whereas proper analysis of the system and its noise levels can be, as Bob Stuart has shown. It is not as if it costs more money to build those systems. Even dirt-cheap AVRs these days support high sampling rates and bit depths.

As I have said, you are living a decade back and fighting a war that needs no fighting. Content is produced in high resolution, folks are willing to sell it, and buyers are willing to buy it. And due to mastering differences, said content is in most cases superior to the 16/44.1 versions created for CD consumption. That is all there is to it. Everything else is headline-grabbing under the guise of writing something technical. As I said, if you want to get educated, read Bob's paper. That is where the real science lies.
 
As a non-engineer audiophile, I participate in these discussions to learn, ask questions, and corroborate data. I have no dog in the digital-sampling-rate debate. My comments are directed more toward the declamatory statements made in this thread that emphatically state that anything above Redbook is a waste. As a site moderator, it is my responsibility to see that discussions are held in a civilized manner. The "fool on the hill" comment, for instance, is representative of poor forum demeanor that borders on a personal insult toward a member.

The argument put forth about attempts to play an instrument, and not getting the same feeling, were also attempts to dismiss the question raised: Does the sound of an instrument change AT THE MICROPHONE/LISTENING POSITION when the instrument is vibrated by ultrasonic sound vs. being totally inert? I'm not asking about emotional connections or group psychological factors that alter the perception of an event.

I would implore posters in this thread to remain objective in their comments, and not engage in off-track debate just so they can debate more. I'm disappointed that the professionalism of the responses in this thread is not of the standard of almost all of our topics here at WBF.

I'm not posting this in green because I'm not invoking any forum power when making these comments. I hope that the bar is raised here.

Lee
 
As for ultrasonic measuring issues, I think it's a great idea. I'm not sure where to start though. Can you provide a 24/192 sample where the ultrasonic IMD is audible so we can start working from there?
Me? No. The ultrasonic IMD thing is Monty's idea. He is the one claiming harm there. So surely he has music samples that demonstrate the same. Doesn't he? Please don't tell me he is hanging his hat on this graph from his report:

[Attached image: intermod.png]


Why on earth would studio-recorded content have two ultrasonic peaks near 0 dBFS at those frequencies? How often do you think that occurs?
 
Yes, that is how I interpreted your original message. The ultrasonic components are inaudible, do not affect the audible components, and contribute nothing to an instrument's tonal signature.
Monty
Xiph.Org
If I am hearing said instrument through the reproduction chain in the mixing room, then it had better be audible or your entire theory of IMD harm is moot. Or are we back to pro equipment being linear and perfect and our home systems not?
 
If you had anything other than white noise + a signal, that would be visible in the FFT. I suppose you could have very slowly time-variant white noise, but there's no reason to think that would happen in the example above.

Agreed, but then you're using contextual information about what he's doing, not just looking at the FFTs.

There is no correlation whatsoever for appropriately dithered quantization (whether it's white or shaped), which is the point of the dither.

Ethan didn't mention anything about dither, just said he reduced the data to 8 bits, so I was talking about quantization noise in its absence.
 
I created those graphs in the simplest manner possible. Nothing changed between one file and the other except reducing the bit depth from 16 to 8, and I didn't change the FFT settings either. If this is not what you're asking, please clarify.

I had not asked anything specifically about your graph. I was unsure about a specific aspect of truncated (i.e., non-dithered) quantization noise in general. Specifically, I know that the relative harmonic distortion increases, but I thought the absolute level of distortion did not. I was wondering if you knew, but I can go write some code now to check.
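In the spirit of "go write some code to check", here is a minimal version of that experiment; the 1 kHz tone, the truncating quantizer, and the TPDF dither are my assumptions, not Ethan's material:

import numpy as np

fs = 44100
x = 0.9 * np.sin(2 * np.pi * 1000 * np.arange(fs) / fs)

def err_dbfs(x, bits, dither=False):
    q = 2.0 ** (1 - bits)                 # LSB step size for a [-1, 1) range
    d = (np.random.rand(len(x)) - np.random.rand(len(x))) * q if dither else 0.0
    y = np.floor((x + d) / q) * q         # truncating quantizer
    e = y - x                             # total error: harmonics + noise
    return 10 * np.log10(np.mean(e ** 2))

for bits in (16, 8):
    print("%2d bits: error %6.1f dBFS, dithered %6.1f dBFS"
          % (bits, err_dbfs(x, bits), err_dbfs(x, bits, dither=True)))

An FFT of the error signal e would separate the harmonic spikes from the noise floor, if you want the distortion components specifically.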
 
Me? No. The ultrasonic IMD thing is Monty's idea. He is the one claiming harm there. So surely he has music samples that demonstrate the same. Doesn't he? Please don't tell me he is hanging his hat on this graph from his report:

Sorry for the misunderstanding; I thought you were arguing that ultrasonic IMD was good. In any case, since you're still claiming that 192/24 is audibly better, please provide a single 192/24 file that -- when downscaled to 48/16 -- sounds audibly worse. I don't care what the reason is, but I'd really like to "hear for myself" (ABX) an actual difference. If you can't provide a single sample for which 192/24 is useful, then please stop arguing.
 
Who is "we?" If you mean a PC software developer with Gigahertz processors, sure. You can do what you like. But if you are a DAC chip designer – the topic we are discussing – every gate costs money and you are not going for broke as far as digital processing resolution. More below.

So, too deep a filter is needlessly wasteful, but bloating the data format by a factor of six is perfectly acceptable?

Of course there is interpolation happening. Hell would break loose if it were not :). Even a simple Google search would show you tons of hits on the relevance of this term and the fundamental role it plays in many DACs: http://www.analog.com/static/imported-files/tutorials/MT-017.pdf

Ah, I see. Zero order hold has been relabeled 'interpolation' by some manufacturers. Terminology objection withdrawn.

I think you are confusing a DAC *chip* with a DAC device or PC software implementations of resampling.

No, I am not. However, you are right to point out these are not the same thing.

Yes, there are DAC *devices* with external DSPs to achieve better filter response, but as a rule you can't assume super-high-resolution implementations, especially in run-of-the-mill DACs.

Now we're assuming run-of-the-mill equipment again? Then one definitely needs to control/eliminate ultrasonics.

If I have a DAC that has 100 dB SNR, why would I build its interpolator to go down to 180 dB?

That's been my point from the beginning. Overkill beyond the audible into fantasy-land is senseless, costly and wasteful.

Why wouldn’t I reduce the accuracy down to my final spec and save the gates/space on chip, together with reduced power consumption? Reality is that chip designers do exactly that.

Yes! yes, yes, yes.

Asynchronous USB and asynchronous sample rate converters are two different things. The latter is a signal processing device and the subject of my post there.

Thank you for that clarification.
 
Say what? You can look at equipment spec and tell whether it has IMD or not?

You can look at the test results to see there was no audible contribution by ultrasonics, positive or negative.

The recording world has already decided to use high sample rates and resolutions.

The recording world decided that quite long ago, when there was little conclusive data and 'playing it safe' was defensible. The modern generation working in home studios has not decided on high rates--- though some aspire to them solely out of a cargo-cult mentality and the gobbledygook spouted by audiophiles.

Your beef therefore needs to be with them.

My beef is with the misinformation. I really don't care who is promulgating it.

Your pushing for 16/44.1 means getting the CD masters, which are products of the crazy loudness wars.

That's quite a leap. I see where it's coming from... and also point out it's been tried before.

Nothing about the loudness wars is about gold-plated hammers. It is about a hammer with a metal head versus one with a plastic head. Folks are so wrapped around the axle on the bits and bytes that they forget the reality of how music is produced. Or worse yet, don't know how the music is produced.

Do you really think you can successfully roll back only one targeted piece of superstition while leaving the other layers upon layers in place? I'm concerned with the very fundamentals of how digital audio works, and undoing some common, persistent myths.

Yes, there is benefit to going above 16/44.1. It frees the producer from having to study and understand complex signal processing topics.

I... just... wow.

16 bits has too high a noise floor to be proven inaudible.

The experimental record disagrees with you. Believing otherwise fervently does not make it true.

Seeing how we all have systems that can do better, your insistence that we shouldn't fails on both technical and business grounds.

I have no interest in your business plan, especially if it benefits even tangentially from misleading people.

You seem to think that if someone wants a 12-cylinder car, it takes away my right to want a 6-cylinder over a 4.

The article was titled '24-bit / 192kHz music downloads and why they don't make sense', not '24-bit / 192kHz music downloads and why they should be outlawed'.

That this is not about the ultrasonics. Or sensational headlines like your IMD point.

The IMD point is correct... and you haven't refuted it.

It is a simple point: high-resolution content is not subject to the loudness wars, does not need proper truncation to 16 bits, and does not require an understanding of noise shaping.

Starting over isn't going to help much, or for long, because you've changed nothing in the industry that led to the problem in the first place. All those producers whom you worry can't handle complex topics fundamental to their work are just going to muck it up again, right?

My goal in discussing formats is not just what is good enough but what can be shown to be transparent.

That's exactly the matter at hand. Not good enough. Transparent.

As I said, if you want to get educated, read Bob's paper. That is where the real science lies.

Ahem:
We are better than this here. Please hold yourself to a higher standard and open yourself to discussion and not confrontation or the provoking of hostility.

Does this standard apply to you as well, Amir, or not? You can't seem to resist constant baiting, and it's tiresome.

Monty
Xiph.Org
 
If I am hearing said instrument through the reproduction chain in the mixing room, then it had better be audible or your entire theory of IMD harm is moot.

Logical fallacy.

Monty
Xiph.Org
 
The argument put forth about attempts to play an instrument, and not getting the same feeling, were also attempts to dismiss the question raised

My comment was made in good faith. Please explain why you feel it was dismissive; PM if you feel that more appropriate.

The point is that the conscious perception of sound, audio, and music relies only partially on the sound itself. Perfection in the audio reproduction does not perfectly reproduce the experience. We have reached audibly perfect reproduction, and yet people are still longing for what can't be reproduced.

Does the sound of an instrument change AT THE MICROPHONE/LISTENING POSITION when the instrument is vibrated by ultrasonic sound vs. being totally inert?

The short answer is no, the long answer is yes. :)

Preface to long answer: if ultrasonics produced audible differences in a live performance, the effect would be audible and thus would be recorded whether or not the ultrasonics themselves were.

Example: very high power pipe organs can drive air into nonlinearity due to trough rarefaction. This is one of the few cases where ultrasonics could potentially cause audible products in a live performance, as a result of IMD in the air itself. However, the products _would be audible_ and thus they'd be recorded.

I'm not asking about emotional connections or group psychological factors that alter the perception of an event.

I attempted an anecdote because I've answered the question directly a few times, but it seemed there was still a disconnect. I was not attempting to dismiss your question; I was reaching for a way to figure out what you or I was missing in the exchange.

Monty
Xiph.Org
 
If you just truncate instead of rounding, in general the distortion will rise. It has to do with truncating the signal data, not the quantization noise.

I had been tossing 'rounding' and 'truncation' into the same boat, as they're equivalent save a small DC offset. Both should behave roughly identically with respect to distortion.
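A quick numeric check of that half-LSB point, on hypothetical data:

import numpy as np
x = np.random.randn(10000)
print(np.mean(np.floor(x) - x))        # truncation: mean error ~ -0.5 (the DC offset)
print(np.mean(np.floor(x + 0.5) - x))  # rounding via shifted truncation: mean ~ 0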
 
So surely he has music samples that demonstrate the same. Doesn't he? Please don't tell me he is hanging his hat on this graph from his report

The graph is an illustration of what IMD is, and how it produces both higher and lower frequency products, and that's all that it is. I never implied anywhere that it was anything but.

Also, it's not the sample that produces IMD, it's the equipment. Do I have equipment that produces audible IMD? Why yes, I do.

As we shouldn't base the format on assumed equipment limitations, and as ultrasonics contribute nothing useful anyway, the format should not include them in the event they contribute to audible IMD.
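To make the mechanism concrete, here is a minimal sketch of two ultrasonic tones landing a product in the audible band; the 24/27 kHz tone pair and the quadratic term standing in for equipment nonlinearity are illustrative assumptions, not a claim about any particular gear:

import numpy as np

fs = 192000
t = np.arange(fs) / fs
x = 0.5 * np.sin(2 * np.pi * 24000 * t) + 0.5 * np.sin(2 * np.pi * 27000 * t)
y = x + 0.1 * x ** 2                     # weak even-order nonlinearity ("equipment")

X = np.abs(np.fft.rfft(y)) / len(t) * 2  # single-sided amplitude spectrum
freqs = np.fft.rfftfreq(len(t), 1 / fs)
k = np.argmin(np.abs(freqs - 3000))
print("difference tone at 3 kHz: %.1f dBFS" % (20 * np.log10(X[k])))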

Why on earth would studio-recorded content have two ultrasonic peaks near 0 dBFS at those frequencies? How often do you think that occurs?

It would only have to occur once, wouldn't it?

Again, I'd love to hear from folks trying out the IMD test samples I posted in the article on their playback rigs. For convenience sake, they're here:
http://people.xiph.org/~xiphmont/demo/neil-young.html#toc_1ch
 
Sorry for the misunderstanding; I thought you were arguing that ultrasonic IMD was good. In any case, since you're still claiming that 192/24 is audibly better, please provide a single 192/24 file that -- when downscaled to 48/16 -- sounds audibly worse.
I have said nothing about 192/24 being better. I have said what I said on the first page of this thread:

amirm said:
The goals for setting a standard here shouldn't be what is adequate but what has some safety margin, so as to give us high confidence of inaudibility. In that regard, we also need to allow for less-than-optimal implementations. To that end, Bob Stuart has published a much more authoritative version of this report at AES. Here is an online copy: http://www.meridian-audio.com/w_paper/Coding2.PDF. These are his recommendations:

"This article has reviewed the issues surrounding the transmission of high-resolution digital audio. It is
suggested that a channel that attains audible transparency will be equivalent to a PCM channel that
uses:
· 58kHz sampling rate, and
· 14-bit representation with appropriate noise shaping, or
· 20-bit representation in a flat noise floor, i.e. a ‘rectangular’ channel"


So as we see, the CD standard somewhat misses the mark on sampling rate. And depending on whether you trust the person reducing the sample depth from 24 bits to 16 bits, we may be missing the right spec there too.

Ultimately, I think that to the extent bandwidth and storage have become immaterial for music, it is best to get access to the same bits the talent approved when the content was produced. For a high-end enthusiast, there is no need to shrink down what was recorded before delivery. Let the customer have the same bits, and then there is no argument one way or the other :).
If you want to advocate something worse than the above, then it is up to you to prove inaudibility. I am not here to do your homework for you in countering Bob's paper :).
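For readers unfamiliar with the "appropriate noise shaping" in Stuart's recommendation above, a first-order error-feedback requantizer is the textbook form. This is a sketch only; the 14-bit figure mirrors the quote, and it is in no way Meridian's actual shaper:

import numpy as np

def noise_shape(x, bits):
    q = 2.0 ** (1 - bits)       # LSB step for a [-1, 1) range
    y = np.empty_like(x)
    e = 0.0
    for n in range(len(x)):
        v = x[n] + e            # feed the previous quantization error back in
        y[n] = np.round(v / q) * q
        e = v - y[n]            # error to be pushed into the next sample
    return y

fs = 58000
x = 0.5 * np.sin(2 * np.pi * 1000 * np.arange(fs) / fs)
y = noise_shape(x, 14)
# The error spectrum of (y - x) rises ~6 dB/octave, so the low, most audible
# frequencies end up quieter than with flat 14-bit quantization.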

I don't care what the reason is, but I'd really like to "hear for myself" (ABX) an actual difference. If you can't provide a single sample for which 192/24 is useful, then please stop arguing.
How about you not arguing a point that is moot? It is not as if record labels need your permission to release music at 192 kHz/24. They already are. Or the people recording and mixing it. Or the customers who are buying it. You are whistling Dixie, as the saying goes :).

Normally I say it is good to discuss a topic from an educational point of view. But neither one of you works in the recording industry, or in the field of creating and playing digital audio (i.e., DACs and related signal processing). A better paper was written on the topic long before you all got the idea, and it does justice to the topic.
 
