Redbook 44.1 kHz standard: theoretically sufficient timbral resolution?

Please repeat your specific question in a short, self-contained post.

Thanks, Tony, for your interest in helping. Groucho has already answered my question in post #15; I am not sure if you agree with his answer, and in any case, additional observations are welcome.

In any case, here is my question in short form:

It has been suggested that any complex waveform can be synthesized from sine waves, and thus, according to the Nyquist theorem, any music signal up to 20 kHz can be perfectly represented by 44.1 kHz digital. Yet the following article by Chris Tham, "Exploring Digital Audio Myths and Reality Part 1", argues otherwise, claiming, for example, that this does not apply to sawtooth waveforms:

http://www.audioholics.com/audio-technologies/exploring-digital-audio-myths-and-reality-part-1

Acoustic instruments that produce waveforms with sawtooth-like asymmetries include the trumpet; see the graphs in section 1.6.1 of the following article:

http://www.feilding.net/sfuad/musi3012-01/html/lectures/005_sound_IV.htm

So the question is: Is the 44.1 kHz standard indeed theoretically, on a technical level, insufficient when it comes to proper timbral resolution of acoustic instruments that produce complex, non-sine waveforms, and when it comes to the correct reproduction of odd non-sine waveforms (sawtooth, square) from synthesizers?
 
Thanks, Tony, for your interest in helping. Groucho has already answered my question in post #15; I am not sure if you agree with his answer, and in any case, additional observations are welcome.

In any case, here is my question in short form:

It has been suggested that any complex waveform can be synthesized from sine waves, and thus, according to the Nyquist theorem, any music signal up to 20 kHz can be perfectly represented by 44.1 kHz digital. Yet the following article by Chris Tham, "Exploring Digital Audio Myths and Reality Part 1", argues otherwise, claiming, for example, that this does not apply to sawtooth waveforms:

http://www.audioholics.com/audio-technologies/exploring-digital-audio-myths-and-reality-part-1
Let me say that is a screwy article :). The main part of the article is fine but then in the "conclusion" part he goes to the complete opposite camp and says a bunch of stuff not supported in his article.

Acoustic instruments that produce waveforms with sawtooth-like asymmetries include the trumpet; see the graphs in section 1.6.1 of the following article:

http://www.feilding.net/sfuad/musi3012-01/html/lectures/005_sound_IV.htm

So the question is: Is the 44.1 kHz standard indeed theoretically, on a technical level, insufficient when it comes to proper timbral resolution of acoustic instruments that produce complex, non-sine waveforms, and when it comes to the correct reproduction of odd non-sine waveforms (sawtooth, square) from synthesizers?
A sawtooth waveform, just like a step function, has a sudden change in value, i.e. the output changes in zero time. As I explained, this means it has infinite bandwidth. Here is the Fourier series of a sawtooth signal:

x(t) = (2/π) · Σ (from n = 1 to ∞) of (−1)^(n+1) · sin(2πnft) / n


Don't leave the page yet :). Just focus on the funny sigma symbol. It says that we need an infinite sum of sine waves, from n = 1 to infinity, to reconstruct a perfect sawtooth. No system, digital or analog, can therefore record or capture it, even though we can synthesize it using a computer.
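To see how that plays out, here is a small pure-Python sketch (my own illustration, not from the article): it evaluates the partial Fourier sum above at a point on the ramp and shows the approximation improving as terms are added, while no finite number of terms ever reproduces the jump itself.

```python
import math

def sawtooth_partial(t, f, n_terms):
    # Partial Fourier sum of a unit sawtooth:
    # x(t) = (2/pi) * sum_{n=1..N} (-1)^(n+1) * sin(2*pi*n*f*t) / n
    return (2 / math.pi) * sum(
        (-1) ** (n + 1) * math.sin(2 * math.pi * n * f * t) / n
        for n in range(1, n_terms + 1)
    )

# The true 1 Hz sawtooth equals 2*t on -0.5 < t < 0.5, so x(0.4) = 0.8.
# A finite sum only approaches that value; the discontinuity needs every harmonic.
for n_terms in (10, 100, 1000):
    print(n_terms, round(sawtooth_partial(0.4, 1.0, n_terms), 4))
```

With 10 terms the value is visibly off; with 1000 it is close, but the vertical edge of the waveform is still not instantaneous and never will be for any finite sum.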

What happens in a digital system at 44.1 kHz is that any harmonics greater than 22.05 kHz get chopped off completely. A 19 kHz sawtooth therefore becomes a single sine wave; its next harmonic is well above 22 kHz. In an analog system there is a more gradual roll-off, so the treatment is different: it may recreate more than a pure sine wave. But your ear chops off those extra harmonics just the same, so their preservation didn't do one any good, since the ear is band limited.
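The arithmetic behind the 19 kHz example is easy to check with a throwaway Python snippet (numbers mine, purely illustrative):

```python
fs = 44100
nyquist = fs / 2          # 22050 Hz
f0 = 19000                # sawtooth fundamental

# Harmonics of a 19 kHz sawtooth sit at 19, 38, 57, ... kHz.
harmonics = [n * f0 for n in range(1, 6)]
kept = [f for f in harmonics if f < nyquist]
print(kept)   # [19000]: only the fundamental survives the band limit
```

Everything above the fundamental is beyond 22.05 kHz, so after band limiting only a pure 19 kHz sine remains.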
 
Rats. I typed a response to this but before I could hit "post," Windows decided to reboot and I lost it :(. So here is an abbreviated version.

The best way I can explain why we don't hear those square waves in digital is to talk about a fundamental fact in digital audio: to have a step response of any kind, you must have infinite bandwidth and energy. Think about it: how can you go from level X to level Y in zero time? That is what those little steps (and square waves) represent. Imagine how much energy you would need to move your arm from the down position to the up position in zero time.

In nature, this can't happen because we don't have infinite energy. In computers though, we can readily create such content by say, having zero as one audio sample and 999 in the other. So in theory this requires infinite bandwidth and energy. But every digital audio system has a filter on the output of the DAC. That filter cuts out all the energy above half the sample rate. In doing so, it guarantees that no square wave or step response can ever come out. So no way do you ever see those little steps represented as digital samples coming out of the analog output of the DAC.
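For the curious, here is a minimal pure-Python sketch of the reconstruction idea (a truncated Whittaker-Shannon sum; my own toy example, not any real DAC's filter): samples of a band-limited tone are interpolated halfway between sample instants, and the result follows the smooth waveform, not a staircase held at the previous sample.

```python
import math

fs = 44100.0
f = 1000.0   # a 1 kHz tone, far below the 22.05 kHz Nyquist limit

def x(t):
    return math.sin(2 * math.pi * f * t)

def reconstruct(t, n_taps=200):
    # Whittaker-Shannon interpolation from the samples x(k/fs),
    # truncated to a finite window of taps around t.
    k0 = int(round(t * fs))
    total = 0.0
    for k in range(k0 - n_taps, k0 + n_taps + 1):
        u = t * fs - k
        sinc = 1.0 if u == 0 else math.sin(math.pi * u) / (math.pi * u)
        total += x(k / fs) * sinc
    return total

# Halfway between two samples the reconstruction tracks the smooth sine;
# a stair-step (sample-and-hold) output would sit at the previous sample value.
t_mid = 100.5 / fs
err = abs(reconstruct(t_mid) - x(t_mid))
print(err)
```

The residual error here comes only from truncating the sinc sum; the reconstructed value between samples is essentially the original sine, with no step in sight.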

Now, there are 1-bit DACs and systems like DSD which attempt to actually produce such things. It is simpler to create a DAC out of binary values of zero and one. In that case, you are creating little steps and as a result, lots and lots of distortion that we call "noise." But these DACs have a solution to that in that they push that noise through a process called noise-shaping into ultrasonic audio range. This is what I think Orb is talking about.

Sorry, Amir, that you lost your original reply, and I appreciate that you took the time to repeat your post. If I understand you correctly,

a) stair-stepping would require infinite bandwidth and energy
b) the filter at the output of the DAC cuts out all the energy above half the sample rate

As a result, there were never stairsteps to begin with. This would agree with what Chris Montgomery is saying (see the article and video linked to in my OP).
 
A sawtooth waveform, just like a step function, has a sudden change in value, i.e. the output changes in zero time. As I explained, this means it has infinite bandwidth. Here is the Fourier series of a sawtooth signal:

x(t) = (2/π) · Σ (from n = 1 to ∞) of (−1)^(n+1) · sin(2πnft) / n


Don't leave the page yet :). Just focus on the funny sigma symbol. It says that we need an infinite sum of sine waves, from n = 1 to infinity, to reconstruct a perfect sawtooth. No system, digital or analog, can therefore record or capture it, even though we can synthesize it using a computer.

What happens in a digital system at 44.1 kHz is that any harmonics greater than 22.05 kHz get chopped off completely. A 19 kHz sawtooth therefore becomes a single sine wave; its next harmonic is well above 22 kHz. In an analog system there is a more gradual roll-off, so the treatment is different: it may recreate more than a pure sine wave. But your ear chops off those extra harmonics just the same, so their preservation didn't do one any good, since the ear is band limited.

Thanks, Amir, very helpful. If I understand correctly, what you are saying is broadly in agreement with Groucho's post #15, even though you provide additional observations, which I find useful as well (and yes, including the mathematical formula, which you explained very well for the purpose of the argument!).
 
Not forgotten, just excluded. A DAC without a competently engineered reconstruction filter is broken.
LOL, that may upset a lot of people, including actual engineers who developed NOS products, such as Thorsten Loesch, and there was a world before oversampling digital-filter DAC designs - yes, I appreciate the world has moved on, along with the challenges of maintaining a 24-bit linear-resolution-accurate design.
Anyway I do not think that is the reason it is ignored by Monty and others.
Going by that logic we should exclude a lot of digital music over the years, as it was not competently mastered with TPDF dither, or I assume you also exclude DSD then, as it does not have a comparable filter :)
Plenty of other real world examples that push boundaries, but yeah there is a lot of real negatives regarding NOS DACs such as IMD and the images.
I would love to know whether you consider correcting sin(x)/x roll-off to mean it is broken, or whether the interpolation/reconstruction filter is broken for being more accurate in the time domain than in the frequency domain (or vice versa), or what the acceptable strength of its alias rejection is :)
Last point is light hearted cheekiness.
Cheers
Orb
 
Sorry, Amir, that you lost your original reply, and I appreciate that you took the time to repeat your post. If I understand you correctly,

a) stair-stepping would require infinite bandwidth and energy
b) the filter at the output of the DAC cuts out all the energy above half the sample rate

As a result, there were never stairsteps to begin with. This would agree with what Chris Montgomery is saying (see the article and video linked to in my OP).
Yes but Monty mentions there are no stair-steps and it is a myth....
It is not, for the reasons I mentioned before: the stepping does exist in some situations, in the context of the transfer function of, say, the ADC (but it would not be visible at the analogue output of a normal DAC).

Also, this holds only as long as you use dither (which is an additional function applied to the original signal); the reason is that a very low-level 16-bit sine wave (such as Stereophile shows) looks stepped without dither (to emphasise: dither should always be present in the context you're coming from).
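To make the dither point concrete, here is a small pure-Python experiment (my own toy numbers: a 0.6 LSB sine tone and 2 LSB peak-to-peak TPDF dither): without dither, quantisation turns the quiet tone into stepped pulses with strong odd-harmonic distortion; with dither, that distortion collapses into benign noise.

```python
import math, random

random.seed(0)
N = 4096        # analysis length; the tone sits on an exact DFT bin
k_tone = 64
amp = 0.6       # tone amplitude in 16-bit LSBs: a very quiet signal

def bin_mag(x, k):
    # Magnitude of DFT bin k, normalised so a sine of amplitude A reads A/2.
    re = sum(x[n] * math.cos(2 * math.pi * k * n / N) for n in range(N))
    im = sum(x[n] * math.sin(2 * math.pi * k * n / N) for n in range(N))
    return math.hypot(re, im) / N

def quantize(with_dither):
    out = []
    for n in range(N):
        s = amp * math.sin(2 * math.pi * k_tone * n / N)
        if with_dither:
            s += random.random() - random.random()  # TPDF dither, 2 LSB pk-pk
        out.append(round(s))                        # quantise to whole LSBs
    return out

plain = quantize(False)
dithered = quantize(True)
# 3rd harmonic of the tone: large without dither, buried in noise with it.
print(bin_mag(plain, 3 * k_tone), bin_mag(dithered, 3 * k_tone))
```

The undithered output is literally a stepped pulse train, which is where the "stair-stepped" scope pictures of low-level 16-bit sines come from.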
All of these should be put into the narrative and context that looks at it from a complete chain of ADC-Digital Audio Workstation-master file recording-format-DAC.
Cheers
Orb
 
Thanks, Tony, for your interest in helping. Groucho has already answered my question in post #15; I am not sure if you agree with his answer, and in any case, additional observations are welcome.

In any case, here is my question in short form:

It has been suggested that any complex waveform can be synthesized from sine waves, and thus, according to the Nyquist theorem, any music signal up to 20 kHz can be perfectly represented by 44.1 kHz digital. Yet the following article by Chris Tham, "Exploring Digital Audio Myths and Reality Part 1", argues otherwise, claiming, for example, that this does not apply to sawtooth waveforms:

http://www.audioholics.com/audio-technologies/exploring-digital-audio-myths-and-reality-part-1

Acoustic instruments that produce waveforms with sawtooth-like asymmetries include the trumpet; see the graphs in section 1.6.1 of the following article:

http://www.feilding.net/sfuad/musi3012-01/html/lectures/005_sound_IV.htm

So the question is: Is the 44.1 kHz standard indeed theoretically, on a technical level, insufficient when it comes to proper timbral resolution of acoustic instruments that produce complex, non-sine waveforms, and when it comes to the correct reproduction of odd non-sine waveforms (sawtooth, square) from synthesizers?

There are a few problems with this statement. You will have to deal with the details.

Theoretically, a 44100 Hz digital sampling system can reproduce, perfectly, any signal from whatever source that is band limited to the range of DC up to, but not including, 22050 Hz. However, and this is also theoretical, for a signal to be band limited (to any finite bandwidth) it must either be the zero signal, or it must have an infinite temporal extent, i.e. it has been going on since before the birth of the universe and will never cease. So much for theory.

In practice, a signal that is, for all practical purposes, band limited to frequencies below (say) 20000 Hz can be reproduced (for all practical purposes) by sampling the signal at 44100 Hz. For example, just last week I took an 88200/24 recording (of an orchestra including trumpets, cymbals and violins) and band limited it so that there was no energy above 18 kHz above -150 dBFS. This became my reference signal. I then converted the reference signal to 44100 Hz and resampled it back to 88200 Hz. The result was a signal that agreed with the reference, at every sample, subject to a worst-case error of 2 least significant bits out of 24 bits. (This was due to roundoff error in converting the high-precision values to 24 bits.) The average difference between the two signals was -150 dBFS. If I had wanted, I could have changed the parameters of my experiment and gotten better results. I did this experiment using commercial software (iZotope RX4 Advanced) and the preferred settings that I normally use when converting a high-resolution studio master back to the 44.1 kHz sampling rate for release on CD.
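For readers who want to try something similar without commercial software, here is a crude pure-Python version of the round trip (my sketch, nowhere near iZotope's quality; the tone, lengths and filter are arbitrary illustrative choices): a band-limited tone at 88.2 kHz is decimated to 44.1 kHz and interpolated back, and the residual error is measured.

```python
import math

fs_hi = 88200
f = 10000.0    # test tone, safely below the 22.05 kHz limit of 44.1 kHz
N = 2048

hi = [math.sin(2 * math.pi * f * n / fs_hi) for n in range(N)]
lo = hi[::2]   # downsample to 44100: the tone is already band-limited,
               # so simply keeping every other sample is alias-free here

def upsample_2x(lo, taps=100):
    # Reinsert the missing samples by Hann-windowed sinc interpolation.
    out = []
    for n in range(2 * len(lo)):
        if n % 2 == 0:
            out.append(lo[n // 2])
            continue
        t = n / 2.0
        acc = 0.0
        for k in range(int(t) - taps, int(t) + taps + 1):
            if 0 <= k < len(lo):
                u = t - k
                w = 0.5 + 0.5 * math.cos(math.pi * u / taps)   # Hann window
                acc += lo[k] * math.sin(math.pi * u) / (math.pi * u) * w
        out.append(acc)
    return out

back = upsample_2x(lo)
# Compare the round-tripped audio to the original, ignoring the edges.
err = max(abs(a - b) for a, b in zip(hi[200:-200], back[200:-200]))
print(err)    # small residual from this crude interpolator
```

A short windowed sinc like this lands far above the -150 dBFS residual reported above, but it shows the principle: nothing audible is lost going through 44.1 kHz when the content is band limited.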

I could not hear a difference between the reference file and the file that had been passed through the 44.1 kHz sampling rate. I took the error signal (difference signal) and listened to it. It was still silent after I boosted it by 60 dB. Only after I boosted it by 100 dB was there an obvious difference in the form of white noise. So yes, in practice, taking audio signals that have no energy above 18 kHz and converting them to 44.1 kHz is possible without any audible loss (at least to most people's ears).

What's the issue here?

1. Does real music have energy above 20 kHz, and so is changed when filtered arbitrarily to reduce its bandwidth?

Yes: http://www.cco.caltech.edu/~boyk/spectra/spectra.htm


2. When high frequencies are filtered out of recordings, can people hear a difference?
Yes. See Amir's results on a variety of double-blind tests in this and other forums, as well as peer-reviewed experiments conducted by others.


So now, obviously, the solution is to remove all this evil filtering that corrupts the signal. This is easily done. For example, with most audio editors one can generate a "digital" square wave that does not conform to the limitations of the sampling theorem. One can generate a sweep tone of these signals (all samples either +v or -v, the equivalent of sampling an ideal square wave). If you then play back this test signal you will hear the sweep tone, as expected, and through most of the frequency range you will hear the characteristic bite of a square wave rather than a sine wave. However, it doesn't take much attention to notice something else. Along with the nicely increasing pitch of the square wave there are audible spurious tones, some going in the reverse direction from high to low. These are often called "birdies". This distortion is obvious with these test tones, but it exists just as well when processing music without the required filtering. Here the distortion is horrible, far worse than anything that happens due to filtering. The only way to escape the horrors of aliasing and the subtle losses of filtering is to increase the sampling rate.
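The "birdies" are easy to predict on paper. A tiny Python sketch (my illustration) folds the 3rd harmonic of a naive square-wave sweep around Nyquist:

```python
fs = 44100

def alias(f_component, fs=fs):
    # Where a component at f_component actually lands after sampling at fs:
    # frequencies fold ("mirror") around multiples of fs/2.
    f_component %= fs
    return fs - f_component if f_component > fs / 2 else f_component

# Track the 3rd harmonic of a naive square-wave sweep. As the fundamental
# rises past 7350 Hz the harmonic crosses Nyquist and folds back down,
# an alias sweeping in the reverse direction, i.e. a "birdie".
for f0 in (7000, 7200, 7400, 7600):
    print(f0, alias(3 * f0))
```

While the fundamental sweeps steadily upward, the folded harmonic reverses direction, exactly the descending spurious tones described above.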
 
I think this is a "PR talking point" and of no practical consequence. I can download high-resolution audio and convert it to 44.1 kHz myself and compare. If there is audible IM, I should be able to hear it. And if so, I can then use the resampled version, or buy new equipment that doesn't have that issue. So there is no situation where giving me 44.1 kHz is the superior option. It is a degraded version of the file with no practical benefit.

Of course there's a benefit; access to the millions of files available only at RB resolution. So the question becomes not "is there a sonic benefit to RB?" but "is there a downside?" I know you can hear something, but unless I missed a very important post somewhere, we still don't know what you hear and whether or not it is beneficial or detrimental. Not that it makes much difference. There is still a ton of music, the overwhelming majority of music, for which the best resolution available is RB. Let's lobby the recording industry to make the most of RB through tasteful re-mastering first. Then, if the audible difference between RB and hi-res is proven to be beneficial, we can take that step.

Tim
 
Of course there's a benefit; access to the millions of files available only at RB resolution. So the question becomes not "is there a sonic benefit to RB?" but "is there a downside?" I know you can hear something, but unless I missed a very important post somewhere, we still don't know what you hear and whether or not it is beneficial or detrimental. Not that it makes much difference. There is still a ton of music, the overwhelming majority of music, for which the best resolution available is RB. Let's lobby the recording industry to make the most of RB through tasteful re-mastering first. Then, if the audible difference between RB and hi-res is proven to be beneficial, we can take that step.

Tim
As a format, redbook audio is on its way out. It just doesn't know it yet :). MP3/AAC are forcefully taking away mass consumption of CDs. Once that happens, audiophiles will not be enough to carry the format, since it requires onerous inventory management for retailers. The number of times I go looking for a CD but can't find anything but MP3 on Amazon is growing alarmingly fast for the type of music I consume. I suspect when the turning point comes, there will be a quite substantial drop in CD availability of new music.

The other problem with CD is lack of immediate consumption. I just downloaded a bunch of tracks from HDTracks. Loved being able to have them in just a few minutes. Can't do that with CD. So in some sense, we also need to let go of the CD as a format. It lacks convenience and in theory (with respect to bit depth without noise shaping) is not transparent.
 
There are a few problems with this statement. You will have to deal with the details.

... with most audio editors one can generate a "digital" square wave that does not conform to the limitations of the sampling theorem. One can generate a sweep tone of these signals (all samples either +v or -v, the equivalent of sampling an ideal square wave). If you then play back this test signal you will hear the sweep tone, as expected, and through most of the frequency range you will hear the characteristic bite of a square wave rather than a sine wave. However, it doesn't take much attention to notice something else. Along with the nicely increasing pitch of the square wave there are audible spurious tones, some going in the reverse direction from high to low. These are often called "birdies". This distortion is obvious with these test tones, but it exists just as well when processing music without the required filtering. Here the distortion is horrible, far worse than anything that happens due to filtering. The only way to escape the horrors of aliasing and the subtle losses of filtering is to increase the sampling rate.

"There are a few problems with this statement."
You are setting up a strawman argument. The synthesized "digital" square wave contains frequencies above Nyquist. That's an invalid condition. In the real world, the above-Nyquist frequencies would have been rejected (filtered) before A-D conversion. I'll grant that in the real world a synthesized musical note may be recorded directly as a bitstream, but again if it's competently generated it will contain no components above Nyquist.

"An anti-alias filter is not just a good idea, it’s mathematically necessary." - James D. (JJ) Johnston.
 
As a format, redbook audio is on its way out. It just doesn't know it yet :). MP3/AAC are forcefully taking away mass consumption of CDs. Once that happens, audiophiles will not be enough to carry the format, since it requires onerous inventory management for retailers. The number of times I go looking for a CD but can't find anything but MP3 on Amazon is growing alarmingly fast for the type of music I consume. I suspect when the turning point comes, there will be a quite substantial drop in CD availability of new music.

The other problem with CD is lack of immediate consumption. I just downloaded a bunch of tracks from HDTracks. Loved being able to have them in just a few minutes. Can't do that with CD. So in some sense, we also need to let go of the CD as a format. It lacks convenience and in theory (with respect to bit depth without noise shaping) is not transparent.

Nothing to argue with there, but it is a reason to offer a downloadable version of the massive library of 16/44.1 that is currently available, not a reason to go back to the masters and create a whole new library of hi-res. There might be a reason, but we still don't know, do we? Do you know that what you heard is additional resolution, not something else?

Tim
 
Nothing to argue with there, but it is a reason to offer a downloadable version of the massive library of 16/44.1 that is currently available, not a reason to go back to the masters and create a whole new library of hi-res. There might be a reason, but we still don't know, do we? Do you know that what you heard is additional resolution, not something else?

Tim

Well, one reason is that there is greater flexibility with filters beyond CD, as CD-spec filters are the most compromised in terms of trying to be ideal in both the time and frequency domains when limited to around 20 kHz.
Going to 96 kHz, or even a bit below, means one can implement a good minimum-phase/slow roll-off filter.
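One way to quantify that flexibility is the textbook Kaiser-window estimate of FIR filter length (a rough sizing rule only, from standard DSP references; the target figures below are my own illustrative choices):

```python
import math

def kaiser_fir_length(atten_db, f_pass, f_stop, fs):
    # Standard Kaiser-window length estimate for a lowpass FIR:
    #   N ~ (A - 7.95) / (2.285 * delta_omega), delta_omega in rad/sample
    delta_omega = 2 * math.pi * (f_stop - f_pass) / fs
    return math.ceil((atten_db - 7.95) / (2.285 * delta_omega))

# 100 dB of stopband rejection, passband flat to 20 kHz:
n_cd = kaiser_fir_length(100, 20000, 22050, 44100)  # brick wall forced by CD
n_96 = kaiser_fir_length(100, 20000, 48000, 96000)  # relaxed filter at 96 kHz
print(n_cd, n_96)
```

The narrow 20 to 22.05 kHz transition band at CD rates demands a far longer (and hence more time-smearing) filter than the leisurely transition a 96 kHz rate allows.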
Another aspect is just how much you agree or not with Bob Stuart's and Craven's POV on time smear/transient response in relation to the sampling rate - let's put aside the debate over whether it is proven audible and consider just the technical/psychoacoustic perspective they present.
Cheers
Orb
 
LOL, that may upset a lot of people, including actual engineers who developed NOS products, such as Thorsten Loesch, and there was a world before oversampling digital-filter DAC designs - yes, I appreciate the world has moved on, along with the challenges of maintaining a 24-bit linear-resolution-accurate design.

I suspect they know perfectly well that a reconstruction filter is mandatory, they just leave it out to cater to those audiophiles who think that the filter somehow removes music instead of actually adding spurious signals.

Anyway I do not think that is the reason it is ignored by Monty and others.
Going by that logic we should exclude a lot of digital music over the years, as it was not competently mastered with TPDF dither, or I assume you also exclude DSD then, as it does not have a comparable filter :)

DSD requires a comparable filter. It has to happen somewhere between the bitstream and the ear.

Plenty of other real world examples that push boundaries, but yeah there is a lot of real negatives regarding NOS DACs such as IMD and the images.
I would love to know whether you consider correcting sin(x)/x roll-off to mean it is broken, or whether the interpolation/reconstruction filter is broken for being more accurate in the time domain than in the frequency domain (or vice versa), or what the acceptable strength of its alias rejection is :)
Last point is light hearted cheekiness.
Cheers
Orb

You pays yer money and you takes yer choice... personally, I'll accept some rolloff because my ears stop working before the rolloff becomes significant. I should run some experiments with my teenage daughters, who are bothered by HF squeals from old TV sets and wall-wart power supplies.
 
I suspect they know perfectly well that a reconstruction filter is mandatory, they just leave it out to cater to those audiophiles who think that the filter somehow removes music instead of actually adding spurious signals.



DSD requires a comparable filter. It has to happen somewhere between the bitstream and the ear.



You pays yer money and you takes yer choice... personally, I'll accept some rolloff because my ears stop working before the rolloff becomes significant. I should run some experiments with my teenage daughters, who are bothered by HF squeals from old TV sets and wall-wart power supplies.

Don, how do you do a reconstruction filter for the TDA1541 when it is not oversampling?
You can add oversampling before it with another chip, but the TDA1541 was also designed to be usable without it, so this goes beyond audiophiles' thoughts on filters.
DSD, well, actually it is not a reconstruction filter in the sense of performing the exact same function, because it is a low-pass filter specifically for the generated noise; so the filter you implement for DSD is different to that of PCM. There is no getting around that, given your earlier statement that any DAC without a reconstruction filter is broken :)
Yeah, with you on the roll-off; for CD I actually prefer a minimum-phase/slow roll-off implementation, but then my point is that going with a higher sampling rate resolves the roll-off issue.

If you do a test, you might consider high-energy fast-transient sounds such as drums/cymbals/etc.; this is where I notice it primarily, but also, strangely enough, with sibilance (depending on how it is recorded), which is much lower in the frequency range *shrug*. Just mentioning these two example traits, but I would say it goes further and subtly influences instruments and voices as well, from a sound-envelope perspective (also when considered in the harmonic-partials view).
Cheers
Orb
 
"There are a few problems with this statement."
You are setting up a strawman argument. The synthesized "digital" square wave contains frequencies above Nyquist. That's an invalid condition. In the real world, the above-Nyquist frequencies would have been rejected (filtered) before A-D conversion. I'll grant that in the real world a synthesized musical note may be recorded directly as a bitstream, but again if it's competently generated it will contain no components above Nyquist.

"An anti-alias filter is not just a good idea, it’s mathematically necessary." - James D. (JJ) Johnston.

I was setting up a strawman to make a point, namely that there are no band limited square waves. Such are impossible. In theory there are no band limited signals at all, as I pointed out in the earlier part of my post. In the later part of my post, I gave an example that is often used by the NOS idiots, and showed how theory and practice not only agree, but do so in a totally obvious fashion.

When I got my Apple II back around 1980, I programmed it to synthesize various musical notes. Unfortunately, Wozniak's software generated pitches with periods that were multiples of several microseconds, according to instruction loop counts. Anyone who was not totally tone deaf could hear right off that many notes in a scale were grossly out of tune. So then I wrote some code to generate better approximations, using a sampled-data scheme. My interest was in generating various equal and unequal tempered scales to listen to them. Unfortunately, the sampled-data scheme didn't work, because small differences in pitch resulted in aliases (due to the 1-bit "D/A", which was a flip-flop connected directly to the speaker). These changed character due to birdies when I made a small change in pitch. Eventually I wrote some serious code that was able to sample the synthesized square waves at a 1 MHz rate, and this made the aliases inaudible. Later, some additional code allowed for generating the sum of two square waves at different pitches by using pulse density modulation, but I won't say what this was used for. You can guess. ;)
 
Don, how do you do a reconstruction filter for the TDA1541 when it is not oversampling? ...

The same way it was done back in the days of 1x DACs.

DSD, well, actually it is not a reconstruction filter in the sense of performing the exact same function, because it is a low-pass filter specifically for the generated noise; so the filter you implement for DSD is different to that of PCM. There is no getting around that, given your earlier statement that any DAC without a reconstruction filter is broken :) ...

A reconstruction filter is a low-pass filter.

... my point is going with higher sampling rate the roll-off issue is resolved.

No argument there. Now persuade the NOS enthusiasts. :)
 
I was setting up a strawman to make a point, namely that there are no band limited square waves. Such are impossible. In theory there are no band limited signals at all, as I pointed out in the earlier part of my post. In the later part of my post, I gave an example that is often used by the NOS idiots, and showed how theory and practice not only agree, but do so in a totally obvious fashion.

My mistake. I read it as validating the "NOS idiots" arguments, not vice versa. One experiment I ought to perform is to D-A a non-limited square wave and show that the output contains significant energy at the filter cutoff. (I don't like calling it "ringing", even though it looks like it on a scope.) Then repeat the exercise with real-world musical sources and see how much energy there is.

When I got my Apple II back around 1980 I programmed it to synthesize various musical notes. ...

I did something similar, reprogramming the interrupt/memory-refresh timer chip on a PC-AT to generate a supersonic tone, then PWMing the duty cycle to generate audio (Class D). It was used in software which read strings of numbers out loud. The "sampling rate" was about 16 kHz, with 6 bits of resolution; quite adequate for the speaker in a typical PC-AT. Maximising loudness was crucial; I learnt a lot about digital compression/limiting/EQ. Since the data was played from RAM, I also learnt a lot about audio data compression on a low CPU-cycle budget. I ended up using a NICAM-like scheme to compress 6 bits to 4.
 
Thank you, Tony, for your reply to my question.

What's the issue here?

1. Does real music have energy above 20 kHz, and so is changed when filtered arbitrarily to reduce its bandwidth?

Yes: http://www.cco.caltech.edu/~boyk/spectra/spectra.htm

Of course there is acoustic energy from instruments above 20 kHz, just like there is a UV component in sunlight. The question is how relevant, if at all, it is for our auditory perception. No systematic evidence indicates that we can hear beyond 20 kHz, just as it is established that we cannot see UV light (unlike bees, which can). We do get sunburns from UV light, but obviously this has nothing to do with our eyes.

From the link you posted, in italics:

Oohashi and his colleagues recorded gamelan to a bandwidth of 60 kHz, and played back the recording to listeners through a speaker system with an extra tweeter for the range above 26 kHz. This tweeter was driven by its own amplifier, and the 26 kHz electronic crossover before the amplifier used steep filters. The experimenters found that the listeners' EEGs and their subjective ratings of the sound quality were affected by whether this "ultra-tweeter" was on or off, even though the listeners explicitly denied that the reproduced sound was affected by the ultra-tweeter, and also denied, when presented with the ultrasonics alone, that any sound at all was being played.

Objection: Here it is not clear that the signal through the extra tweeter did not alter the signal in the audible range at all, whether through electronic feedback into the chain or through acoustic vibrations that could affect a) the electronics or b) the speaker system that reproduced below 26 kHz. That listeners explicitly denied that it had an effect does not necessarily make it so. A lack of interference with the audio signal below 26 kHz would have to be carefully established before scientifically valid conclusions can be drawn.

In a paper published in Science, Lenhardt et al. report that "bone-conducted ultrasonic hearing has been found capable of supporting frequency discrimination and speech detection in normal, older hearing-impaired, and profoundly deaf human subjects." [5] They speculate that the saccule may be involved, this being "an otolithic organ that responds to acceleration and gravity and may be responsible for transduction of sound after destruction of the cochlea," and they further point out that the saccule has neural cross-connections with the cochlea. [6]

Objection: What is not mentioned is that the ultrasound has to be directly fed to the skull, thus with much higher energy than normal, which is irrelevant for regular hearing.

I am open to more systematic results from studies, but so far I see nothing conclusive, and what we have so far is so sporadic that it amounts to little more than anecdotal evidence. Extraordinary claims require extraordinary evidence.

2. When high frequencies are filtered out of recordings, can people hear a difference?
Yes. See Amir's results from a variety of double-blind tests in this and other forums.

There is no direct evidence that this has to do with the high-frequency content. It could be other technical issues.
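For anyone who wants to run this kind of comparison themselves, here is a minimal numpy sketch of an ideal 20 kHz "brickwall" low-pass done by zeroing FFT bins (a real resampler would use a windowed FIR filter, but the principle is the same). The 96 kHz rate, the 1 kHz and 30 kHz test tones, and the function name are my own illustrative choices, not anything from the tests discussed above:

```python
import numpy as np

def brickwall_lowpass(x, fs, cutoff_hz):
    """Zero out all FFT bins above cutoff_hz (ideal low-pass, for illustration only)."""
    X = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    X[freqs > cutoff_hz] = 0.0
    return np.fft.irfft(X, n=len(x))

fs = 96000
t = np.arange(fs) / fs          # 1 second of samples
# Test signal: a 1 kHz (audible) plus a 30 kHz (ultrasonic) component
x = np.sin(2 * np.pi * 1000 * t) + 0.5 * np.sin(2 * np.pi * 30000 * t)
y = brickwall_lowpass(x, fs, 20000)
# The 30 kHz component is removed; the 1 kHz component passes through intact.
```

Comparing `x` and `y` blind on wideband playback gear is exactly the kind of test being debated here.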
 
The same way it was done back in the days of 1x DACs.



A reconstruction filter is a low-pass filter.



No argument there. Now persuade the NOS enthusiasts. :)
Could you please be more specific with that explanation of the 1x DAC filter, because it is not really a reconstruction filter as implemented in interpolating/oversampling DACs.

Actually, a reconstruction filter is designed to remove the images and has an influence on phase, timing and frequency response; the DSD low-pass filter is there to remove noise and does not have the same issues (it has different ones though, IMO).
So it is not actually a reconstruction filter - yeah, I know I am being an ass, but that was due to you saying that if a DAC does not have a reconstruction filter it is broken :)
Been discussed plenty of times though here and at Computer Audiophile by others very experienced in implementing both so happy to change the subject.

And like you I would never use a NOS DAC, as it goes against many of my views, but such products exist, and as JA has said in the past about a couple of them: do they sound good to listeners despite their measurements, or because of them?
A question no one can answer without very broad assumptions.
Cheers
Orb
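To make the "remove the images" point concrete, here is a numpy-only sketch (a frequency-domain brickwall standing in for a practical FIR interpolator) of what happens when a 440 Hz tone sampled at 8 kHz is zero-stuffed to 32 kHz: images of the tone appear at k*fs ± 440 Hz, and the reconstruction filter — nothing more than a low-pass at the original Nyquist — removes them. The specific rates and tone are my own example values:

```python
import numpy as np

fs, L, f0 = 8000, 4, 440           # original rate, upsampling factor, test tone
x = np.sin(2 * np.pi * f0 * np.arange(fs) / fs)   # 1 second at 8 kHz

# 1. Zero-stuffing: insert L-1 zeros between samples.  The spectrum now
#    repeats, so "images" of the tone appear at fs-f0, fs+f0, 2*fs-f0, ...
up = np.zeros(len(x) * L)
up[::L] = x
spec = np.abs(np.fft.rfft(up))     # 1 Hz bins: spec[7560] and spec[8440] are images

# 2. The reconstruction filter is simply a low-pass at the original Nyquist:
X = np.fft.rfft(up)
freqs = np.fft.rfftfreq(len(up), d=1.0 / (fs * L))
X[freqs > fs / 2] = 0.0
y = L * np.fft.irfft(X, n=len(up))  # factor L restores the amplitude

# y is now the 440 Hz sine at 32 kHz, with the images gone.
```

A NOS DAC skips step 2 and leaves those images to the analog stage (or the tweeter), which is the crux of the disagreement above.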
 
I was setting up a strawman to make a point, namely that there are no band-limited square waves. Such a signal is impossible. In theory there are no band-limited signals at all, as I pointed out in the earlier part of my post. In the later part of my post, I gave an example that is often used by the NOS crowd, and showed how theory and practice not only agree, but do so in a totally obvious fashion.

When I got my Apple II back around 1980, I programmed it to synthesize various musical notes. Unfortunately, Wozniak's software generated pitches with periods that were multiples of several microseconds, as dictated by instruction loop counts. Anyone who was not totally tone deaf could hear right off that many notes in a scale were grossly out of tune. So I wrote some code to generate better approximations, using a sampled data scheme. My interest was in generating various equal and unequal tempered scales to listen to them. Unfortunately, the sampled data scheme didn't work, because small differences in pitch resulted in aliases (due to the 1-bit "D/A", which was a flip-flop connected directly to the speaker). These changed character due to birdies when I made a small change in pitch. Eventually I wrote some serious code that was able to sample the synthesized square waves at a 1 MHz rate, and this made the aliases inaudible. Later, some additional code allowed for generating the sum of two square waves at different pitches by using pulse density modulation, but I won't say what this was used for. You can guess. ;)
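Those "birdies" are easy to reproduce in a few lines of numpy (a sketch of the effect, obviously not the original 6502 code): naively sampling an ideal square wave folds every harmonic above Nyquist back into the audio band at inharmonic frequencies. The 997 Hz fundamental is my own example value, chosen so the folded components land on clean FFT bins:

```python
import numpy as np

fs, f0 = 44100, 997               # sample rate, square-wave fundamental (Hz)
t = np.arange(fs) / fs            # 1 second of samples -> 1 Hz FFT bins

# Naive sampling of an ideal (non-band-limited) square wave:
square = np.where(np.sin(2 * np.pi * f0 * t) >= 0, 1.0, -1.0)
spec = np.abs(np.fft.rfft(square))

# The 23rd harmonic (22931 Hz) lies above Nyquist (22050 Hz) and folds
# back to 44100 - 22931 = 21169 Hz, which is NOT a multiple of 997 Hz:
# an inharmonic "birdie".  Change f0 slightly and every birdie jumps to
# a new frequency, which is exactly the character change described above.
```

Sampling the square wave at 1 MHz instead pushes all the strong fold products far above audibility, which is why that fix worked.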

OK, going to sound like a noob here, but did it not make more sense back then to use the best synthesizers via the MIDI interface on those computers that supported it?
I thought the Apple II did support it and the software was around back then *shrug*.
Cheers
Orb
 
