JRiver MC Version 18

The sorts of differences I hear probably wouldn't be so obvious with headphones, and a lot depends on the headphones too - then again, I'm not into phones, so what do I know?

Not being able to get DirectLink working suggests he has a bottleneck in his machine somewhere; this will be influencing even these tests, as adding buffers changes (softens) the sound.
 
Why, can you not trust what you hear & accept that in time these measurements will come?

No, I cannot trust what I hear. Have you ever heard of cognitive bias, psychoacoustic effects and placebo?
 
No, I cannot trust what I hear. Have you ever heard of cognitive bias, psychoacoustic effects and placebo?
Yep, heard of all of them & they also cover why some people don't hear real differences!
 
Why, can you not trust what you hear & accept that in time these measurements will come?

I think that is usually referred to as the Tinkerbell Effect. If you just believe enough, it will become true.
 
Why, can you not trust what you hear & accept that in time these measurements will come?
Because some of us who do believe in measurements also have opinions, like you do, and our opinion is that there is no difference to hear.
 
Here's what it boils down to:
- Someone who doesn't trust what they hear & looks to existing measurements to tell them what they should hear
- Someone who trusts what they hear (blind, etc) & looks at existing measurements as flawed
 
- Someone who doesn't trust what they hear & looks to existing measurements to tell them what they should hear

Nobody is telling you what you should hear. If you want to hear voices from Mars, be our guest.

- Someone who trusts what they hear (blind, etc) & looks at existing measurements as flawed

Someone who blindly trusts what they hear (and probably hears what they believe) will of course look at any measurements (especially ones that don't confirm their beliefs) as flawed.
 
Nobody is telling you what you should hear. If you want to hear voices from Mars, be our guest.
Haha, I'm sure if the measurements "proved" that there were voices on Mars, you would hear them. However, I trust my ears & don't have a need to BELIEVE, which is where you seem to be psychologically stuck.

Someone who blindly trusts what they hear (and probably hears what they believe) will of course look at any measurements (especially ones that don't confirm their beliefs) as flawed.
Someone who blindly BELIEVES in measurements will not hear what the flawed measurements tell them is not audible, especially when those flawed measurements are a confirmation of their BELIEFS. Works both ways!!
 
Haha, I'm sure if the measurements "proved" that there were voices on Mars, you would hear them. However, I trust my ears & don't have a need to BELIEVE, which is where you seem to be psychologically stuck.

Someone who blindly BELIEVES in measurements will not hear what the flawed measurements tell them is not audible, especially when those flawed measurements are a confirmation of their BELIEFS. Works both ways!!

There are ways to mitigate or at least minimize such biases; blind testing is one of them. You, however, seem not to care much for them, and I find such an utter rejection of an extremely useful tool odd.
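For concreteness, here is a minimal sketch of how an ABX blind test is commonly scored: under the null hypothesis that the listener hears no difference, every trial is a coin flip, so a one-sided binomial test gives the odds of the score arising by guessing. The trial counts below are purely illustrative.

```python
# Minimal sketch: scoring an ABX blind test with a one-sided binomial test.
# If the listener truly hears no difference, each trial is a coin flip
# (p = 0.5), so we ask how likely k-or-more correct answers are by chance.
from math import comb

def abx_p_value(correct: int, trials: int) -> float:
    """Chance probability of at least `correct` right out of `trials`."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

print(f"p = {abx_p_value(12, 16):.4f}")  # ~0.0384: 12/16 is unlikely to be guessing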
 
There are ways to mitigate or at least minimize such biases; blind testing is one of them. You, however, seem not to care much for them, and I find such an utter rejection of an extremely useful tool odd.
I think you misrepresent me - if you read what I said about blind testing, you would see that I favour very rigorous DBTs, which will help to eliminate ALL biases. I do not reject them but find that the ones often called for on audio forums are just pseudo-scientific posing. I go back to what I said to Julf: his acceptance of flawed tests appears to me to be in this same mould - acceptance of some half-baked tests with the retort "well, show us some better tests". Just because suitably sensitive, reliable tests have not yet been revealed is no reason to accept the obviously flawed tests as "proof" of anything - see where I'm coming from?
 
Just because suitably sensitive, reliable tests have not yet been revealed is no reason to accept the obviously flawed tests as "proof" of anything - see where I'm coming from?

OK, so the "obviously flawed" tests aren't proof - so there is no proof then of any of the claims.
 
OK, so the "obviously flawed" tests aren't proof - so there is no proof then of any of the claims.
Julf, I thought I summed it up a few posts ago:
Here's what it boils down to:
- Someone who doesn't trust what they hear & looks to existing measurements to tell them what they should hear
- Someone who trusts what they hear (blind, etc) & looks at existing measurements as flawed

No claims, just observation with the sense of hearing
 
No claims, just observation with the sense of hearing

OK, so it boils down to no proof apart from subjective preferences. Nothing wrong with subjective preferences, but they only apply to that specific listener and his/her preferences.
 
OK, so it boils down to no proof apart from subjective preferences. Nothing wrong with subjective preferences, but they only apply to that specific listener and his/her preferences.

If I can tell the difference blind, that's proof for me - others may have higher or lower criteria! I find that a product either delivers or it doesn't; the market tends to confirm this or not. Trying to skew people's free choice in the marketplace often has the opposite effect to what was intended.
 
For me it's like this: you will never find 100% agreement on a forum, and you will also ALWAYS find someone who, for reasons ONLY CLEAR to them, does NOT agree with you.

What I find peculiar is that you will also always find someone who thinks that only one conclusion can be reached. They are unable to understand that others may have an opinion at all.
They seem to think it is a law that the truth can be only one thing. Their thing...

Such a waste of brainpower...

JRiver and JPlay. I'd have to say I'm glad that the people behind both of them have put in the effort so that we can enjoy the music!
May they battle to their deaths, or find life anew...

Why complicate things? JRiver works just fine.
For those interested enough to be adventurous, let them be... on their own playground.

Imperial.
 
I wonder why he can't hear any difference either? Could it be that he's listening on headphones?
There is no more ideal device with respect to hearing small differences than headphones. So not sure what you mean there. You really think you can hear noise better with a speaker? And Jitter? And oh, we had that argument in the computer audio thread so let's not rehash it here :).

I know that Amir realises that J-Test is inappropriate for USB testing, so I'm surprised that he hasn't commented on this as a waste of testing time. This has already been pointed out to Archimago some time ago, but he continues to use it. Why?
Waste of time? First, he did a bunch of tests using other test signals. Second, J-Test is designed to aggravate cable-induced jitter. When you are using an async USB to S/PDIF converter, that aspect still comes into play downstream at the S/PDIF level. In this case he used a DAC with an async interface, so that aspect doesn't come into play. His test then is one of sending a standardized test signal and seeing what comes out the other end. If the interest were to detect the performance of downstream devices, then one could say that this may not represent a worst-case signal as it does for S/PDIF. In this case, however, the interest is the source, and whether you aggravated the cable or not is not material.

As for the other tests -- they are obviously flawed as they don't reveal the differences picked up by listening!!
Let's be clear. There are two types of tests that get objectivists and subjectivists into a fight:

1. There are measurable differences in the audible band. But formal listening tests don't show it.

2. There are no measurable differences. And formal listening tests don't show it.

The hope, and I think the claim, behind JPlay was #1: that noise and jitter were being reduced, and hence measurements would confirm that. When there is a measurable difference, we can hang our hat on something being improved even if our ears in specific tests don't show it. We can then think about whether our tests represent that case well enough.

#2 is a much tougher situation, where we are claiming that even though we have no measurable difference, audible ones might exist. While instruments are our friend in #1, they become our enemy in #2.

All else being equal, one would want to be in the #1 situation. I know I do :). I am a fan of improving measurable performance because it is just good engineering. When there is no measurable difference, then science stops being your friend. I like science being my friend so you won't often find me supporting #2 :).
 
There is no more ideal device with respect to hearing small differences than headphones. So not sure what you mean there. You really think you can hear noise better with a speaker? And Jitter? And oh, we had that argument in the computer audio thread so let's not rehash it here :).
I agree, let's not.

Waste of time? First, he did a bunch of tests using other test signals. Second, J-Test is designed to aggravate cable-induced jitter. When you are using an async USB to S/PDIF converter, that aspect still comes into play downstream at the S/PDIF level. In this case he used a DAC with an async interface, so that aspect doesn't come into play. His test then is one of sending a standardized test signal and seeing what comes out the other end. If the interest were to detect the performance of downstream devices, then one could say that this may not represent a worst-case signal as it does for S/PDIF. In this case, however, the interest is the source, and whether you aggravated the cable or not is not material.
So, let me see if I understand what you are saying - J-Test was designed to expose particular S/PDIF issues that would have been difficult to uncover without such a test. You consider it perfectly acceptable to use this specific test as a general-purpose USB jitter test? You don't think that some similarly difficult-to-expose issues may exist in the USB environment which would require a specific test to expose? You don't consider that such a test might be needed to expose possible differences between upstream systems using USB output? BTW, J-Test is not just a test of "cable induced jitter"; it is more correctly described as a test of jitter created during S/PDIF transmission, which includes PLLs etc.


Let's be clear. There are two types of tests that get objectivists and subjectivists into a fight:

1. There are measurable differences in the audible band. But formal listening tests don't show it.

2. There are no measurable differences. And formal listening tests don't show it.

The hope, and I think the claim, behind JPlay was #1: that noise and jitter were being reduced, and hence measurements would confirm that. When there is a measurable difference, we can hang our hat on something being improved even if our ears in specific tests don't show it. We can then think about whether our tests represent that case well enough.

#2 is a much tougher situation, where we are claiming that even though we have no measurable difference, audible ones might exist. While instruments are our friend in #1, they become our enemy in #2.

All else being equal, one would want to be in the #1 situation. I know I do :). I am a fan of improving measurable performance because it is just good engineering. When there is no measurable difference, then science stops being your friend. I like science being my friend so you won't often find me supporting #2 :).

Ok, have any formal listening tests been done to evaluate #2? If not, then #2 is still unfinished & doesn't fit your classification.
So let's see what Archimago has to offer regarding #1. In his other testing, he has tested a number of other software playback programs - all bit-perfect except for iTunes, which uses DirectSound. Now, maybe it is a good test of the sensitivity of his tests - does iTunes measure any differently to the rest of the playback software? Looking at his results, I can't see any differences, can you?
What does he have to say about this:
Although DirectSound does not claim to be "bit-perfect" since it takes the integer audio --> converts to 32-bit float --> dithered back to 16/24-bit back to DAC through Windows Mixer, it looks like it is able to do this with a single audio stream without significant deterioration in the output in the 24-bit domain (remember the DMAC Test is 24-bit audio) - not exactly rocket science so this is to be expected. I believe many folks feel the quality of the Windows Mixer has improved over the years. Dithering down to the 16-bit domain would likely be very detectable in the measurements and would have messed up the 16-bit J-Test as well - this is why I always keep my Windows default output as 24-bits (note that the TEAC driver does not have a 16-bit setting for DirectSound so I can't demonstrate what 16-bit dithering looks like). Of course if you have multiple streams going through the mixer, things could deteriorate - but this is not generally relevant for home music playback. Also, if you run a DTS or AC3 file through DirectSound, it would not be surprising to hear errors in the bitstream.
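As an aside, the lossless part of that quoted claim is easy to sanity-check: an IEEE-754 32-bit float carries a 24-bit significand, so 24-bit integer samples survive an int -> float32 -> int round trip exactly. A minimal numpy sketch of that property (illustrative only, not Archimago's actual test chain):

```python
# Sketch: 24-bit samples pass through 32-bit float exactly, because
# float32 has a 24-bit significand. Illustrative check, not the
# actual DirectSound/Windows Mixer code path.
import numpy as np

rng = np.random.default_rng(0)
int24 = rng.integers(-(2 ** 23), 2 ** 23, size=1_000_000, dtype=np.int64)

as_float = int24.astype(np.float32)   # int -> 32-bit float (the mixer's format)
back = as_float.astype(np.int64)      # float -> int
print("max 24-bit round-trip error:", np.abs(back - int24).max())  # 0

# A 25-bit value, by contrast, no longer fits the significand:
x = 2 ** 24 + 1
print("25-bit example error:", int(np.float32(x)) - x)  # -1
```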
Further, he adds:
Over the years I have used foobar, JRiver, and even iTunes for hours of listening... Other than the tests here, I have never tried to perform any controlled testing. However, I have not found occasion to complain that the audio output sounds "bad" comparatively.
I'm not sure many would agree with him about iTunes sounding the same as Foobar/JRiver.
I would have liked to have seen the tests with 16/44 inputs, as this is what I mostly listen to, & I would suspect the majority do also. I find it is always good practice to calibrate the equipment being used for such tests & show the baseline that the measurement setup can/cannot resolve. For instance, with the 16/44 files above, we also don't know if all the inputs are being upconverted in the TEAC or not.
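On the calibration point: one simple way to establish such a baseline is to quantize a known tone to 16 bits with TPDF dither and measure the residual, which tells you the floor a 16/44 chain can resolve. A rough sketch with assumed parameters, not the TEAC test setup:

```python
# Rough baseline check: quantize a -6 dBFS tone to 16 bits with TPDF
# dither and measure the residual error. Parameters are assumptions
# for illustration only.
import numpy as np

FS, N = 44_100, 1 << 16
t = np.arange(N) / FS
tone = 0.5 * np.sin(2 * np.pi * 997 * t)        # -6 dBFS, 997 Hz test tone

rng = np.random.default_rng(1)
dither = rng.random(N) - rng.random(N)          # TPDF dither, 2 LSB peak-to-peak
q16 = np.round(tone * 32767 + dither) / 32767   # 16-bit quantize + dither

noise = q16 - tone
rms_dbfs = 20 * np.log10(np.sqrt(np.mean(noise ** 2)))
print(f"16/44 dithered noise floor: {rms_dbfs:.1f} dBFS")  # about -96 dBFS
```

Anything a playback chain does below that floor would be invisible in a 16/44 measurement, which is exactly why knowing the baseline matters.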

Edit: Hold on, this gets even more confusing. He claimed above that 16/44 would show up measurement differences for iTunes, & yet I find that he has done iTunes 16/44 RMAA & jitter tests on a Mac & found no differences: http://archimago.blogspot.ie/2013_05_01_archive.html
"No difference to see here folk... Didn't show the 24-bit test, but that was unremarkable as well." Huh! He seems to either forget what tests he has done or say whatever fits the story of his test page!!

Are you really putting forth this guy's tests as being of some value??
 
So, let me see if I understand what you are saying - J-Test was designed to expose particular S/PDIF issues that would have been difficult to uncover without such a test. You consider it perfectly acceptable to use this specific test as a general-purpose USB jitter test?
Yes. It seems that you are thinking jitter only comes out if you feed it the J-Test signal. If that were the case, it would never show up with music, since music never mimics what J-Test does! :) J-Test is a square wave with one bit of it toggling. The square wave, once it goes through the low-pass filter of the DAC, becomes a pure sine wave with that very low-level toggling embedded in it. If a DAC has jitter, it will show up there, just as it would if you were to feed it a single-tone sine wave. What doesn't show up without S/PDIF is the link itself, i.e. S/PDIF, aggravating jitter because of the clock embedded in the S/PDIF channel.
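To make the signal concrete, here is a rough, simplified sketch of a Dunn-style J-Test stimulus: a quarter-sample-rate tone whose sample data is a repeating A, A, -A, -A pattern, with the LSB toggled at Fs/192 on top. The sample rate, amplitude and exact LSB encoding are assumptions for illustration, not the precise test file used here:

```python
# Simplified Dunn-style J-Test stimulus (illustrative assumptions).
# Primary: an Fs/4 tone, which in the data is a repeating A, A, -A, -A
# "square" pattern. Secondary: a 1-LSB square wave at Fs/192, there to
# exercise data-dependent jitter.
import numpy as np

FS = 48_000                              # assumed sample rate
n = FS                                   # one second of samples
A = 2 ** 14                              # quarter-scale 16-bit amplitude (assumption)

tone = np.tile([A, A, -A, -A], n // 4)   # Fs/4 tone: 12 kHz at 48 kHz
lsb = (np.arange(n) // 96) % 2           # toggles every 96 samples -> Fs/192 = 250 Hz
samples = (tone + lsb).astype(np.int16)  # add the LSB toggle on top of the tone
```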

In this scenario, we are not interested in what the cable or S/PDIF is doing. We are interested in seeing whether the source induces jitter into the downstream DAC. The use of J-Test here just has a superfluous low-order bit toggling, that's all. Otherwise, we have a high-frequency tone, which is a hard test with respect to jitter.

Now, if we were testing the quality of the USB connection, then yes, maybe there is a worst-case signal for that. But no one has invented one and, at any rate, per the above it is not something we are trying to test.

You don't think that some similarly difficult-to-expose issues may exist in the USB environment which would require a specific test to expose? You don't consider that such a test might be needed to expose possible differences between upstream systems using USB output? BTW, J-Test is not just a test of "cable induced jitter"; it is more correctly described as a test of jitter created during S/PDIF transmission, which includes PLLs etc.
The PLL is not part of the transmission. It is a subfunction of the receiver, there to deal with upstream artifacts caused by the cable or the source. J-Test is designed around what we know the cable can do to the digital stream. We have no idea how the downstream PLL is designed, so as to create a worst-case signal for it. As I explained above, if the goal were to agitate downstream devices, a different signal might do better, but that is not the goal here. The goal here is to figure out what happens if we just change the source. In this case, nothing did! :).
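As a rough illustration of that receiver-side role: to first order, a PLL behaves as a low-pass filter on incoming jitter, so jitter well above its loop bandwidth is attenuated while low-frequency jitter passes through. A generic textbook model with an assumed loop bandwidth, not a description of any particular receiver chip:

```python
# Toy first-order PLL jitter-transfer model: |H(f)| = 1/sqrt(1 + (f/fc)^2).
# Jitter below the loop bandwidth fc passes through; jitter above it is
# attenuated at ~20 dB/decade. Generic model, not a specific receiver.
import math

def jitter_attenuation_db(f_jitter_hz: float, f_corner_hz: float) -> float:
    mag = 1.0 / math.sqrt(1.0 + (f_jitter_hz / f_corner_hz) ** 2)
    return 20.0 * math.log10(mag)

for f in (100, 1_000, 10_000, 100_000):
    print(f"{f:>7} Hz jitter: {jitter_attenuation_db(f, 1_000):6.1f} dB")
# With an assumed 1 kHz loop bandwidth: ~0 dB at 100 Hz, -3 dB at 1 kHz,
# -20 dB at 10 kHz, -40 dB at 100 kHz.
```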
 
Ok, have any formal listening tests been done to evaluate #2? If not, then #2 is still unfinished & doesn't fit your classification.
I am not aware of tests where measurements showed nothing but rigorous blind tests showed otherwise. Do you have some examples you want to put forward? If you have such data, it would be a bigger deal than this test :). Absence of data in this case is not a good thing.

So let's see what Archimago has to offer regarding #1. In his other testing, he has tested a number of other software playback programs - all bit-perfect except for iTunes, which uses DirectSound. Now, maybe it is a good test of the sensitivity of his tests - does iTunes measure any differently to the rest of the playback software? Looking at his results, I can't see any differences, can you?
What does iTunes do wrong when the sampling rate is set correctly to match the source? Using Kernel streaming?

Are you really putting forth this guy's tests as being of some value??
Oh, for the purposes of settling this argument, for sure. Now, personally I don't use PCs for analysis as he is doing; I'd rather use calibrated instruments. But for this purpose, he added a wealth of data to the discussion which, as it happens, matches my knowledge of what should have happened. So I see no reason to doubt his conclusions.

Do you want to put forward a set of measurements that show any difference?
 
Since this thread is about JRiver, I would like to talk a little bit about a feature I recently discovered, although it's not new.

Last.fm is a great way to discover new music. You can switch between your own library and the last.fm radio's suggested music. You can submit your music via scrobble to last.fm so it knows what your preferences are. It's really sweet. The sound quality is only 128 kbps, but it's still a really simple way to discover new tunes without straying far from your own library; no need to set up an output device.

You can sign up with last.fm via JRiver. You just click the player drop-down, and then play last.fm radio. It will ask you to sign up and authorize last.fm to access your JRiver. It's all self-contained inside JRiver; a super easy way to discover new music. $3 per month for unlimited, unobtrusive and fun streaming without commercials; a pittance, IMO.

Anyone else last.fm'd?
 
