JRiver MC Version 18

All fine except when you got to the last statement. I am confident experiments can be set up that will make you fail that test blind. How about you testing that for us? Set up an AB test and have someone in your family run it and report the results.
But you don't seem to have read my last statement, or what I posted before, accurately. I have done blind tests & identified Jplay vs Foobar without a problem. Why should I have to repeat it?


Well, let me ask you this. I tell you that I am selling a set of balls that, when you put them in your glove compartment, will increase your car's fuel efficiency by 20%. I further say I have a money-back guarantee. But you have to order remotely and return the goods if you don't like them. Are you honestly going to tell me you will go ahead and try this, and not ask me for some data that backs my claim?

Do I try other things in the absence of such data? Sure. But the claim needs to be plausible and make sense to me. This one has the odds stacked against it because I know how the technology works. Just like you would have enough car knowledge to know said balls would not improve fuel economy.

I think the difference between us is that their claims are plausible to you. They reduce certain activity and you equate that with potentially better performance. I just got done explaining that reducing activity may make it predictable and shift its high-frequency content down to audio frequencies. Therefore, if we are going to guess here, we had better also accept the odds that this makes things worse. Based on this, they are the equivalent of the balls in the glove compartment to me. Your notion that you would use your ears to determine that it sounds better is like telling me that you stuck your thumb out the window and thought your car was going faster with said balls :D. It doesn't carry any weight because you could very well be mistaken.
You weave a convoluted story which really doesn't get us anywhere. I understand your position - your past experience & knowledge lead you to dismiss any possibility of Jplay making any sonic difference. I have the same view of Shakti stones, crystals & many other products that I would naturally dismiss as having any "real" beneficial effect on the sound. As I said before, they are usually expensive & don't have a free trial. I have had some esoteric products sent to me to trial/test. I used my ears & they failed. I keep an open mind, however, & don't need proof of anything before I listen. If my sighted tests don't reveal any benefit, I don't waste time on blind tests.

Let me be fair and say that we are open-minded in this forum. We allow for many manufacturers' claims, such as fancy cables, high-precision DACs, etc., with no objective proof either. If that is your point, that's fine. Just don't say we should ignore what we know when we have knowledge of the difference a product can make and "just try it." I don't need to just try it if I have objective data that saves me the work and aggravation.
As I have said repeatedly, I have done blind tests & Jplay passes these fine. That is data to me; I don't know why you ignore it.

All of this said, if I have time to kill one day, I will try to measure the electrical characteristics out of the DAC with and without this plug-in.
I suspect you won't find the answer in stock measurements.
 
But you don't seem to have read my last statement, or what I posted before, accurately. I have done blind tests & identified Jplay vs Foobar without a problem. Why should I have to repeat it?
No need to repeat. Just provide the details of how and what.
 
But Amir, you have also ignored or avoided comment on the other results that I have pointed to along the way:
- the extra noise seen at 8-9 kHz on Archimago's plot when the GPU is under stress (remember that he set out to prove that system load made no change to the audio output). You stated your testing showed no noise differences above the noise floor & therefore concluded that no software changes could possibly cause any change in noise above this noise grass. What do you have to say about Archimago's test?
 
What details do you want?
Well, anything more than "I ran a blind test." :) What did you test, how did you test, who ran the test, how many times you tested, did you repeat the results and were they the same, etc.

You are making me worried by asking the question :).
 
But Amir, you have also ignored or avoided comment on the other results that I have pointed to along the way:
- the extra noise seen at 8-9 kHz on Archimago's plot when the GPU is under stress (remember that he set out to prove that system load made no change to the audio output). You stated your testing showed no noise differences above the noise floor & therefore concluded that no software changes could possibly cause any change in noise above this noise grass. What do you have to say about Archimago's test?
Oh, I missed that post. You shouldn't have reminded me though as it severely impacts your case :).

Let's start with the video in that link: http://archimago.blogspot.ie/2013/03/measurements-hunt-for-load-induced.html. He runs an audio encoding with dbpoweramp with all CPU cores wailing, and nothing, absolutely nothing, happens in the output of the DAC over the usually lousy internal Toslink of the motherboard. Remember, playing music takes zero CPU cycles in a modern PC. The load that he generated there is millions of times higher and it still showed no impact whatsoever. That is in a spectrum analysis that is showing constant jitter/noise at -110 dB. In other words, whatever may be there is buried well below these levels -- precisely what I was saying my test showed.

Now as to our point regarding that bit of noise, let's examine that. This is with the system idle:

Idle_computer_24-48.png


This is with the system going nuts, with all CPU cores 100% busy and GPU maxed out to death:

6_Threads_going_&_GPU_Load_computer_24-48.png


Folks looking at this wouldn't know of any problems without reading the text and then squinting. What he is noting is that tiny, tiny rise in the noise level between the 8 and 9 kHz markers. The increase is from -140 to -130 dB. And here is the key thing: this test used the motherboard DAC. So sure, if you are using the onboard DAC, things can leak onto it, certainly to this level. This is a super tiny impact.
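To put that rise in perspective, here is a quick sketch (my own arithmetic, not from the article) converting those dBFS levels to linear amplitude ratios:

```python
def db_to_linear(db: float) -> float:
    """Convert a dBFS level to a linear amplitude ratio (1.0 = full scale)."""
    return 10 ** (db / 20)

noise_idle = db_to_linear(-140)   # ~1.0e-7 of full scale
noise_load = db_to_linear(-130)   # ~3.2e-7 of full scale
print(noise_load - noise_idle)    # the entire "rise" is about 2.2e-7 of full scale
```

In absolute terms, the increase is roughly two ten-millionths of full scale, which is why it takes squinting at the plot to see it at all.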

BTW, look at the evidence of what I said about sometimes making this worse. Look at that spike at 11 kHz that vanished with load! Great evidence of how unpredictable the PC can be.

In my testing, which I assume is representative of the interest of people in this thread, I was using high-performance async USB to outboard DACs. I bet if he had used the same fixture there would not be that noise. If there were, then I would get a better USB adapter!

I actually have no problem accepting that an internal DAC in a PC would be noisy, and at far higher levels than these under massive system load. Ditto for Toslink. But that is not the AB scenario that we are discussing. Our case is where CPU load remains minuscule with JPlay or without it. And the GPU is probably idle other than displaying the UI and changing it from time to time. And one would hope the audience who cares about such things has a high-performance interface to the PC which isolates it from that box's noise.
 
Well, anything more than "I ran a blind test." :) What did you test, how did you test, who ran the test, how many times you tested, did you repeat the results and were they the same, etc.

You are making me worried by asking the question :).
I tested Foobar vs Jplay using the same PC & the same music
I went out of the room & someone changed (or not) the playback player
When the music started I came back into the room & said what player I thought was playing
This was repeated
5 out of 5 correct
Any other questions?
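For what it's worth, the odds of scoring 5/5 purely by guessing in a two-choice test are easy to compute. A sketch of the standard one-sided binomial calculation (my own code, not something from the thread):

```python
from math import comb

def p_value(correct: int, trials: int, p_chance: float = 0.5) -> float:
    """One-sided probability of scoring `correct` or better by guessing alone."""
    return sum(comb(trials, k) * p_chance**k * (1 - p_chance)**(trials - k)
               for k in range(correct, trials + 1))

print(p_value(5, 5))   # 0.03125: just under the conventional 0.05 threshold
```

So 5/5 is suggestive but not overwhelming; a longer run (e.g. 14/16, which a guesser manages about 0.2% of the time) would be much harder to attribute to luck.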
 
Amir, is it possible that the tests you refer to aren't sufficient? I have two examples of related areas where industry respected people are using non-standard tests to get at why people are hearing things which standard tests suggest can't happen.

1) John Westlake (Audiolab) said this recently, I'll just use an excerpt to give you the flavour:

OK forgive me if this example is rather simplistic and a little “crude” but I’m trying to paint a picture of how a circuit's response can be very different for simple “repetitive” or steady-state signals (such as the simple “clean” signals used for THD measurements) and the more complex signals that the music we are attempting to reproduce happens to form…

The resultant distortion caused by these “loop / system” interactions might be -150dB down for cyclic steady-state signals – but much higher during “change of state” transient musical signals… I believe what’s missing is an audio test that “measures” the system quality during “change of state” rather than simple cyclic signals such as THD – this is especially important for systems that use high-order feedback loops such as ADCs / DACs…

Even multi-tone IMD tests (which can be converted into THD results) appear "cyclic" in nature - you don't see the internal integrator nodes going "chaotic" as you do with rapidly changing signals...

I’ve stated elsewhere that as I become older and a little wiser I appreciate more and more that the time domain is more important than the frequency domain for HiFi reproduction (by time domain I mean these periods of "change of state").


2) This month in HiFi News Paul Miller has published a review and test results of USB cables. The review of the cables was blind. They had no problem reliably hearing differences between cables. This I know upsets some people. He also published 'scope data showing how each cable measured differently. He also tested a giveaway cable which measured and sounded very different from the rest.


Those saying they understand everything there is to know about DACs, cables etc need to consider that perhaps they've not been measuring all the aspects they need to look at. The number of people out there saying they hear differences would I hope result in this reaction:

- it's interesting
- it doesn't stack up with our measurements
- I retain an open mind and will search for reasons why
- options:
1) people are difficult beings, acutely sensitive to both suggestion and minute differences - but which is the case here?
2) we don't know as much as we thought and the beginning of the journey is to realise this.
 
Oh, I missed that post. You shouldn't have reminded me though as it severely impacts your case :).

Let's start with the video in that link: http://archimago.blogspot.ie/2013/03/measurements-hunt-for-load-induced.html. He runs an audio encoding with dbpoweramp with all CPU cores wailing, and nothing, absolutely nothing, happens in the output of the DAC over the usually lousy internal Toslink of the motherboard. Remember, playing music takes zero CPU cycles in a modern PC. The load that he generated there is millions of times higher and it still showed no impact whatsoever. That is in a spectrum analysis that is showing constant jitter/noise at -110 dB. In other words, whatever may be there is buried well below these levels -- precisely what I was saying my test showed.

Now as to our point regarding that bit of noise, let's examine that. This is with the system idle:

Idle_computer_24-48.png


This is with the system going nuts, with all CPU cores 100% busy and GPU maxed out to death:

6_Threads_going_&_GPU_Load_computer_24-48.png


Folks looking at this wouldn't know of any problems without reading the text and then squinting. What he is noting is that tiny, tiny rise in the noise level between the 8 and 9 kHz markers. The increase is from -140 to -130 dB. And here is the key thing: this test used the motherboard DAC. So sure, if you are using the onboard DAC, things can leak onto it, certainly to this level. This is a super tiny impact.

BTW, look at the evidence of what I said about sometimes making this worse. Look at that spike at 11 kHz that vanished with load! Great evidence of how unpredictable the PC can be.

In my testing, which I assume is representative of the interest of people in this thread, I was using high-performance async USB to outboard DACs. I bet if he had used the same fixture there would not be that noise. If there were, then I would get a better USB adapter!

I actually have no problem accepting that an internal DAC in a PC would be noisy, and at far higher levels than these under massive system load. Ditto for Toslink. But that is not the AB scenario that we are discussing. Our case is where CPU load remains minuscule with JPlay or without it. And the GPU is probably idle other than displaying the UI and changing it from time to time. And one would hope the audience who cares about such things has a high-performance interface to the PC which isolates it from that box's noise.

In your previous statements you claimed that the noise in a PC was so high & random that you never found any evidence of a repeatable effect. What Archimago's test shows is a REPEATABLE increase in noise in a specific frequency range. Forget about the spike at 11 kHz - this is a red herring because it's not repeatable - that is the point. Your contention being - if anything (software) changes the noise it will be so swamped by the random PC noise that it is immaterial. Well, this is software that is exercising the GPU & it has a gross, repeatable effect, visible on a crude measurement. You may try to talk it down but I contend that we are lucky to be seeing this, as the measurement is so gross it only lets through gross effects.

So that graph proves some things (software) that exercise parts of the PC produce repeatable, non-random noise.

Now you seem to treat noise as just random distortion with no characteristics. I have been saying all along that the existing measurement techniques we have for noise are not sophisticated enough & don't give us enough information about its internal structure - it's like having one figure for jitter - it's meaningless. If you can't accept this then we are just talking past one another. So it's quite possible that buried within the noise floor is non-random, deterministic, correlated, modulated noise which, when reduced, has a sonic effect.
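As it happens, whether deterministic components are buried in a noise floor is directly testable: coherent (synchronized) averaging leaves a repeating component intact while random noise averages down by roughly the square root of the record count. A toy numpy sketch with made-up levels, not a real measurement:

```python
import numpy as np

rng = np.random.default_rng(0)
fs, n, records = 48_000, 4_096, 256
t = np.arange(n) / fs
tone = 1e-4 * np.sin(2 * np.pi * 937.5 * t)   # deterministic component, ~40 dB below the noise

# Each record contains the same tiny tone plus fresh random noise
noisy = [tone + 1e-2 * rng.standard_normal(n) for _ in range(records)]

single   = np.abs(np.fft.rfft(noisy[0]))                # one record: tone is invisible
averaged = np.abs(np.fft.rfft(np.mean(noisy, axis=0)))  # 256 records: noise drops ~16x

bin_idx = int(937.5 * n / fs)   # 937.5 Hz lands exactly on FFT bin 80 for n=4096
print(averaged[bin_idx] / np.median(averaged))  # tone now stands well clear of the floor
```

The point cuts both ways: if a player tweak really reshaped correlated noise at these levels, this kind of averaged measurement could expose it, even though a single coarse spectrum would not.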

Let me give you another example - the improvement in sound with Jplay is very akin to the improvement when the PC's power supply is changed for a linear one and/or when the fan is turned off (or it's run fanless). It is more of the same sort of improvement: better sound stage, more dynamics, more body to the music.

You can argue all you want about measurements but the measurements that you are citing are flawed.
 
Good points Clive
Yes, I also meant to nominate these factors, which lead to the same types of improvements as we hear with Jplay - improved dynamics, a deeper, more solid sound stage, more body to the music. A group of us on another forum have done listening tests to verify these factors:
- USB cable (I prefer no USB cable - the DAC being directly connected to the USB port)
- linear power supply
- fanless or fan operating from separate supply
- no internal drives - booting OS from Windows To Go USB stick, audio files on separate USB stick
- USB 3 ports for each of these sticks

Some things are immediately obvious, like the linear PS & fan disconnect; some are more subtle, but they all head in the same direction sonically.
 
Picking up on Clive's post & JohnW's comment, which are along the same lines as my thinking & something I have always maintained here from day one (for which MEP recently accused me of being "someone who seems to be relentless at getting his digital point of view heard"): the time domain is the area where the greatest focus should be concentrated, in my opinion.

It explains & correlates a number of seemingly unrelated phenomena:
- how come old guys (like myself & lots of others) can hear subtle changes in systems when our hearing acuity in the upper frequency range is compromised?
- what connects the characteristics of the changes in sound that I reported in the last post - timing. Greater sound stage depth & solidity, greater body to the sound, better dynamics - all of these I would attribute to an aspect of better timing between channels. This is the aspect that is most notable with high-end systems - they produce a more realistic, deeper sound stage with better dynamics, etc. Why would this be? Better frequency response? I don't think so!

- What do none of the measurements that have been cited show? The interchannel relationship of the signal timing - how accurately the stereo channels reproduce the music in the time domain. How sensitive are we to timing? It is becoming evident that we are very sensitive. Now don't trot out the usual sneering put-down of a 1 µs timing difference being equivalent to moving a speaker 1 cm closer to the listener - or whatever it is. This assumes a new FIXED position for the speakers & overlooks the fact that if we have fluctuations in timing, we have fluctuations in this distance of the speakers from the listener. Here's an experiment to try which I have never seen done - put the two speakers on platforms which move them randomly by a couple of centimetres, each one being moved independently of the other, & report what it does to the sound stage.
Edit: Then try it by making the movements correlate with the music or PS fluctuations, again with variations between each speaker's movement yet still correlated with the music. Let's get more sophisticated with this experiment - have a silent switch, controlled by someone, that turns this speaker movement on/off & see what the listeners report (this is effectively blind, as no listener will be able to tell visually whether the speakers are moving or not).

- The sound stage illusion from 2 speakers is a very tenuous illusion, easily disturbed by any number of factors, timing being one. It is fragile enough that it's easily disturbed by changes in the signal which are currently not being tested for, or are too subtle for our current crude measurements.
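For reference on the timing-versus-distance arithmetic, a small sketch assuming sound travels at roughly 343 m/s in air (my own numbers, not from the post):

```python
SPEED_OF_SOUND = 343.0  # m/s in air at about 20 degrees C

def delay_to_distance_mm(delay_us: float) -> float:
    """Path-length change (mm) equivalent to an interchannel delay in microseconds."""
    return SPEED_OF_SOUND * delay_us * 1e-6 * 1000

print(delay_to_distance_mm(1.0))    # 1 us of skew ~ 0.34 mm of speaker movement
print(delay_to_distance_mm(29.2))   # ~10 mm: about 29 us corresponds to 1 cm
```

A fixed 1 µs skew thus corresponds to well under a millimetre of movement; the more interesting question the experiment above raises is what fluctuating skew does, which a static-offset equivalence doesn't capture.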
 
Amir, is it possible that the tests you refer to aren't sufficient? I have two examples of related areas where industry respected people are using non-standard tests to get at why people are hearing things which standard tests suggest can't happen.

1) John Westlake (Audiolab) said this recently, I'll just use an excerpt to give you the flavour:

OK forgive me if this example is rather simplistic and a little “crude” but I’m trying to paint a picture of how a circuit's response can be very different for simple “repetitive” or steady-state signals (such as the simple “clean” signals used for THD measurements) and the more complex signals that the music we are attempting to reproduce happens to form…

The resultant distortion caused by these “loop / system” interactions might be -150dB down for cyclic steady-state signals – but much higher during “change of state” transient musical signals… I believe what’s missing is an audio test that “measures” the system quality during “change of state” rather than simple cyclic signals such as THD – this is especially important for systems that use high-order feedback loops such as ADCs / DACs…

Even multi-tone IMD tests (which can be converted into THD results) appear "cyclic" in nature - you don't see the internal integrator nodes going "chaotic" as you do with rapidly changing signals...

I’ve stated elsewhere that as I become older and a little wiser I appreciate more and more that the time domain is more important than the frequency domain for HiFi reproduction (by time domain I mean these periods of "change of state").


2) This month in HiFi News Paul Miller has published a review and test results of USB cables. The review of the cables was blind. They had no problem reliably hearing differences between cables. This I know upsets some people. He also published 'scope data showing how each cable measured differently. He also tested a giveaway cable which measured and sounded very different from the rest.


Those saying they understand everything there is to know about DACs, cables etc need to consider that perhaps they've not been measuring all the aspects they need to look at. The number of people out there saying they hear differences would I hope result in this reaction:

- it's interesting
- it doesn't stack up with our measurements
- I retain an open mind and will search for reasons why
- options:
1) people are difficult beings, acutely sensitive to both suggestion and minute differences - but which is the case here?
2) we don't know as much as we thought and the beginning of the journey is to realise this.
Hi Clive. First, welcome to the forum :). I hope you don't get the idea from this specific interchange that I/we in this forum are against high-end or have a dogmatic attitude of "prove it or not." We don't. I am an engineer and call myself an objectivist but have an extremely open mind about what we know and what we don't know. This discussion is not about "everything can be measured," "DBT is the only answer," etc. It is not about changing DACs and expecting them to sound identical.

Rather, it is about something very specific and different: can the highly variable PC traffic be tamed in a positive way by changes in the way the application reads samples from disk and outputs them? It is a given that apps like JPlay attempt to reduce traffic as the music plays by prefetching data, and JPlay's "hibernation" mode reduces background traffic. The issue at hand is that, knowing what I know, all of this may do nothing, or, as I showed in the last analysis, actually make things worse. With jitter, for example, we don't care about megahertz jitter. We can't hear that. If you reduce it by a factor of 1000, you now have jitter in the kHz range that can be audible. So less is not necessarily more.
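The "less is not necessarily more" point can be made concrete with the standard small-angle approximation for sinusoidal jitter, where the sideband level relative to the tone is about 20·log10(π·f·ΔT). A sketch with illustrative numbers (mine, not measured):

```python
from math import log10, pi

def jitter_sideband_db(signal_hz: float, jitter_s_peak: float) -> float:
    """Approximate sideband level (dB relative to the tone) produced by
    sinusoidal jitter of peak amplitude jitter_s_peak on a tone at signal_hz.
    Small-angle approximation: sideband/carrier ~ pi * f * dT."""
    return 20 * log10(pi * signal_hz * jitter_s_peak)

print(jitter_sideband_db(10_000, 1e-9))   # ~ -90 dB for 1 ns of jitter on a 10 kHz tone
```

Note the level depends only on the jitter amplitude, but audibility depends on where the sidebands land: jitter at MHz rates puts them far out of band, while the same amplitude at kHz rates puts them right next to the tone, inside the audio band.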

Does this mean JPlay can't be making a positive contribution? No, it can. I am not here to rule it out. I am here to explain how the system works and to diminish the intuitive sense that if something reduces activity it automatically has the wind behind it. That assumption sets one up for a massive amount of placebo to be injected into the evaluation. Just like the example I gave in my comparison of Foobar and WMP: I thought the former used a direct pipeline and WMP did not (i.e. was not bit exact) and therefore might sound better. And sound better it did. Once I confirmed that the pipeline was the same, that bias went away and so did the perceived performance advantage.

BTW, I have huge respect for Paul Miller and his work in analyzing digital systems. He is the only person testing and reporting on such things in the world. It is an absolute shame we don't have more journalists like him. More of a shame is the fact that the articles are not online. I had not heard about his USB cable tests. I would love to hear more about that test. Can you describe more of it? I will go to the bookstore and see if I can find a hard copy too. Thanks in advance.
 
I tested Foobar vs Jplay using the same PC & the same music
I went out of the room & someone changed (or not) the playback player
When the music started I came back into the room & said what player I thought was playing
This was repeated
5 out of 5 correct
Any other questions?
No, I have lost any interest in asking you more :(. Honestly, when someone asks you about DBT and you say you know them well, then you should know what level of detail is normally provided to explain them. And the more challenging the claimed results, the more detail needs to be provided to make the results credible. First you say you have done it and say no more. Then I prompt you again and all you say is that you had the superhuman result of 5/5 and that is that? No talk of how content was picked. No talk of equipment. No talk of how the AB between apps was performed. No talk of setting levels. No talk of repeating it on a different day. No talk of reversing roles and having the proctor be tested as a reference. Just a claim of perfection in your listening ability and the topic you are defending.

You could very well have achieved what you said. But your demeanor in explaining it leaves one with huge doubts. A person who has the goods on their side would not be reluctant to have the details examined. In the face of people being incredulous, you need to be absolutely transparent and forthcoming so as not to leave suspicion and doubt. But you are doing the opposite. I am sure people would have loved to know how to repeat your test. But no, let's leave it at "I was perfect" and "what else do you want to know." The answer is: don't worry about it. You are liable to make your position worse, not better, by giving answers like this.
 
In your previous statements you claimed that the noise in a PC was so high & random that you never found any evidence of a repeatable effect. What Archimago's test shows is a REPEATABLE increase in noise in a specific frequency range.
Did you read what I wrote? He used the motherboard DAC. And he changed the traffic from nothing to massive, and all he got was a slight rise in noise and the loss of an actual spike! I said that I would readily accept that an internal DAC changes performance that way. I explained how my testing was with external DACs and a proper USB adapter. Has your interest shifted to what small changes can occur to a motherboard DAC, extrapolated to an audiophile system built another way?

Forget about the spike at 11 kHz - this is a red herring because it's not repeatable - that is the point.
Oh, so now we have proof of what I said: that PC performance varies from time to time. Not repeatable. Isn't that what I said was happening, and what you seem to think should be impossible? And that change was far more significant than the slight rise in noise, since it was correlated (a spike at a frequency, as opposed to noise). If you believe in a report, then you have to believe in everything it says, not pick and choose the parts you want.

Your contention being - if anything (software) changes the noise it will be so swamped by the random PC noise that it is immaterial. Well, this is software that is exercising the GPU & it has a gross, repeatable effect, visible on a crude measurement. You may try to talk it down but I contend that we are lucky to be seeing this, as the measurement is so gross it only lets through gross effects.
No, my contention is not that. A single test is not the same as saying it will happen all the time, with all PCs, the same way. I am sharing data that I have. That data is not exhaustive, any more than the data you are citing is. Every PC is different, so the fact that he saw that with an internal DAC may mean absolutely nothing on a million other PCs, or be representative of some of them. You simply don't know. You seem to want to extrapolate his results to all of ours, but dismiss mine the same way. If you want to pick one set of data to generalize from, it should be mine, because it used an external DAC and didn't max out multiple cores and the entire GPU. But you shouldn't even do that. I shared that data to say that I have personally tested my own hypothesis on my system. And in that test, the result was that I could not register changes beyond normal PC variations.

Remember, part of the reason he found predictable noise is because he put a predictable load on the CPU. You have no idea if the workload on the PC you use to play music is that way.

So that graph proves some things (software) that exercise parts of the PC produce repeatable, non-random noise.

Now you seem to treat noise as just random distortion with no characteristics. I have been saying all along that the existing measurement techniques we have for noise are not sophisticated enough & don't give us enough information about its internal structure - it's like having one figure for jitter - it's meaningless. If you can't accept this then we are just talking past one another. So it's quite possible that buried within the noise floor is non-random, deterministic, correlated, modulated noise which, when reduced, has a sonic effect.
You fail the argument on multiple levels. First, as I explained, the workload on your system when playing music is millions if not billions of times lower than what he simulated. If I break a tree branch by putting a car on top of it, it is not the same thing as you resting your arm on it. Second, his test measured the impact on an internal DAC in a PC. If you adjusted for both of these, the already low level of noise he measured would vanish into nowhere, if it isn't nowhere already. Please don't change the topic into something else.

And next time you put evidence forward, please explain its test conditions. I shouldn't have to go and read an entire article to discover that the test scenario uses an internal DAC on a PC motherboard and hence is not something we consider typical of high-performance computer audio in these discussions.

Let me give you another example - the improvement in sound with Jplay is very akin to the improvement when the PC's power supply is changed for a linear one and/or when the fan is turned off (or it's run fanless). It is more of the same sort of improvement: better sound stage, more dynamics, more body to the music.
Yada, yada, yada :). That's not data. You put forward measurements saying changing system load changes measurements, and predictably so. So please produce the same for JPlay. That has been the question. Do you have the tools to do that? If not, do you know someone who does? Does JPlay? Don't give me subjective assessments as proof, which you could very well have imagined.

You can argue all you want about measurements but the measurements that you are citing are flawed.
I cited nothing. I gave an example of a test I have run. You have run no measurements. Do I remember correctly that you are in the hardware business? What test tools do you have?
 
No, I have lost any interest in asking you more :(. Honestly, when someone asks you about DBT and you say you know them well, then you should know what level of detail is normally provided to explain them. And the more challenging the claimed results, the more detail needs to be provided to make the results credible. First you say you have done it and say no more. Then I prompt you again and all you say is that you had the superhuman result of 5/5 and that is that? No talk of how content was picked. No talk of equipment. No talk of how the AB between apps was performed. No talk of setting levels. No talk of repeating it on a different day. No talk of reversing roles and having the proctor be tested as a reference. Just a claim of perfection in your listening ability and the topic you are defending.

You could very well have achieved what you said. But your demeanor in explaining it leaves one with huge doubts. A person who has the goods on their side would not be reluctant to have the details examined. In the face of people being incredulous, you need to be absolutely transparent and forthcoming so as not to leave suspicion and doubt. But you are doing the opposite. I am sure people would have loved to know how to repeat your test. But no, let's leave it at "I was perfect" and "what else do you want to know." The answer is: don't worry about it. You are liable to make your position worse, not better, by giving answers like this.

Hey now, Amir, this is getting a bit ridiculous - you are now attempting to cast doubt on my results.
This was a personal blind test - I said nothing about DBT (the operator knew which was which - he controlled which playback software was operating).
What does the equipment or music selection matter? I described the test - you are really stretching to find some issue with it.
Why would I repeat it on another day or switch operators? It was a personal test that I was interested in - a sanity check, if you like.

Why would people want to repeat EXACTLY what I did? I doubt they will have the same equipment or music selection, so what is your point exactly? Let others do their own blind tests as they see fit; I'm not one to dictate their procedure.

Really, Amir, you are claiming you have an open mind but it is becoming less & less demonstrable.
 
Ok, Amir, I see you are getting more & more personal in this discussion & it's getting you heated somewhat.
You have your set of measurements which you refuse to consider might be too crude for uncovering subtle changes in the system (as I've pointed out repeatedly).
As I said a couple of posts back:
Now you seem to treat noise as just a random distortion with no characteristics. I have been saying all along that the existing measurement techniques we have for noise are not sophisticated enough & don't give us enough information about its internal structure - it's like having one figure for jitter - it's meaningless. If you can't accept this then we are just talking past one another.
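The point that a single figure can hide structure is easy to demonstrate: two signals with identical RMS levels can have completely different spectra. A minimal NumPy sketch, with synthetic signals standing in for measured noise (signal choices and names are mine):

```python
import numpy as np

fs = 48000
t = np.arange(fs) / fs
rng = np.random.default_rng(1)

def rms_normalise(x):
    """Scale a signal so its RMS level is exactly 1.0."""
    return x / np.sqrt(np.mean(x ** 2))

# Two "noise" signals with identical RMS figures: broadband white noise,
# and a narrow 120 Hz hum (think power-supply leakage).
white = rms_normalise(rng.normal(0.0, 1.0, fs))
hum = rms_normalise(np.sin(2 * np.pi * 120 * t))

for name, x in (("white", white), ("hum", hum)):
    power = np.abs(np.fft.rfft(x)) ** 2
    top_bin_share = power.max() / power.sum()  # energy in the loudest FFT bin
    print(f"{name}: rms={np.sqrt(np.mean(x**2)):.3f} "
          f"top-bin share={top_bin_share:.2f}")
```

A single RMS (or single jitter) number reports both signals as identical; only a spectral view separates the hum, with essentially all its energy in one bin, from the white noise spread evenly across the band.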
It seems that is the case & further discussion with you is pointless. You insist on measurements - I have given you DATA - my blind test results, which you try to reject. We are done!
 
Amir, forgetting about JPlay for a moment, it appears to me (correct me if I'm wrong) that your null hypothesis is that all differences in sound quality resulting from using different music server software capable of bit-perfect serving of the source data are either random or illusory. While certainly conceivable, the implication would be that millions of people are engaging in massive audiophile groupthink and systematically deluding themselves. To be fair, this is also a conceivable scenario.

However, the burden of proof would be on you (or any other objectivist) to prove that such a massive delusion is in fact going on, not on millions of users of their favourite bit-perfect playback software to disprove that they are kidding themselves. Obviously, no amount of measurement or electrical engineering theory can conclusively do this (i.e. prove the null hypothesis). Only an empirical approach, based on DBT and sound statistics, can. Have any such tests been conducted in a methodologically sound manner?
 
Amir, forgetting about JPlay for a moment, it appears to me (correct me if I'm wrong) that your null hypothesis is that all differences in sound quality resulting from using different music server software capable of bit-perfect serving of the source data are either random or illusory.
I will correct you because that is not quite the case :). I am saying that the explanation - someone changed the pipeline, therefore the sound likely improved - carries very little weight in the absence of objective data. I am saying let's not fall for someone appealing to our intuition there. The starting point should not be that they changed this, so it must be better. Technically they cannot say that, even though they readily imply it. They could objectively demonstrate it, and it seems they have not. I worry about that. I worry that they may know that the improvements have not translated into changes in the electrical signals.

While certainly conceivable, the implication would be that millions of people are engaging in massive audiophile groupthink and systematically deluding themselves. To be fair, this is also a conceivable scenario.
We have to agree that it is easy to make that happen. All I need to do is provide a technical explanation that goes deeper than many people understand, yet have it sound right on the surface. The power of suggestion would entice many people into believing that.

However, the burden of proof would be on you to prove (or any other objectivist) that such a massive delusion is in fact going on, not on millions of users of their favourite bitperfect playback software to disprove they are kidding temselves. Obviously, no amount of measurement or electrical engineering theory can conclusively do this (i.e. prove the null hypotehsis). Only an empirical approach, based on DBT and sound statistics can. Have any such test been conducted in a methologically sound manner?
I put no burden on anyone adopting such software (unless they want to defend it, in which case yes). I put the burden on the manufacturer of the software first and foremost. They are making very specific claims that are technically feasible to analyze. Once we have that data, then we can argue whether the changes are audible. I am not even at that point yet. Jkenny posted tests that someone created to test such a hypothesis. Why doesn't Jplay? They could exaggerate the test case 1000 times, find the worst-case scenario, etc., and that would still be fine with me as far as finding at least one instance where a measurement changes.

If the position is that the measurements do not change whatsoever, but it sounds better anyway, let's have that proven too. As it is, they are leading people into thinking that measurements will change. That jitter and noise will go down. That runs counter to the position of measurements not changing, and gets us back to: if such metrics do change, then let's see them on a scope.

All of this said, I get your point :). You are asking if I am declaring war on any subjectivist assessment lacking DBT. That is not me, but I accept that in this context it can be read that way. So what is different? Maybe nothing. Maybe I dial my open-mindedness back sometimes on such things :). But maybe it is because in this instance, as opposed to someone buying a DAC or whatever, I know how this technology works at a very fine level. After all, my team built Windows Media Player and every bit of the pipeline in the PC that these apps use. I have also managed development of hardware that runs all of this. Does all of this mean that I am right? No. It just means that I move past the talking points provided, because I know they don't lead to the conclusion they want us to reach.

If I may turn the tables on you :), is there no place for asking for objective data for an audiophile product that makes specific technical claims? If a company said its cable carries a lot more current than another specific cable, should I not be able to ask them to show that on a meter? That is what we have here: a clear AB scenario, Jriver with and without Jplay. This is not one DAC against a thousand others. It is the simple addition or subtraction of a piece of software. A lot of the complexity of getting an answer is taken care of for us. How can we sit here and spend more time talking about this than it would take for them to do a test?

Of course, this is on top of the null test we have, which was published a year ago. Yet another poor person spent their time and energy to get that data. The data is damaging to the claims of Jplay. Why have they not done their own test to show it was wrong?
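The null test mentioned here is straightforward to sketch: capture the output under each player, time-align the captures, subtract, and examine what remains. A minimal NumPy illustration, with synthetic signals standing in for real captures (the function name is mine, not from the published test):

```python
import numpy as np

def null_residual_db(capture_a: np.ndarray, capture_b: np.ndarray) -> float:
    """Subtract two time-aligned captures and report the residual level
    in dB relative to the first signal; -inf means a perfect null."""
    n = min(len(capture_a), len(capture_b))
    diff = capture_a[:n] - capture_b[:n]
    sig_rms = np.sqrt(np.mean(capture_a[:n] ** 2))
    res_rms = np.sqrt(np.mean(diff ** 2))
    if res_rms == 0.0:
        return float("-inf")        # bit-identical captures cancel exactly
    return 20.0 * np.log10(res_rms / sig_rms)

# Synthetic stand-ins for real captures: a 1 kHz tone at 48 kHz.
fs = 48000
t = np.arange(fs) / fs
a = np.sin(2 * np.pi * 1000 * t)

print(null_residual_db(a, a.copy()))    # -inf (bit-perfect, nothing to see)

# Adding low-level noise to one capture leaves a measurable residual.
b = a + np.random.default_rng(0).normal(0.0, 1e-5, fs)
print(round(null_residual_db(a, b)))    # about -97 (dB)
```

The appeal of this approach is that it sidesteps audibility arguments entirely: if the two players produce electrically different outputs, the residual shows it; if they do not, the null is perfect.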

Does this make sense?
 
I will correct you because that is not quite the case :). I am saying that the explanation - someone changed the pipeline, therefore the sound likely improved - carries very little weight in the absence of objective data. I am saying let's not fall for someone appealing to our intuition there. The starting point should not be that they changed this, so it must be better. Technically they cannot say that, even though they readily imply it. They could objectively demonstrate it, and it seems they have not. I worry about that. I worry that they may know that the improvements have not translated into changes in the electrical signals.

How about the contention that WMC and Foobar sound identical? Doesn't this imply that all bit-perfect server programs (including, say, Jriver and Media Monkey) should sound identical?

I put no burden on anyone adopting such software (unless they want to defend it in which case yes). I put the burden on the manufacturer of the software first and foremost. They are making very specific claims that technically are feasible to analyze.

If the position is that the measurement does not change whatsoever, but it sounds better anyway, let's have that proven too. As it is, they are leading people into thinking that measurements will change. That jitter and noise will go down. That runs counter to the position of measurement not changing and gets us back to if such metrics do change, then let's see them on a scope.

Does this make sense?

I see - that makes sense. You take issue with unsubstantiated technical claims. Fair enough. I personally am exclusively interested in the claims of some users of the software that it makes their systems sound better. If, after an evaluation of the software, I concurred with them, I would not really care what is happening under the digital hood. I would even give them a pass if it turned out they deliberately created a technology smokescreen to push more units - as long as the software delivers the goods: better sound.

Having said that, I am acutely aware of my own susceptibility to the placebo effect (and the power of suggestion) and do not trust my senses one bit....

If I may turn the tables on you :), is there no place for asking for objective data for an audiophile product that makes specific technical claims?

A very reasonable request, but of strictly academic interest to a die hard subjectivist like myself...

Parting thought..... I'll probably take a pass on Jplay 5.1, since another commenter on this forum mentioned that with the new MSB drivers and Windows 8 it offers no improvement over plain-vanilla Jriver. Then again, if I have some time on my hands, I may run the trial copy. I hated the usability of 4.0, but 5.1 is presumably totally stable.
 
