Do I really need an "Audio Grade Network Switch"?

It is not true that fiber cuts all upstream noise. I can clearly hear changes to Ethernet cables, switches and power supplies upstream of the fiber conversion. Here's John Swenson's explanation:

Q. What about fiber-optic interfaces? Don’t these block everything?
A. In the case of a pure optical input (zero metal connection), this does block leakage current, but it does not block phase-noise effects. The optical connection is like any other isolator: jitter on the input is transmitted down the fiber and shows up at the receiver. If the receiver reclocks the data with a local clock, you still have the effects of ground-plane noise from the data causing threshold changes on the reclocking circuit, thus overlaying on top of the local clock.


Swenson is also the one who says shield grounding is ineffective against EMI, so I'm not sure how much faith to put in your Q&A. Tell that to MSB and a host of other companies who believe differently about fiber decoupling and have bet the farm on it. And maybe I am missing something, but what does fiber downstream have to do with the effect of cables upstream?

I can't account for your system setup; this is all very nuanced. But what I do know, based on the two systems in which I've used fiber decoupling, is that it makes an audible difference in my system, which is VERY revealing of changes.
 
Depends on implementation. The fiber connection cuts any power-supply or Ethernet-related noise (which is all you're trying to mitigate). There is no need to layer switches: just use fiber to decouple everything downstream from the noise related to your router etc., then use a hi-fi switch right before the source. Which, by the way, also has a power supply, as do your DAC, preamp, amp, etc. If anything, a properly implemented power supply in the circuit will improve all the things you are claiming will be negatively impacted.
Yes, it depends on implementation. As I stated, fibre and its power supply both have an impact on sound quality. Note that I did not specify whether that impact is positive or negative, because that depends on a) the system in which the fibre is implemented and b) the quality of the fibre hardware and power supply employed.
Fibre optic provides galvanic isolation, which will prevent the transmission of certain types of noise. Wireless transmission also provides galvanic isolation. Switches downstream of the fibre optic or Wi-Fi-Ethernet bridge may provide additional benefits, again depending on the same variables of implementation and hardware employed.

Why do cables and hardware upstream of a fibre optic set-up make a difference? Because the better the quality of the fibre optic input, the better the quality of its output.
 
Swenson is also the one who says shield grounding is ineffective against EMI, so I'm not sure how much faith to put in your Q&A. Tell that to MSB and a host of other companies who believe differently about fiber decoupling and have bet the farm on it.
I don't know what you are referring to. Could you post a link?
And maybe I am missing something, but what does fiber downstream have to do with the effect of cables upstream?
I can hear the effects of changes upstream of the fiber conversion.

Have you not heard the effects of changing a switch, power supply or cable upstream of the fiber conversion? Have you tried? I find the difference is glaringly obvious when I change a cable or PSU that feeds my opticalModule FMC. If the fiber conversion were blocking all upstream noise, I should not be able to hear the effects of an upgrade before the conversion.
I can't account for your system setup; this is all very nuanced. But what I do know, based on the two systems in which I've used fiber decoupling, is that it makes an audible difference in my system, which is VERY revealing of changes.
I am a big proponent of fiber optics. I have a 15 meter run in my main system and a 5 meter run in my desktop system. The change vs. copper LAN cables was profound.
 
In the days before streamers, CD players didn't need a LAN or switches. They relied on having the data in the machine prior to and during playback.

For streaming, if the data can be totally buffered in the machine prior to playback, then a LAN, optical or otherwise, isn't needed.

My system uses an Orbi wireless hub (galvanic isolation) to bring the data into the music room. It is then connected by wired LAN to the streamer, and the data is buffered 200% (two music tracks) to RAM prior to playback.

Thereafter playback runs from the streamer's RAM to the CPU to a USB bridge, much like a CD player. In my case the data passing through these three sections is timed by three dedicated OCXO Ultra clocks. Theoretically there is no LAN interference any longer.

Because the music is 200% buffered, I can physically disconnect the LAN and listen as a test. I cannot hear much difference between a connected and a disconnected LAN, so for my system the LAN effect is sufficiently managed.

The key is full buffering. It's like a reset of all noise: anything upstream no longer matters. The data starts fresh again in RAM, nearest to the processor.
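As a rough illustration of the general idea (a sketch only, with hypothetical file paths and a stubbed output, not any particular streamer's implementation): every track is read completely into RAM before the first sample is sent out, so the network can sit idle, or even be disconnected, during playback.

```python
# Minimal sketch of "buffer fully, then play" (hypothetical paths and player).
# The point: all network/disk reads finish before playback starts, so the
# LAN can be idle or physically disconnected while the music plays.

from pathlib import Path

def buffer_playlist(track_paths):
    """Read every track completely into RAM before playback begins."""
    buffered = []
    for p in track_paths:
        data = Path(p).read_bytes()      # network/NAS/local read happens here
        buffered.append((p, data))       # the bytes now live in RAM only
    return buffered

def play_from_ram(buffered, send_to_dac):
    """Playback touches only the in-memory copies; no upstream I/O occurs."""
    for name, data in buffered:
        send_to_dac(data)                # e.g. hand the bytes to a USB/ALSA backend

if __name__ == "__main__":
    playlist = ["/music/track1.flac", "/music/track2.flac"]   # hypothetical
    tracks = buffer_playlist(playlist)
    # ...at this point the LAN could be shut down or unplugged...
    play_from_ram(tracks, send_to_dac=lambda b: None)          # stub output
```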
 
Hi flkin,
I spent 5 years upgrading and optimizing a local and remote streaming system based on an Innuos NG Statement, which buffers all tracks, local and remote, before replay. Unless the server shuts down the network, as the Statement does, the network is still running during replay, so any non-audio traffic will continue to generate noise. Ideally the server should see only audio-related traffic and should run on a 'push' OS in order to avoid the constant polling of a 'pull' OS. Having said that, the quality of what goes into the buffer directly affects the quality of what comes out, on a better in = better out basis, so the quality of the upstream network always plays a role, buffering or not. Galvanic isolation works on the same principle: the quality of what is sent over the galvanically isolated link affects what comes out of it.
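To illustrate the 'push' vs 'pull' distinction in general terms, here is a minimal sketch (hypothetical player; real audio OSes differ in detail): a pull-style control loop wakes up on a timer to ask whether there is anything to do, while a push-style loop sleeps until a command actually arrives.

```python
# Sketch of "pull" (polling) vs "push" (event-driven) control loops.
# Hypothetical player; the point is how often the process wakes up.

import queue
import time

commands = queue.Queue()   # holds command strings like "play", "stop", "load"

def handle(cmd):
    print("executing:", cmd)

def pull_loop(poll_interval=0.1):
    """Pull style: wake up every poll_interval seconds and ask 'anything for me?'."""
    while True:
        try:
            cmd = commands.get_nowait()   # constant polling, even when idle
        except queue.Empty:
            time.sleep(poll_interval)     # periodic activity with nothing to do
            continue
        handle(cmd)

def push_loop():
    """Push style: block until a command actually arrives; idle means truly idle."""
    while True:
        cmd = commands.get()              # sleeps inside the OS until woken
        handle(cmd)
```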
In fact, every component on a network adds its identity to the final sound produced. Buffering and galvanic isolation will certainly improve SQ, but they will not remove the dependence on the quality of the entire network.

Let me give you an example. Take a music streaming system based on a dual-band 'Intel Puma' chipset router, with galvanic isolation and buffering and as many switches and LPSs as you like. Replace the router with a Broadcom-chipset, tri-band router with one band dedicated to audio and you'll hear a major improvement in SQ. Add a high-quality LPS for another improvement. Place the router on an anti-vibration base for yet another improvement. Why? Because the entire network operates on a better in = better out basis. If what comes out of the router has less latency, vibrates less, has more precise polarity switching and less power-supply and network-traffic-based noise, those improvements ripple through the rest of the network. The ability of a router upgrade to be heard as improved sound quality can only happen if better in = better out is how the whole network operates.
 
Unless the server shuts down the network, as the Statement does, the network is still running during replay, so any non-audio traffic will continue to generate noise.
Hi Blackmorec,

Yes, agreed: the player needs to be capable of a full LAN shutdown during playback. My Pink Faun 2.16x, running Euphony Stylus software, offers this function and is able to power down the LAN connection while playing. If this is not done through the OS, the OS continues to poll for a LAN connection, creating unwanted extra process load. It's not only non-audio processes but also LAN-related audio processes that should be stopped. Powering down is the best way.

Having said that, the quality of what goes into the buffer directly affects the quality of what comes out, on a better in = better out basis, so the quality of the upstream network always plays a role, buffering or not.

It's true that with many audio processes, everything matters. The complex ways in which leakage currents and noise enter the audio path are hard to isolate and protect against. It's a bit like expecting an unplastered brick roof to stop rain: water will always find a way through. So without doubt it's safer to make the effort to clean up every step of the way.

However, for the specific point of data stored in RAM, I disagree that upstream quality matters. The music file stored is totally accurate and not subject to interpretation. Buffers do not store noise, only bits. If the data is accurately transferred and fully buffered, the sound we hear from those bits starts here and degrades from here, not anywhere upstream.

There may be other upstream effects at play, like those galvanic isolation addresses, that need attention, but they do not come from the fully stored and disconnected data. Otherwise computers, and any technology relying on stored information, simply would not work.

The best way to check the "sound" of the LAN is to compare the setup with no LAN: physically disconnect the LAN cable during playback, with the LAN module shut off and therefore no OS polling for a connection. If one hears no difference, the LAN no longer matters and there is sufficient disconnection from all the contamination a wired connection causes, so further LAN-related tuning is no longer required. I have managed to achieve this in my system and cannot hear any difference with or without the LAN wire connected during playback.

But if there is an audible difference despite playing from fully buffered data, then the disconnect isn't sufficient and needs further attention. In both cases the data stored in RAM remains accurate and bit-perfect and is not the cause of bad sound.
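For anyone on a Linux-based streamer who wants to automate this "no LAN" comparison, here is a minimal sketch of the procedure (the interface name eth0 is an assumption, the ip link commands need root, the playlist is assumed to be fully buffered already, and this is generic Linux rather than any particular streamer OS's built-in feature).

```python
# Sketch of the "no LAN" A/B check described above (Linux, hypothetical
# interface name "eth0"; requires root). Assumes the player has already
# buffered the full playlist to RAM, as in the earlier sketch.

import subprocess
import time

IFACE = "eth0"   # assumption: adjust to your streamer's wired interface

def set_link(state):
    """Bring the network interface up or down ('up' / 'down')."""
    subprocess.run(["ip", "link", "set", IFACE, state], check=True)

def ab_listening_pass(listen_seconds=60):
    """One pass of the comparison: listen with LAN up, then with LAN down."""
    print("LAN connected - listen now")
    time.sleep(listen_seconds)

    set_link("down")                  # playback continues from RAM
    print("LAN disconnected - listen now")
    time.sleep(listen_seconds)

    set_link("up")                    # restore the connection afterwards

if __name__ == "__main__":
    ab_listening_pass()
```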
 
Yes, agreed: the player needs to be capable of a full LAN shutdown during playback. My Pink Faun 2.16x, running Euphony Stylus software, offers this function and is able to power down the LAN connection while playing. If this is not done through the OS, the OS continues to poll for a LAN connection, creating unwanted extra process load. It's not only non-audio processes but also LAN-related audio processes that should be stopped. Powering down is the best way.
This is true if the OS operates in pull mode, in which case it's always asking (polling), 'Anything for me?' If, on the other hand, the OS operates in push mode, then it only runs when there's a specific command it needs to execute, for example if you want to stop the buffered track and load and play something else.
It's true that with many audio processes, everything matters. The complex ways in which leakage currents and noise enter the audio path are hard to isolate and protect against. It's a bit like expecting an unplastered brick roof to stop rain: water will always find a way through. So without doubt it's safer to make the effort to clean up every step of the way.

However, for the specific point of data stored in RAM, I disagree that upstream quality matters. The music file stored is totally accurate and not subject to interpretation. Buffers do not store noise, only bits. If the data is accurately transferred and fully buffered, the sound we hear from those bits starts here and degrades from here, not anywhere upstream.
It's very simple. The quality of the file going into the buffer directly affects the quality of the file coming out. Disconnecting the LAN may remove noise carried on the cable, but it doesn't alter the file that's stored in the buffer, whereas the quality of the LAN used to load the file has a direct effect on the file and thus the outgoing data stream, even if said LAN is disconnected during replay.
There may be other upstream effects at play, like those galvanic isolation addresses, that need attention, but they do not come from the fully stored and disconnected data. Otherwise computers, and any technology relying on stored information, simply would not work.
I'm guessing that you are making the assumption that all you need is a 'bit perfect' file. I have found this not to be the case. In the example I gave previously, a change in the modem and router had an effect on the buffered file, even though the file was 'bit perfect' in both cases (prior to and following the modem/router upgrade). Computers and tech relying on stored information work just as well with either router and you will not notice a difference. It's only when the file is converted in a DAC and listened to as music that differences become obvious.
The best way to check the "sound" of the LAN is to compare the setup with no LAN: physically disconnect the LAN cable during playback, with the LAN module shut off and therefore no OS polling for a connection. If one hears no difference, the LAN no longer matters and there is sufficient disconnection from all the contamination a wired connection causes, so further LAN-related tuning is no longer required. I have managed to achieve this in my system and cannot hear any difference with or without the LAN wire connected during playback.

But if there is an audible difference despite playing from fully buffered data, then the disconnect isn't sufficient and needs further attention. In both cases the data stored in RAM remains accurate and bit-perfect and is not the cause of bad sound.
The quality of the LAN affects the file in the buffer. Disconnecting the LAN will only remove noise transmitted with the LAN; it won't change the file itself. I'll give you an example to illustrate. I rip a CD using a PC drive and dBpoweramp, and rip the same CD with the same specs using the Innuos CD ripper. I check both files to make sure I have a bit-perfect copy. Now I download both files to my server's storage and compare. Do they sound the same? No. Why not? Because the file ripped on the Innuos has superior quality: better drive, better power supply, different software. Both are bit-perfect copies of the same CD, but one was ripped using inherently better hardware and I can hear the difference quite clearly. So much so that I re-ripped over 1000 CDs when I made the comparison and heard the difference it made. LAN hardware has exactly the same effect. The better the modem, router, bridge, cables and vibration control of the LAN used to create the buffered file, the better the sound quality when the file is replayed from the buffer. Better in = better out.
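For anyone who wants to check the 'bit perfect' part of such comparisons themselves, here is a minimal sketch (hypothetical file paths) that compares SHA-256 hashes of two files. Note that two rips of the same CD can differ in tags and padding while the audio is identical, so for rips it is more meaningful to hash the decoded PCM (e.g. the output of a FLAC decode) than the container file.

```python
# Minimal sketch: check whether two audio files contain identical bits
# (hypothetical paths). Two rips of the same CD can differ in metadata while
# the audio is identical, so for rips, hash the decoded PCM rather than the
# container file for a strict "bit perfect" comparison.

import hashlib

def sha256_of(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

if __name__ == "__main__":
    a = "/music/rip_dbpoweramp/track01.flac"   # hypothetical
    b = "/music/rip_innuos/track01.flac"       # hypothetical
    same = sha256_of(a) == sha256_of(b)
    print("bit-identical" if same else "files differ")
```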
 
OK, in a nutshell you're saying the file data is somehow different despite the bits all being the same, and that it all depends on what happens prior to arriving at the storage.

You arrived at this from experimentation and experience, but not logically, as there is no explanation of how this difference is stored and passed on.

Bits are, well, bits. I'm not sure we can challenge this basic axiom of computing. Let's assume there's more at play than is immediately visible. Say you are hearing what you hear, and also say bits are bits: do you have any suggestion as to what this difference is and how it's stored?

This is starting to sound more like quantum effects, where one can only hear the difference when a person listens but the data remains identical otherwise. Audio is full of unusual effects that are hard to explain :) My system has unexplainable improvements from my tweaks too. But this particular one, a disconnected, perfect copy of data in RAM, doesn't affect me in strange quantum ways. My own tests and experience say that once the data is in downstream storage and there is sufficient disconnection, upstream effects don't matter.
 
Hi flkin,
Take any system playing bit-perfect files from a server that pre-buffers the files in RAM. Now upgrade the router, network switch, power supplies and cables. Check the transmitted files to make sure they are still bit-perfect. If they are, then logically the exact same bit pattern is stored in the buffer. Now compare how the upgraded network sounds: in my experience it sounds a lot better, which is why companies like PF, Innuos and Taiko manufacture networking products to install upstream of their servers.

The point is, a data file isn't just the bit pattern, i.e. the pattern of polarity switches or data packets. It's also the physical implementation of that pattern: the rise and fall times of the signal, the noise on the signal, the jitter on the signal edges, and so on. The manner in which the file is created and transmitted changes that physical structure: how much noise, how many errors, how many retransmissions, how accurate the timing, how quick or slow the polarity transitions, how much latency; in other words, all the physical aspects of the data stream.
This plays absolutely no role when it comes to purely IT applications. If the data is bit-perfect, that's all that matters. But in audio, where the file is converted to an analog signal, converted to sound pressure waves, then to nerve impulses, then on to a stream of consciousness we call music, the physical quality of the data files used has a huge impact on the eventual music. There's nothing quantum about it. It's straightforward physics. The more accurate and physically perfect the structure of the data file, the better the music sounds. The more perfect and precise the bits driving the DAC, the more perfect the resulting analog signal. Better in = better out.
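As a rough, textbook-style illustration of why signal timing at the point of conversion matters (a generic calculation, not a claim from either poster): the maximum slew rate of a sine wave A·sin(2πft) is 2πfA, so a timing error Δt produces at most 2πfA·Δt of amplitude error. The short sketch below evaluates this for a full-scale 20 kHz tone with 1 ns of timing error, which works out to roughly -78 dBFS.

```python
# Rough illustration: worst-case amplitude error caused by a timing error
# (jitter) when sampling/reconstructing a sine wave.
# Max slew rate of A*sin(2*pi*f*t) is 2*pi*f*A, so error <= 2*pi*f*A*dt.

import math

def jitter_error_dbfs(freq_hz, jitter_s, amplitude=1.0):
    """Worst-case error relative to full scale, in dB, for a sine at freq_hz."""
    error = 2 * math.pi * freq_hz * amplitude * jitter_s
    return 20 * math.log10(error / amplitude)

if __name__ == "__main__":
    # 20 kHz full-scale tone, 1 ns of timing error -> about -78 dBFS worst case
    print(round(jitter_error_dbfs(20_000, 1e-9), 1))
```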
In your model, all you'd need is a cheap-as-chips modem and router, a bit of old Ethernet cable and a decent server that buffers the resulting file, and as long as the transmissions are 'bit perfect', i.e. there is no alteration of the bit pattern during transport, then nothing can be improved. This is what I believed and expected when I started setting up my local and remote streaming system. All I needed was a bit-perfect transmission and it was job done. And indeed that did work perfectly! With my very first implementation I was delighted with the results. My Innuos server buffered both local and remote files in RAM and played them directly from RAM with very little network activity. Both local and remote sounded great, the local sounding better than the remote, but not by much. But I had a problem. I was temporarily using 15m of Ethernet cable slung bungee-style over the stairs, so I needed to tidy up how I got the files from my ISP modem/router to my upstairs server. With several large electronics stores in my vicinity I had no problem sourcing hardware to try various strategies: Ethernet with cheap and high-quality cable, powerline over mains, Wi-Fi mesh, Wi-Fi bridge; I tried them all. My expectation was that the different means of transmission would all sound about the same as long as the transmission caused no alteration to the bits. I was therefore surprised when I heard significant differences between the different approaches. Best amongst them was Wi-Fi to a Wi-Fi-Ethernet bridge. Adding a switch with an OCXO clock after the bridge made another significant improvement. Adding an LPS to the switch brought major, indeed jaw-dropping changes. Finding myself on a bit of a roll, I added LPSs to the bridge and router, upgraded the router to a tri-band model with one band dedicated to audio, and made a host of other upgrades far too numerous to mention, all of which brought significant improvements when playing files from RAM. The degree of these improvements was incredibly rewarding, sometimes sounding like I'd swapped out amplifiers, and indeed one upgrade to the six LPS power supplies I eventually ended up with sounded for all the world like I'd changed my speakers, so solid and physical in nature did the sound become.
So what my 5 years of constant upgrading taught me was that bit-perfect files were only the start, and that as far as audio is concerned (as opposed to IT) the actual physical structure (quality) of the music file is paramount if you want the finest sound quality possible. In IT, the network is a means to move data files electronically and nothing more. In audio, the network is also the means to clean and refine the physical structure of the file, as this aspect plays a massive role in the final sound quality.

Getting back to the original subject, there's no doubt that adding a fibre optic link prior to the server introduces galvanic isolation, which blocks noise that might otherwise be conducted along copper wires. But FO is not a panacea for all ills and may introduce its own noise and sonic character. Like all other components of the network, FO needs to be carefully chosen and optimised, regardless of whether files are buffered in RAM or not.
 
Hi Blackmorec,

Thanks for describing your path and passion for excellent music. It's always a delight to read about other audiophiles' paths and see what they have discovered along the way. I see that we share many similar passions; your striving for better sound, no matter the technique, is admirable.

Perhaps a bit of my audio background is due now, as I do not want to be labeled a closed-minded techno nut. :D I believe our audiophile hobby is an art form with many approaches, all of which are highly personal and true to each person. I would like to share my path of discovery, which has led to my current belief about why buffering and software negate the need for upstream data management. It's on topic for this thread and it's a personal story that has worked out well for my system.

I've been in digital for around 20 years, having switched over from analogue, and into streaming for around 8 years. It was during those early days of streaming that the effects of the network on sound quality were discovered. I recall my early attempts at experimenting with digital bits and manipulating them into the best shape possible before they arrived at the DAC. In 2016 my earliest "audiophile" switch was an SOtM-modified D-Link DGS105, which I clocked externally with a Cybershaft OP14 OCXO clock and had the internal switching regulators replaced with linear ones. It was part of the famed Trifecta of devices for managing bits (the other two parts being the sMS-200 network player and the tx-USBUltra clock and USB cleaner). This was long before SOtM came out with their own clocked switch, or Mutec even came onto the scene. I brought the data into my listening room using a wireless Orbi system and went wired from then on.

Yes, at that time I found the switch did contribute to a better sound, but I also discovered that the power supplied to this switch was as significant as, or perhaps even more significant than, adding the device itself. It seems the benefit the switch offered could be negated by the addition of electronic parts and wires in the signal path, especially if the switch's power was not good enough. Nevertheless I heard a net benefit, and so I soldiered on with an audiophile switch in place.

Later, around 6 years ago in 2018, I moved on to a fully decked-out Pink Faun 2.16x server. The sound of this streamer was far better than my old Trifecta set. I further found that the modded D-Link switch didn't bring any audible benefit upstream of the PF, and so I put it aside.

Soon after, in 2019, Uptone announced their famous EtherRegen: a product that promised galvanic isolation and noise reduction of the LAN signal, with a reference clock input and an optical output to boot. I thought this was going to be a real game changer, so I was one of the earliest to buy a unit, hoping that this new super switch would make the difference where the modded D-Link couldn't. After all, the network really affects the sound, right?

Did it make a change in my system? Yes it did, but not for the better. Imaging and depth improved at the cost of a sharper sound and poorer treble quality. Ah, so the network did make a difference, but was it due to the better power management of the EtherRegen or to the isolated bits coming out of the device? For sure power did make a difference, and the sound changed depending on what supply I used for the EtherRegen; that was my conclusion at the time. Again I found that good power is key.

The white paper that John Swenson (the designer) issued about the EtherRegen says the following in the FAQ section on page 5:

"A very large buffer where the input completely shuts down while all music is playing can eliminate the phase-noise overlay of upstream sources”. - JS​

I found this intriguing, and coincidentally at that time I was switching to new OS/playback software on my Pink Faun: Euphony, an amazing piece of software that had an edge over my previous playback/OS software, AudioLinux. It had a function to fully buffer tracks/albums, both local and streamed, before playback. In later versions a shutdown of the LAN hardware through the OS was offered after buffering. It was called the Play and Relax feature, as one lost control of the streamer for the duration of the playlist, after which the LAN would boot up again. Here was exactly what John was talking about: load the data into RAM, switch off the LAN hardware and, through the OS, stop the polling for the missing LAN connection. No LAN is the ultimate best LAN sound: zero contamination, a total disconnect, like a CD player.

Was this the key to a LAN-less sound where upstream effects no longer matter? In theory yes, but this still required verification through listening tests. If I removed the LAN cable during Play and Relax, playback would continue and should sound identical to when the cable was attached - but only if the attached cable was contributing no leakage current, ground-plane noise issues, etc.

And it did sound identical: I could not hear any difference with or without the LAN cable attached during playback. That means my path from the isolated wireless Orbi to the PF's RAM buffer was sufficiently disconnected from LAN effects, and further upstream adjustments were no longer required.

And so that is why I concluded that the starting point of optimisation is the data in my Pink Faun's RAM, not anything further upstream. And it takes real effort from here onwards, not a simple "once the bits are there, no further action is needed"; that would be too simplistic an observation:

.. all you'd need is a cheap-as-chips modem and router, a bit of old Ethernet cable and a decent server that buffers the resulting file, and as long as the transmissions are 'bit perfect', i.e. there is no alteration of the bit pattern during transport, then nothing can be improved...

From the RAM (which has to be optimised for chip type, voltage and frequency) onwards, I do everything possible to ensure the bits are delivered to the DAC as cleanly as possible. This includes phase-noise management using three dedicated Pink Faun OCXO Ultra clocks on the motherboard, CPU and USB bridge, powered by a Paul Hynes double-regulated Teflon/Z-foil DC supply for extremely low DC noise and impedance, and wired with QSA Landri cables. And this is just to get the bits from RAM to the streamer's output USB cable as cleanly as possible. A lot of improvements are possible, I assure you, based on science. :)

Of course all the above assumes bits are bits and that they only start to sound poorer after leaving the buffer, due to dirty power, jitter, phase noise or ground-plane noise. Even buffers have to be built correctly and electrically isolated, or they may make things worse, not better. If a buffer is not built correctly, then noise may accompany the bits and stay with them. In that case, upstream noise reduction might matter.

However, all this doesn't take into account the claim that bits are not just bits, that some identical bits sound better than others, and that expensive LAN devices are needed to make these bits, despite being identical, sound better. As an engineer I have trouble understanding how this concept works. "Better" bits would have to be different in order to sound better, and that difference would have to be recordable in order to be passed on. Could it be that bits are still bits but sound different because they are contaminated by the kinds of noise mentioned above? Or that the buffers in some streamers do not isolate fully, so noise remains with the bits despite the full buffering? That would then make sense. But it still would not explain why two bit-identical copies of a track sound different.

That said, every person's system is different and I acknowledge there are many paths to audio nirvana. I do not claim to understand all there is to know about the science of streaming; perhaps there is some electrical action beyond my knowledge at this time. I have not come across any better explanation to date, so my current understanding remains the best one for me. The above is my own personal path and the conclusions are my own. I share them in the hope that some readers find them useful and that they offer some food for thought.

This will be my last word on this matter... for now! :D

Best, Kin

ps. I would love to listen to two bit-identical tracks with one sounding better than the other. That would open my eyes to a whole new world and totally change my future understanding of audio. Can anyone privately/confidentially share? :p
 
Hi kin,
Thanks for the kind reply and very interesting discussion. I have a couple of other points to share. I believe that the quality or condition of the music file that reaches the DAC plays a major role in sound quality. I also believe that galvanic isolation, full RAM buffering and network shutdown all contribute greatly to the final musical performance of a streaming system. Some other conclusions I reached are:
1. Separating audio from household network traffic as early as possible is highly beneficial. The audio network should literally be a quiet backwater, ideally with audio-related traffic the only activity.
2. A server operating system based on 'push' rather than 'pull' is highly beneficial.
3. Every component on the network makes a contribution to the final sound, power supplies and DC cables especially so. Every component on the network works on the basis of better in = better out and, of course, the reverse.
4. The network is the ideal way to 'massage' and improve the physical quality of the music data stream. A network arranged such that each step produces a stream with superior physical attributes to the previous step is the way to maximise results. Making sure the early network is optimal is the best way to ensure that the file used to drive the server's buffer is as high quality as possible, as in my experience the server and its buffer also work on a better in = better out basis. This is the reason a streaming system can ultimately be made to outperform other musical sources. In my system, other than fuses, I actually ran out of further improvements and still never hit a 'law of diminishing returns'. The SQ potential of streaming is extremely high.
 
This is a helpful, comprehensive explanation (as is @Blackmorec's), even for those of us who do not have a technical background. I finally realized that this post should be read alongside your detailed itemization of your setup (on AS), which is exhaustive and exhausting. :)

In a very, very minor way I did some of the same experiments with switches, power supplies and DC cables and reached the same, and to me frustrating, conclusion: everything adds a sound (to me, especially powered switches), and it became hard for me to understand what a "neutral" presentation was. In the end, of course, we have our ears and our preferences.

I finally chose the easy path and listened to, and subsequently purchased, a combined server/streamer/DAC from Grimm Audio -- the MU2. This integration allows Grimm to optimize some (many?) of the elements you mention, from the incoming wired copper Ethernet to the analog output. The unit is still breaking in, and I have not heard a system such as yours, but it is clear to me that the more control a manufacturer has over all of these streaming elements, the better. The level of musical enjoyment with the MU2 in my simple setup is very satisfying. I'm amazed by what they have accomplished.

At the end of your list of equipment, you rightly list the most important link in the chain -- us. I have started to wonder if 90% (or more) of the listening experience starts with our own mood and receptivity/focus at the moment we listen to music. Of course, it gets complicated. A wonderful setup and the right music (to our individual tastes) can put one in the mood, which then deepens as one listens. It puts an interesting twist on what's best.
 
