To switch or not to switch? Melco S-100 or Innuos Phoenix NET switch?

Ratbastrd

Well-Known Member
Feb 23, 2017
44
11
138
Buffering is simply memory: storing packets in memory to assist with various functions. Buffering in voice/video networks is generally used to increase resolution.

i.e.: Zoom does very little buffering because latency is more important than video resolution (I need to hear, and to a lesser extent see, what you are saying in the context of a bidirectional conversation, and vice versa). Akamai (Netflix et al.) is buffering (storing data at the edge) to increase resolution and support your HD or 4K picture.
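As a rough sketch of that tradeoff (Python, purely illustrative; the buffer depths and the 20 ms packet size are invented numbers, not anything Zoom or Netflix actually uses): a playout buffer only starts playback once it has queued a certain number of packets, so a deeper buffer rides out network hiccups at the cost of added delay.

from collections import deque

class PlayoutBuffer:
    """Toy playout buffer: packets go in as they arrive, playback only
    starts once 'depth' packets are queued. A deeper buffer rides out
    network hiccups but adds latency (roughly depth * packet_ms)."""
    def __init__(self, depth, packet_ms=20):
        self.depth = depth
        self.packet_ms = packet_ms
        self.queue = deque()
        self.playing = False

    def push(self, packet):
        self.queue.append(packet)
        if not self.playing and len(self.queue) >= self.depth:
            self.playing = True          # pre-fill reached, start playback

    def pop(self):
        if self.playing and self.queue:
            return self.queue.popleft()
        return None                      # underrun: nothing to play yet

# Conferencing-style (latency first): ~3 packets * 20 ms = 60 ms of delay
conference_like = PlayoutBuffer(depth=3)
# Streaming-style (quality first): ~250 packets * 20 ms = 5 s of delay
streaming_like = PlayoutBuffer(depth=250)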

I'm assuming you have some thoughts on application in this scenario?
 
Last edited:

MarkusBarkus

Well-Known Member
Feb 6, 2021
991
1,632
228
66
...just wondering about your thoughts on buffering re: audio chain.

I don't really have a packet in this fight. I tried multiple switches in series three or four years back. Two Cisco 2960s were better than none or one. Three was a maybe, and four went backwards here. But something positive was happening to the SQ at output.

A modded Buffalo switch with a good PS, OCXO clock, and unused ports turned off was/is excellent, replacing the chain. In fact, I am/was hesitant to give it up for the forthcoming Taiko switch. Those guys say their switch will squash my beloved PFBuff. We'll see in another month.

What you described previously seemed to me as if the data flow through the network and switches did not allow the devices to "self-heal" via the defined protocols regarding error correction.

I expected that packet loss would result in missing data, ergo drop-outs, skipping, etc., which I do not hear in my network.

I don't disagree with your technical descriptions above, BTW. Perhaps it is the multiple error correction processes that sound good, which I think was one of your ideas. Again, I don't disagree with the assertion; but I'm thinking the error correction would occur, and the buffer would send the "signal" along intact/corrected.

BTW: I'm not sandbagging you. I'm not a network honcho laying low to pounce. I am genuinely trying to understand all this, relative to improving SQ.
 

Blackmorec

Well-Known Member
Feb 1, 2019
747
1,271
213
I suspect (just a guess) that the color users perceive as added space, clarity, lower noise floor, etc. is in fact the error correction algorithms doing their job (correcting for lost or misaligned packets). If you break it down and think about it, it is really the only thing that makes sense.
Don’t like the name so my instinct is to keep this short.

When I break it down there is a very logical alternative, so it can’t be the only thing. The sound is getting better and better because of progressively less involvement of error correction. Much more likely when I consider that by the third switch, the stream has been through 2 previous error correction screens. Given that error correction is a noisy process, doing less of it would be very beneficial to the sound, allowing me to resolve and hear all those gorgeous elements you mentioned, all of which are completely correlated to the music and its associated recordings.
Also worth adding that the extra clarity and lower noise floor you mentioned allow me to hear additional detail in the space. I can hear all the acoustic attributes of the venue, the ‘texture‘ of the air and atmosphere that the recording creates, the direction of note reflections as they decay within that space. Is all that detail also coming from the error correction algorithms?
 
Last edited:

Ratbastrd

Well-Known Member
Feb 23, 2017
44
11
138
Don’t like the name so my instinct is to keep this short.

When I break it down there is a very logical alternative, so it can’t be the only thing. The sound is getting better and better because of progressively less involvement of error correction. Much more likely when I consider that by the third switch, the stream has been through 2 previous error correction screens. Given that error correction is a noisy process, doing less of it would be very beneficial to the sound, allowing me to resolve and hear all those gorgeous elements you mentioned, all of which are completely correlated to the music and its associated recordings.
LOL, sorry you don't like my nick. I've had it for years, and have been told it's fitting.

Hope you understand that I'm not suggesting improvement (from a user perceptual standpoint) isn't occurring.

Simply pointing out that error correction is NOT happening at the switch level (unless these are something more than switches). Error correction occurs when the digital signal (packet payload) is unpacked (decoded). This typically happens at the first processing device in the chain (streamer, DAC, etc.).
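To illustrate the split (a simplified Python sketch; real switches do this in silicon, and Ethernet's CRC-32 uses a particular bit ordering this glosses over): a switch can verify a frame's checksum and drop the frame if it fails, but it never repairs anything. Recovery, e.g. a TCP retransmission, happens at the endpoints.

import zlib

def switch_forward(frame_bytes, fcs):
    """What a store-and-forward switch does per frame: recompute the
    CRC-32 and compare it to the frame check sequence. Good frames are
    forwarded, bad frames are silently dropped. Nothing is repaired."""
    if zlib.crc32(frame_bytes) & 0xFFFFFFFF == fcs:
        return "forward"
    return "drop"   # the sender's TCP stack, not the switch, retransmits

payload = b"PCM samples..."
good_fcs = zlib.crc32(payload) & 0xFFFFFFFF
print(switch_forward(payload, good_fcs))         # forward
print(switch_forward(payload + b"x", good_fcs))  # drop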
 
  • Like
Reactions: NigelB and 7ryder

Ratbastrd

Well-Known Member
Feb 23, 2017
44
11
138
A modded Buffalo switch with a good PS, OCXO clock, and unused ports turned off was/is excellent, replacing the chain. In fact, I am/was hesitant to give it up for the forthcoming Taiko switch. Those guys say their switch will squash my beloved PFBuff. We'll see in another month.
This would make sense. By improving the clock and the flow of power, you are enhancing the traffic shaping capabilities of the device. It can reorder and better prioritize traffic, without having extraneous traffic (data/processes) to analyze.
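For a sense of what "reorder and prioritize traffic" means inside a switch, here is a toy strict-priority scheduler (Python, illustrative only; I'm not claiming this is what the Buffalo's or Taiko's firmware actually does):

from collections import deque

class StrictPriorityScheduler:
    """Toy strict-priority egress scheduler: frames are queued per
    priority class, and the highest non-empty class is always served
    first (roughly what 802.1p class-of-service prioritization does)."""
    def __init__(self, num_classes=8):
        self.queues = [deque() for _ in range(num_classes)]

    def enqueue(self, frame, priority):
        self.queues[priority].append(frame)

    def dequeue(self):
        # Scan from the highest priority class (7) down to the lowest (0).
        for q in reversed(self.queues):
            if q:
                return q.popleft()
        return None  # nothing waiting

sched = StrictPriorityScheduler()
sched.enqueue("bulk backup frame", priority=0)
sched.enqueue("audio stream frame", priority=6)
print(sched.dequeue())  # -> "audio stream frame"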

I expected that packet loss would result in missing data, ergo drop-outs, skipping, etc., which I do not hear in my network.
You wouldn't hear packet loss unless it was catastrophic (buffer emptied). What you would "sense" would most likely be a time shift or a shrill/thin sound, etc.

Again, I was trained that every network has packet loss and latency. The impact of that loss/latency is not linear. How much, where, and when all dynamically impact the codec's behavior, specifically the error correction/concealment function when transcoding and reconstructing data at the far end of the network.
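A crude sketch of what concealment means on the decode side (Python, illustrative only; real codecs such as Opus interpolate far more cleverly than simply repeating a frame):

def decode_stream(received_frames):
    """received_frames: list of audio frames, with None where a packet
    was lost. A trivial concealment strategy repeats the last good
    frame, so playback never 'skips' even though data went missing."""
    output, last_good = [], None
    for frame in received_frames:
        if frame is None and last_good is not None:
            output.append(last_good)      # conceal the loss
        elif frame is not None:
            output.append(frame)
            last_good = frame
    return output

print(decode_stream(["f1", "f2", None, "f4"]))  # ['f1', 'f2', 'f2', 'f4']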
 
  • Like
Reactions: MarkusBarkus

Ratbastrd

Well-Known Member
Feb 23, 2017
44
11
138
One more quick post to wrap all this up, as I've gotten way off the topic here (sorry).

As I understand it, neutrality is the holy grail for audiophiles. Digital audio (conversion, compression, transport, transcoding, etc.) is imperfect. What you hear, once the signal has been digitally converted and transported through a network, will never be the same as the original source material.

The more cogs in the process, the further from the source you get (in theory). If neutrality is the goal, eliminating variables should be the objective (again, in theory). If, on the other hand, an enhanced experience is the goal, then experiment away; just understand that all the variables are changing the experience (the reproduction is increasingly different from the source). Hope this makes sense.
 

kennyb123

Well-Known Member
Nov 30, 2012
856
796
1,155
Kirkland, WA
Let's start with one truism. There is packet loss in EVERY network.
Whether switches that are operating properly contribute to this is what’s pertinent here. You haven’t convinced me that a pair or even three daisy-chained switches will necessarily increase packet losses. Nor have you accounted for the situation where the successively deployed switches have been purpose-built for high-end audio.

You appear to be attacking a straw man that doesn’t come anywhere close to representing the configuration that @Re-tread implemented, which was what you initially reacted to.
 

Ratbastrd

Well-Known Member
Feb 23, 2017
44
11
138
Whether switches that are operating properly contribute to this is what’s pertinent here. You haven’t convinced me that a pair or even three daisy-chained switches will necessarily increase packet losses. Nor have you accounted for the situation where the successively deployed switches have been purpose-built for high-end audio.

You appear to be attacking a straw man that doesn’t come anywhere close to representing the configuration that @Re-tread implemented, which was what you initially reacted to.
:( I'm not attacking anything or anyone. I'm simply questioning the logic of daisy-chaining switches, including but not specifically different manufacturers' switches. If that is perceived as me "attacking," I apologize; that is not my intention.

If these switches were designed and tested by the manufacturer to be daisy-chained, with documented improvement, it would be one thing.

I doubt they are, as it would be a great marketing tool to get people to buy more than one switch....;)

The underlying networking protocols that these devices use are standards-based, and not specific to high-end audio that I am aware of. Some pro-audio solutions use proprietary networking protocols, but those only work with other devices from the same ecosystem.

Again, all I was questioning was whether anyone had considered the implications of daisy chaining, even though the user experience might be perceived to be better.

Convincing you was also not my intention. I'm simply trying to have a dialog here, as I may learn something new.
 
  • Like
Reactions: 7ryder

kennyb123

Well-Known Member
Nov 30, 2012
856
796
1,155
Kirkland, WA
Again, I was trained that every network has packet loss and latency. The impact of that loss/latency is not linear. How much, where, and when all dynamically impact the codec's behavior, specifically the error correction/concealment function when transcoding and reconstructing data at the far end of the network.

But who says latency is a bad thing as far as sound quality goes? Those who hear an improvement from daisy-chaining switches are increasing latency, but maybe that's contributing to the improvement they are hearing. I don't know if that's the case. Listeners have often found 100 Mbps to sound better than gigabit, and what this tells us is that the pursuit of reducing noise requires us to come at these things differently.
 
  • Like
Reactions: Ratbastrd

Ratbastrd

Well-Known Member
Feb 23, 2017
44
11
138
But who says latency is a bad thing as far as sound quality goes? Those who hear an improvement from daisy-chaining switches are increasing latency, but maybe that's contributing to the improvement they are hearing. I don't know if that's the case. Listeners have often found 100 Mbps to sound better than gigabit, and what this tells us is that the pursuit of reducing noise requires us to come at these things differently.
Not saying "Bad", suggesting what you are hearing (relative) to the original source material, is different due to the dynamic effects of codifying and transporting through the network, along with any introduced degradations in that process.

If you run with this thought process (which I have), one might go so far as to suggest this answers the question of why one Ethernet cable sounds different than another. It's all bits and bytes, right?

Just saying.....:cool:
 

treitz3

Super Moderator
Staff member
Dec 25, 2011
5,459
961
1,290
The tube lair in beautiful Rock Hill, SC
As I understand it, neutrality is the holy grail for audiophiles. Digital audio (conversion, compression, transport, transcoding, etc.) is imperfect. What you hear, once the signal has been digitally converted and transported through a network, will never be the same as the original source material.
It seems as if this is the audiophile debate of the month and it's making its rounds across multiple forums (using multiple switches). As for this statement, every playback format known to man is imperfect. The best we could ever hope for is an optimized approximation of the real event.

The more cogs in the process, the further from the source you get (in theory). If neutrality is the goal, eliminating variables should be the objective (again, in theory). If, on the other hand, an enhanced experience is the goal, then experiment away; just understand that all the variables are changing the experience (the reproduction is increasingly different from the source).
The one thing I have learned with everything leading up to the NAP's is that everything affects everything and that conventional wisdom and age old standards no longer apply. It's not only a whole new ballgame, it's a different sport altogether.

Technology concerning streaming music and network configurations is still in its infancy in the whole grand scheme of things. Looking back over the years, power cords didn't make a difference at one point and the debates raged. Now, it's established that they do. There are many more examples I could cite over the decades, but I think you get where I am going with this.

;)

Tom
 
Last edited:
  • Like
Reactions: Ratbastrd

7ryder

Well-Known Member
Jan 31, 2015
197
161
275
It seems as if this is the audiophile debate of the month and it's making its rounds across multiple forums (using multiple switches). As for this statement, every playback format known to man is imperfect. The best we could ever hope for is an optimized approximation of the real event.


The one thing I have learned with everything leading up to the NAP's is that everything affects everything and that conventional wisdom and age old standards no longer apply. It's not only a whole new ballgame, it's a different sport altogether.

Technology concerning streaming music and network configurations is still in its infancy in the whole grand scheme of things. Looking back over the years, power cords didn't make a difference at one point and the debates raged. Now, it's established that they do. There are many more examples I could cite over the decades, but I think you get where I am going with this.

;)

Tom
Linn introduced their first Klimax Digital Streamer in 2007 and, while I wouldn't say that Sonos is in the same league, their zone players were using Ethernet a couple of years earlier at least; I started streaming with them in 2005. Given that this is going on 20 years, I'd hardly say that streaming music and network configurations are in their infancy, teens, maybe.

Chris
 

Ratbastrd

Well-Known Member
Feb 23, 2017
44
11
138
It seems as if this is the audiophile debate of the month and it's making its rounds across multiple forums. As for this statement, every playback format known to man is imperfect. The best we could ever hope for is an optimized approximation of the real event.


The one thing I have learned with everything leading up to the NAP's is that everything affects everything and that conventional wisdom and age old standards no longer apply. It's not only a whole new ballgame, it's a different sport altogether.

Technology concerning streaming music and network configurations is still in its infancy in the whole grand scheme of things. Looking back over the years, power cords didn't make a difference at one point and the debates raged. Now, it's established that they do. There are many more examples I could cite over the decades, but I think you get where I am going with this.

;)

Tom
Spot on!

While not specifically designed for this use case, this methodology is the standard for objectively measuring the effects I've been discussing. It might provide a foundation for improving the existing solutions.

If you have trouble sleeping at night, this will solve the problem...POLQA
 

treitz3

Super Moderator
Staff member
Dec 25, 2011
5,459
961
1,290
The tube lair in beautiful Rock Hill, SC
Linn introduced their first Klimax Digital Streamer in 2007 and, while I wouldn't say that Sonos is in the same league, their zone players were using Ethernet a couple of years earlier at least; I started streaming with them in 2005. Given that this is going on 20 years, I'd hardly say that streaming music and network configurations are in their infancy, teens, maybe.

Chris
Okay, you got me there. Teens it is.

:)

Tom
 
  • Like
Reactions: 7ryder

Ratbastrd

Well-Known Member
Feb 23, 2017
44
11
138
Linn introduced their first Klimax Digital Streamer in 2007 and, while I wouldn't say that Sonos is in the same league, their zone players were using Ethernet a couple of years earlier at least; I started streaming with them in 2005. Given that this is going on 20 years, I'd hardly say that streaming music and network configurations are in their infancy, teens, maybe.

Chris
I know the guys at Sonos very well; I worked with them for a number of years. Their solution to "whole home audio" very much centered on the problems we've been discussing. Much of today's mesh networking technology was originally ideated in the Sonos laboratories.

The problem is that IP networking was never designed for low-latency, high-definition media transport. It was designed to be robust, reliable, and most importantly simple. How long it takes packets to get from A to B, and in what condition they arrive, was never part of the equation. Best effort is good enough.

Everything we've done in the last 30 years has been a workaround.
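One way to see what "best effort" leaves unspecified: measure inter-arrival jitter at the receiver, roughly the way RTP receivers do per RFC 3550 (Python sketch; the packet timings below are hypothetical):

def interarrival_jitter(send_times, recv_times):
    """RFC 3550-style running jitter estimate: smooth the variation in
    (receive - send) transit times. Best-effort IP puts no bound on this."""
    jitter, prev_transit = 0.0, None
    for s, r in zip(send_times, recv_times):
        transit = r - s
        if prev_transit is not None:
            d = abs(transit - prev_transit)
            jitter += (d - jitter) / 16.0   # exponential smoothing per the RFC
        prev_transit = transit
    return jitter

# Hypothetical packet timings (ms): sent every 20 ms, arrivals wander.
sent = [0, 20, 40, 60, 80]
recv = [5, 27, 44, 71, 86]
print(round(interarrival_jitter(sent, recv), 2))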

 

agisthos

Well-Known Member
Oct 14, 2012
116
37
935
You have it exactly right! :cool:

This is what you said to me....

The JS-2’s DC outputs are “floated” (though the chassis and shield of the transformer are grounded to AC mains for safety).
Yet the two -VE/0-volt “grounds” of the JS-2’s separately regulated outputs are common to each other (because only one transformer secondary, one set of Schottky diodes, one large filter choke).

Power to the EtherREGEN is directly to its ‘A’ side (the ‘B’ side gets power through an isolating regulator). So using the shared -VE (“ground”) JS-2 to power both the EtherREGEN and anything downstream of it will somewhat defeat the EtherREGEN’s isolation. Though you could mitigate that by turning the EtherREGEN around and running it in the B>A direction.


In @Re-tread's case, he is powering a modem/router that is upstream of the eRegen, not downstream, so that is OK. I got it wrong, partly.
 

analogsa

Well-Known Member
Apr 15, 2017
382
122
175
Cascais
The problem is that IP networking was never designed for low-latency, high-definition media transport. It was designed to be robust, reliable, and most importantly simple. How long it takes packets to get from A to B, and in what condition they arrive, was never part of the equation. Best effort is good enough.
No idea why any of this should be important. A large number of DACs use a substantial FIFO with subsequent reclocking, and there is no reason the specifics of the transport mechanism should play any role as far as data integrity and correct timing are concerned. Which leaves noise as the only possible explanation for network sound.
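A minimal sketch of that FIFO-plus-reclocking idea (Python, conceptual only; a real DAC does this in hardware against its own local oscillator): samples are written whenever the network happens to deliver them, but they are read out on the DAC's fixed clock, so arrival timing and playback timing are decoupled.

from collections import deque

class ReclockingFifo:
    """Conceptual DAC input FIFO: writes happen whenever the network
    delivers samples (bursty, jittery); reads happen on the DAC's own
    fixed local clock, so output timing is decoupled from arrival timing."""
    def __init__(self):
        self.buf = deque()

    def write_burst(self, samples):      # network side: arbitrary timing
        self.buf.extend(samples)

    def read_one(self):                  # DAC side: called once per clock tick
        if not self.buf:
            return 0                     # underrun: output silence
        return self.buf.popleft()

fifo = ReclockingFifo()
fifo.write_burst([0.1, 0.2, 0.3])           # a burst arrives early
print([fifo.read_one() for _ in range(4)])  # read out at a steady rate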
 
  • Like
Reactions: NigelB

Ratbastrd

Well-Known Member
Feb 23, 2017
44
11
138
No idea why any of this should be important. A large number of DACs use a substantial FIFO with subsequent reclocking, and there is no reason the specifics of the transport mechanism should play any role as far as data integrity and correct timing are concerned. Which leaves noise as the only possible explanation for network sound.
I disagree, but we'd need some very expensive test equipment to resolve the debate.
 

analogsa

Well-Known Member
Apr 15, 2017
382
122
175
Cascais
I disagree, but we'd need some very expensive test equipment to resolve the debate.


"very expensive test equipment" is fortunately quite common.

The only place it makes sense to take measurements is at the input of the actual converter circuit, usually an I2S signal. Don't you think this has already been done a few times, at all levels of resolution?
 
