Friendly debate: What do we actually know about networking?

Have there been tests that would intentionally throw a bad bit in music to see if it’s perceptible and what would it sound like? I’d be curious to see because I don’t think it would sound like static since the sample rate should be high enough to not destroy the song but it would be interesting to know if noise could actually change a bit while also leaving the music intact.

@EvilGnome6 How is the network able to make sure the bits arrive exactly on time? We know that digital jitter plays a pretty big part in sound quality, so I’m curious what makes networking so perfect. Things like this would be helpful to understand, rather than “it’s all placebo” etc. Thanks

Sidenote - it is interesting that audiophiles knew digital was pretty crappy compared to analog when it was introduced, but didn’t really have the understanding why; they just knew it by listening. Now we have terms like jitter, galvanic isolation, etc., and for some reason we forget that these things didn’t have an explanation not that long ago.

FWIW, it’s awesome when an innocent bystander (e.g. wife) says, “Whoa, that sounds different, did something change??”

I did that to my brother about 20 years ago when I changed a TOSlink to AES/EBU from my CDT to DAC. We were both living in Silicon Valley and he is an EE. His face was priceless when I told him what I did. “WTF that shouldn’t matter…”

4 Likes

There is no mechanism in practice for the bits to change, but you’re also transmitting analog noise along those wires, and that can make a difference to the timing (read: jitter) of the final signal IF it isn’t filtered out well by the devices.

As an interesting aside, jitter was assumed to be inaudible when CD was first introduced, so no one measured it, until people started to report that more physically robust transports sounded better. Which at first blush makes no sense: a more robust transport doesn’t deliver different bits. It took some time for (I think it was) Cambridge Audio to identify the mechanism; basically, the more robust transports resulted in less jitter.

The point being that mechanisms for differences in perception are not always obvious.

3 Likes

Sure, I’ve done that test hundreds of times.
I had the misfortune to write audio drivers for various game consoles over the years; I’ve spent more hours of my life listening to 440 Hz sine waves with the occasional missed bit than I would ever want to add up.

Short version: assuming it’s in the upper part of the word, it’s likely quite audible and it’s not subtle, and that is not the difference you’re hearing with network changes.
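To see why a flipped bit in the “upper part of the word” is not subtle, here’s a minimal sketch (names and sample values are illustrative, not from any real driver): it flips one bit of a signed 16-bit PCM sample from a 440 Hz sine and shows the size of the resulting error.

```python
import math

def flip_bit(sample: int, bit: int) -> int:
    """Flip one bit of a signed 16-bit PCM sample (two's complement)."""
    u = sample & 0xFFFF                        # view as unsigned 16-bit
    u ^= 1 << bit                              # flip the chosen bit
    return u - 0x10000 if u >= 0x8000 else u   # back to signed

# One cycle of a 440 Hz sine at 44.1 kHz, near full scale.
rate, freq, amp = 44100, 440.0, 30000
samples = [round(amp * math.sin(2 * math.pi * freq * n / rate))
           for n in range(rate // 440)]

original = samples[25]
corrupted = flip_bit(original, 14)   # a high-order ("upper") bit
print(original, corrupted, abs(corrupted - original))
# Flipping bit 14 changes the sample by 2**14 = 16384 -- roughly
# half of full scale, heard as a sharp click or pop.
# Flipping bit 0 changes it by 1, buried far below the signal.
```

The same flip in the bottom bits is a one-least-significant-bit error, which is why only errors high in the word are obviously audible.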

You can assume, if your network is not broken, that your streamer/DAC will see exactly the bits in the source file. But it also sees all the analog noise on the lines; that just doesn’t directly impact its ability to extract the bits.

There is a better demo you can do if you can find an old firewire DAC.
Unlike USB, FireWire cables are symmetric (same connector at both ends), but commonly they aren’t actually identical at both ends: at one end the shield is normally grounded and at the other it isn’t, so you get an audible difference just by swapping the direction of the cable.
With the ground at the source end it carries all the shield noise into the source, and the other way around it carries it into the DAC.

FWIW the one part of that I can’t justify is how small the noise signal must be in that case, and how much it impacts the output. But at some level noise on a power cable makes even less sense.
What I’ve never seen is any study that tries to quantify how much noise and what specific characteristics it has to impact perception of a signal.

1 Like

That’s why AC cables are grounded at the wall end.

A lossless music “stream” requires 1.44 Mbps. The typical modern network transfers data at 1,000 Mbps (minus some overhead losses). Let’s just say that’s roughly 500 times faster than needed. The network isn’t going to sloooowly trickle out those bits at the required rate. It’s gonna send a chunk of data at 1,000 Mbps and then sit around waiting 499 times longer than that took for the next chunk of data to be requested. If something is corrupted in transfer (very rare), upper-level protocols will request that the chunk be resent. When 499 time slices out of 500 are spent waiting around, there’s ample time for this to happen.

On the receiving end, a chunk of data makes its way up the network stack. Once the circuit/firmware feeding the DAC has trickled out enough bits and is ready to receive more into the buffer it’s trickling out of, it sends a request for the next chunk.
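The burst-then-wait arithmetic above can be sketched in a few lines. This is a toy model only (the rates are the round numbers from the post, not measurements from any real streamer):

```python
# Toy model of the burst-then-wait pattern of a network audio stream.
STREAM_RATE = 1.44e6   # bits/s needed by a lossless 16/44.1 stereo stream
LINK_RATE = 1.0e9      # bits/s of a gigabit link
CHUNK = 1.44e6         # request one second of audio at a time

transfer_time = CHUNK / LINK_RATE    # how long the burst takes
playback_time = CHUNK / STREAM_RATE  # how long that chunk lasts
idle_fraction = 1 - transfer_time / playback_time

print(f"burst: {transfer_time * 1000:.2f} ms, "
      f"playback: {playback_time:.2f} s, "
      f"idle: {idle_fraction:.4%}")
# The link is busy well under 1% of the time; nearly all of each
# second is slack in which a corrupted chunk can be detected,
# re-requested, and resent before the playback buffer runs dry.
```

With these numbers the burst takes about 1.4 ms out of every second of playback, which is the “499 out of 500 time slices” point in concrete form.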

2 Likes

Does the receiving circuit have a potential ground plane that can float and be susceptible to noise and current leaks?
I’m seeing this more and more now in grounding streamers, DACs, etc.

If digital signals were always perfectly received and perfectly interpreted, timed precisely, a DDC would do absolutely nothing.

We’re not talking about the network not being able to deliver perfectly reconstructed data from point A to point B; we’re talking about noise introduced into the transport medium that gets into a streamer/DAC, and then what has to be done to take that perfect data stream and turn it into an I2S signal the DAC chip can decode. Those are the areas that introduce problems.

Not everything Hans “dry toast” Beekhuyzen says here is gospel, specifically his use of the word jitter, but this video does illustrate how a digital I2S signal can be smeared by a DAC.

2 Likes

So the issue isn’t really the network modem but the receiver. Almost every DAC out there will need to convert the data to I2S, and that’s possibly where the issue comes in. How does the DAC handle the buffered information? Does every network router operate the same way, transferring all the data as quickly as possible? When does the data check happen, and is it always the same no matter which router or network you have, i.e. after everything is transferred, after parts are transferred, etc.?

TBH I’m not super interested in network switches. The thing that I find quite unbelievable are SFP modules and cables making a difference. Unfortunately there’s more anecdotal evidence to suggest that they do make a difference and not enough explanation of why they should never make a difference.

1 Like

Grounding is “complicated”, or at least more complex than just something is or isn’t grounded.
A single device can have one or more grounds and any of them could be tied together or floating.
It’s why ground loops are so painful to find and solve.

The ground connections that have always been common on phono preamps, and are now turning up on DACs, streamers, etc., are about trying to share a ground between equipment to reduce the noise that differing grounds can themselves introduce.

The fact is any noise anywhere (signal lines +ve, -ve, ground) can make it onto a line that eventually is involved in being interpreted as a clock. There is crosstalk between adjacent lines on circuit boards, the same way there is crosstalk between wires in a cable.

If external noise entering via a digital line were not a thing no one would try and galvanically isolate the USB portion of a DAC.

2 Likes

I hate his videos, because they are just handwavy enough to be misleading. And as a result they’re easily dismissed, along with the useful information in them.

2 Likes

The way these last posts read, there is a conflation between how the DAC handles the bits and how the networking piece handles them. Then throw in a DDC or streamer to cause more confusion. Knowing which piece of the transmission is being discussed would be helpful while reading these interesting arguments and counter-arguments about bits.

— Not an IT person

2 Likes

I get it, but it’s usually dismissed by folks who aren’t willing to watch it and then either disprove the handwavy commentary with substantive commentary or do further research to substantiate it. It’s YouTube, after all, and not conducive to 3-hour-long courses on digital noise.

We have to bear some responsibility to educate ourselves further after all. :slight_smile:

2 Likes

Just assume the digital transmission is entirely lossless in all practical cases.
It actually isn’t end to end, but flipped bit errors are so rare they don’t contribute to the “sound”.

Any wire inside or outside a device carries an analog signal relative to some ground, even the digital lines. That signal includes some noise, and that noise can be transferred to other wires via various mechanisms. It can also increase, because the wire acts as an antenna.

At some point something has to extract the digital signal from a line carrying that accumulated noise, and it will infer a clock based on the rising or falling edge of a pulse passing some voltage threshold. The noise can move the exact time at which that edge is detected (we’re talking picoseconds or less), and if it’s the audio clock, that results in jitter.
The more noise, the more jitter.
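The threshold-crossing mechanism can be shown with a deliberately simple toy model (the 1 V/ns slew rate and 1.65 V threshold are assumed round numbers, not from any real receiver): a clean edge is a linear ramp, and additive noise effectively moves the threshold, shifting the time of the crossing.

```python
import random

def edge_time(slew_v_per_ns: float, threshold: float, noise_v: float) -> float:
    """Time (ns) at which a noisy rising edge crosses the threshold.

    The clean edge ramps linearly from 0 V at the given slew rate;
    additive noise effectively shifts the threshold, and therefore
    the inferred clock instant. A toy model only.
    """
    return (threshold - noise_v) / slew_v_per_ns

random.seed(0)
slew, threshold = 1.0, 1.65          # 1 V/ns edge, 1.65 V threshold

for noise_rms in (0.01, 0.05, 0.2):  # volts of noise on the line
    crossings = [edge_time(slew, threshold, random.gauss(0, noise_rms))
                 for _ in range(10_000)]
    mean = sum(crossings) / len(crossings)
    rms_jitter = (sum((t - mean) ** 2 for t in crossings)
                  / len(crossings)) ** 0.5
    print(f"noise {noise_rms * 1000:5.0f} mV RMS -> "
          f"jitter {rms_jitter * 1000:6.1f} ps RMS")
# With a 1 V/ns slew rate, edge jitter scales directly with noise
# amplitude: roughly 1 ps of RMS jitter per mV of RMS noise.
```

The scaling is the point: in this model jitter is just noise divided by slew rate, which is also why faster, cleaner edges are less sensitive to a given amount of line noise.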

So let’s say my computer is generating lots of noise. It transmits a portion of that over the Ethernet wires (which will add a very small amount of their own, because they act as antennas) to the switch, which probably attenuates it somewhat and adds some of its own. That then gets passed over another Ethernet cable to the streamer. The streamer tries to remove as much of that noise as it can, but it will add some of its own, and passes it to the DAC, which again tries to remove as much noise as possible, but adds some of its own, and that impacts the exact clocking of the digital signal that’s converted to analog, introducing jitter.

The open question is how much noise from where and how much effect.

Hans does have a video somewhere measuring a difference in jitter at the point of D-to-A conversion based on, I think, changing to an audiophile network switch.

FWIW, from my personal experiments with networking gear, the effects are a lot more subtle than many other things. On my wired network, optically isolating the NS1 from the rest of the network was a marginal win, but it’s an expensive streamer that likely does a good job of isolating itself.
I haven’t noticed a difference I care about when swapping network cables. And I should clarify what I mean by that: it isn’t necessarily that I can’t hear a change; my rule of thumb is that if I have to swap multiple times to establish a preference, I don’t care.

But only a very small part of my network is wired.

3 Likes

How the DAC handles the information is beyond the scope of “networking”. From my perspective, the “network” consists of the Network Interface Cards (NICs) of the two devices communicating and everything in between (switches, routers, cables, etc.) Essentially, every switch/router is just a box with a bunch of NICs, but the whole cloud of devices and cables between the source and destination NICs is “the network”.

All network devices operate at a specific transfer rate. Unless the interface is configured to a fixed rate, an autonegotiation protocol will determine the highest mutually compatible rate between two NICs, and all data will be transmitted at that rate between those two interfaces. The only way the effective transfer rate is “slowed” is through “windowing”, in which the two endpoints negotiate how much unacknowledged data can be in flight at once.

TCP/IP frames contain a header, the actual data transmitted, and a trailer. In the header you’ll have such things as address information (who is this for?) and sequence numbers (so frames can be sorted into the correct order if they arrive out of order). The trailer contains a checksum so the receiving network card can tell immediately if a frame is corrupt. If it is, the frame is discarded and retransmission of that sequence number is requested.
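A minimal sketch of that header/payload/trailer integrity check, using a CRC-32 as the checksum (the frame layout here is a made-up toy, not the actual Ethernet or TCP wire format):

```python
import struct
import zlib

def make_frame(seq: int, payload: bytes) -> bytes:
    """Toy frame: 4-byte sequence number + payload + CRC-32 trailer."""
    body = struct.pack(">I", seq) + payload
    return body + struct.pack(">I", zlib.crc32(body))

def check_frame(frame: bytes):
    """Return (seq, payload) if the CRC matches, else None (drop it)."""
    body, (crc,) = frame[:-4], struct.unpack(">I", frame[-4:])
    if zlib.crc32(body) != crc:
        return None                  # corrupt: discard, request a resend
    (seq,) = struct.unpack(">I", body[:4])
    return seq, body[4:]

frame = make_frame(7, b"audio chunk")
assert check_frame(frame) == (7, b"audio chunk")

corrupted = bytearray(frame)
corrupted[6] ^= 0x01                 # flip a single bit "in transit"
assert check_frame(bytes(corrupted)) is None   # receiver drops it
```

Even a single flipped bit fails the checksum, which is why corruption shows up as a dropped-and-retransmitted frame rather than as a subtly wrong sample reaching the DAC.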

2 Likes

Thanks for answering. Do you happen to know what differences there are in SFP modules? Why would anyone buy a more expensive one, and do the technologies really matter? I would think there should be no differences, even for networking purposes, so I’m surprised that I know many people who have heard differences there.

In network systems, the noise doesn’t accumulate as you’re suggesting. At every point of transfer, all noise is completely stripped from the analog signal when it’s converted back to bits. If noise corrupts the data, the error detection encoded within the frames will catch it; those frames are discarded and retransmission is requested.

2 Likes

Other than providing modularity for different media (fiber vs. copper), nothing. The interchangeability of SFPs demonstrates the robustness of network design: you can change out components at different layers, communicate over completely different media, and have seamless interoperability. Both fiber and copper SFPs are capable of transmitting and receiving trillions of bits without any lost or corrupt frames.

1 Like

I’ll throw in one comment regarding noise in electrical circuits (because I think the benefit of improved network setups, if there is one, is due to analog noise coupling over the Ethernet cable and into the receiving circuit):

How power and ground noise couple into an electrical circuit is not always obvious. In principle testing it is straightforward (inject sinusoids over a range of amplitudes and frequencies through an impedance representative of the system it is connected to), but getting these results to match between simulation and measurement is tricky. And the actual mechanism in the circuit for this noise to degrade circuit performance is not always obvious. Typically what we see is that noise at lower frequencies (<10 MHz) rectifies across parasitic diodes, which then creates either voltage or current offsets in the circuit, thereby degrading performance. At higher frequencies the mechanism is usually coupling through parasitic capacitances, which can then create similar offsets and performance degradation.
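The rectification mechanism has a simple numeric illustration (an idealized diode and made-up amplitude, not a model of any specific circuit): an injected sinusoid averages to zero, but after an ideal diode passes only its positive half-cycles, the average is a nonzero DC component of amplitude/pi, the kind of offset that can shift a bias point.

```python
import math

def dc_offset_after_diode(amplitude: float, n: int = 100_000) -> float:
    """Average of an ideal-diode-rectified sinusoid over one full cycle.

    A symmetric sinusoid averages to zero, but after rectification only
    the positive half-cycle remains, leaving a DC component near
    amplitude / pi.
    """
    samples = (max(0.0, amplitude * math.sin(2 * math.pi * k / n))
               for k in range(n))
    return sum(samples) / n

a = 0.1  # 100 mV of injected noise (illustrative)
print(dc_offset_after_diode(a), a / math.pi)
# The rectified average lands near amplitude/pi, even though the
# injected noise itself averages to exactly zero.
```

Real parasitic diodes are far from ideal, but the same asymmetry is what turns zero-mean injected noise into a measurable offset.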

I’ve done this type of modeling and testing for CAN and LIN transceiver designs going into cars. When you either directly inject or radiate such noise sources into the circuit, you can easily measure the loss in circuit performance (even if still in spec, the performance is lower).

At the end of the day, any noise coupled into an electrical circuit through any node has the potential to degrade circuit performance. And from an Ethernet connection perspective, this sort of analog noise has nothing to do with packet timing/loss/jitter.

Given how sensitive most circuits are to this type of noise injection, it’s a real testament to the quality of design work going into these products.

3 Likes