Something about "reclock" & "dejitter", which I couldn't comprehend...

preston8452 · New member · Joined Jan 5, 2022 · Messages: 25 · Location: Arizona
Hi y'all,

Firstly, sorry to bring up this old question again, but I recently received this inquiry from a friend who is also getting into digital streaming. As we were discussing whether a network switch would help or not, like everyone else, he sort of struck me with this: "Based on some research, I found that most high-end or high-quality digital audio devices reclock and dejitter signals on arrival anyway, so what's the possible benefit of doing that in a network switch prior to arrival?"

I was speechless lol, because I didn't know about this, and I actually bought a network switch myself, so I seriously would like to prove my point to him!

If you guys have some insightful opinions regarding this question, please do share, I'm dying to learn...

Best,
 
How does the signal receiver know if the signal has been previously affected by jitter? How does it ‘dejitter’ the received signal?
 
The subject of network switches is an interesting one, since switched Ethernet is packet-based, with error detection, correction, and retransmission modalities built into the standard, along with robust galvanic isolation. So technically it shouldn't matter. It's not about 1's and 0's, for sure.
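To illustrate the error-detection part (a Python sketch, not actual NIC logic): every Ethernet frame carries a CRC-32 frame check sequence, so a frame corrupted in transit is detected and dropped rather than delivered with flipped bits. The bytes here are just a stand-in payload.

```python
import zlib

frame = bytes(range(64))       # stand-in for an Ethernet frame's payload
fcs = zlib.crc32(frame)        # Ethernet's FCS is a CRC-32 over the frame

corrupted = bytearray(frame)
corrupted[10] ^= 0x01          # flip a single bit "in transit"

# The receiver recomputes the CRC; a mismatch means the frame is discarded
# (and, at the TCP layer, the lost data gets retransmitted).
print(zlib.crc32(bytes(corrupted)) == fcs)  # False: corruption detected
```

CRC-32 catches all single-bit errors by construction, which is why "the bits arrive intact or not at all" is a safe assumption on a working network.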

But audiophiles globally do hear an improvement when they add these devices to their networks, and unless there is some sort of mass delusion going on (entirely possible), then there is something going on here that goes beyond 1's, 0's, and even jitter.

In the case of asynchronous isochronous USB, the DAC itself does all the reclocking, and the DAC's internal clocks determine the performance.

In the case of other protocols like AES and S/PDIF, the streaming source and its clock determines the timing and jitter performance of the system.

If you have a device that accepts Ethernet on one end and outputs to a DAC on the other, you have the possibility that noise on the Ethernet input will propagate through the device and end up impacting the operation of the DAC. A properly designed streamer should mitigate these effects, but since these switches do make a difference in some installations, the conclusion could be that not all streamers are created equal in their ability to isolate Ethernet network noise from their outputs and grounds.
 
"robust galvanic isolation"

I've read (if I'm comprehending what I've read) that poor galvanic isolation is easy to end up with in a system and is a somewhat common cause of sub-optimal sound. It's also high on the list of reasons why "it's not all 0s and 1s."

Others will know better I'm sure.
 

Jitter... well, can you actually hear it?
You can find out here... Archimago's Musings: DEMO / MUSINGS: Let's listen to some jitter simulations with sideband distortions...

And if you are feeding a digital USB signal to an asynchronous DAC, then the clock inside the DAC does all the work, and there is no need for reclocking.

Most of today's DACs are asynchronous, and they can also handle jitter.
 

Based on the question, I'm assuming there is a network delivering audio data to a streamer, then going to a DAC. The streamer may be built in to the DAC.

There are at least two different protocols involved here: the network protocol and the digital audio protocol. Because any network communication needs to be translated to digital audio somehow, you can consider the two halves of the transmission independently if you assume the "translation block" in the middle (i.e. where the network data is received, interpreted, and then sent out as digital audio data) is ideal, or you can consider the translation block separately.

Since the network transmission uses a reliable communications protocol, jitter (or anything else) that actually corrupts the data will cause that packet to be rejected and the data to be resent. If this happens enough that the somewhat-real-time conversion to digital audio is delayed, you will notice a gap/pause in the audio playback. You may experience this when watching streaming video, when everything pauses and it has to rebuffer, or buffer more, to catch up. (This is not the same thing as frame drops.)
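That failure mode is easy to model. Here's a toy simulation (all the timing numbers are invented for illustration): packets arrive with variable network delay, the player drains its buffer at a fixed rate, and it must stall whenever the buffer runs dry before the next packet shows up.

```python
# Toy playback model: one delayed (e.g. retransmitted) packet forces a stall.
arrival_times = [0.0, 0.1, 0.2, 0.9, 1.0, 1.1]  # seconds; 4th packet is late
packet_duration = 0.1                           # each packet holds 100 ms of audio

play_clock = 0.0   # the time up to which playback has data
stalls = 0
for t in arrival_times:
    if t > play_clock:             # buffer ran dry before this packet arrived
        stalls += 1                # audible gap / "rebuffering"
        play_clock = t             # playback resumes once data shows up
    play_clock += packet_duration  # consume this packet's worth of audio

print(stalls)  # 1: one audible dropout caused by the late packet
```

The point is that network problems on a reliable transport show up as gaps, not as degraded-but-continuous audio.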

Digital audio signals, like S/PDIF, have been making use of some method to deal with jitter for many decades. The issue has been that, depending on the method used, while the data will be captured and interpreted correctly (absent extremely bad jitter), it may carry digital audio signal jitter through to the clock that drives the digital-to-analog conversion process. In other words, jitter on the incoming S/PDIF signal would result in jitter on the outgoing analog audio signal. (How much jitter is needed to be audible is the subject of research papers.)

But a few decades ago (instead of many decades ago), people started doing things differently in order to isolate the incoming jitter from the DAC stage: buffering and re-clocking. Extreme jitter would still be a problem, but anything below a certain threshold would be negated/rejected. I'd hazard a guess that almost anything designed in the last 10 years does something like this, so jitter on the incoming digital audio signal should not be a concern anymore. For instance, pretty much everything measured by Stereophile shows high immunity to jitter these days.

Also, since the data transmission rate of the network is significantly higher than that of the digital audio stream, and since the translation block has to convert packetized data into bitstream data anyway, it effectively acts as a gigantic buffer-and-reclock stage. There is no reason for jitter at the TCP level to influence jitter on the S/PDIF stream unless the translation block is too underpowered (extremely unlikely considering the minimal requirements).
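For a sense of scale (back-of-envelope arithmetic, assuming 16-bit/44.1 kHz stereo PCM, the CD format):

```python
# Audio payload rate for 16-bit / 44.1 kHz stereo PCM.
audio_payload = 44_100 * 16 * 2        # 1,411,200 bits/s
# S/PDIF's biphase-mark line coding roughly doubles the rate on the wire.
spdif_line_rate = audio_payload * 2    # ~2.82 Mbit/s

gigabit = 1_000_000_000                # gigabit Ethernet link rate
print(gigabit / spdif_line_rate)       # ~354x headroom
```

With hundreds of times more link capacity than the audio stream needs, the translation block can fill its buffer in bursts and clock the audio out at leisure.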

If the network switch is actually impacting the sound, it is not going to be due to jitter or any sort of dejittering. It has to be something else, like Tom (W9TR) says.
 
TL;DR?
Jitter is just a boogeyman. :D

:p I tried to make my last paragraph a sort of TL;DR, but didn't put it in front.

I actually did some academic research into network jitter's (among other things) impact on real-time audio and video, many years ago. In real-time, you want to render as quickly as possible (i.e. minimal buffering) and drop any "late" data; otherwise you'll end up behind the sender or other participants. This matters for any communication that is more than one-way (e.g. video conferencing) and for broadcasts where keeping up to date with the sender is important (e.g. sports, news, presentations). So poor network conditions can certainly have a noticeable impact on the quality of experience.
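The drop-late-data policy can be sketched like this (a toy jitter buffer; the 40 ms budget and all timings are invented for illustration): each packet has a scheduled play-out instant, and anything arriving after its deadline is discarded rather than played late.

```python
# Toy real-time jitter buffer with a fixed latency budget.
latency_budget = 0.04                            # 40 ms of buffering (assumed)
send_times   = [0.00, 0.02, 0.04, 0.06, 0.08]    # seconds
arrive_times = [0.01, 0.03, 0.09, 0.07, 0.09]    # third packet badly delayed

played, dropped = [], []
for seq, (sent, arrived) in enumerate(zip(send_times, arrive_times)):
    deadline = sent + latency_budget             # scheduled play-out instant
    (played if arrived <= deadline else dropped).append(seq)

print(played, dropped)  # [0, 1, 3, 4] [2]: the late packet is discarded
```

This is the opposite trade-off from a music streamer, which can buffer seconds of audio and never has to drop anything; that's why network jitter matters for a video call but not for playback of a file or stream.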
 

My non-audiophile switches don't have poor network conditions; everything hooked up to them does its job. Despite that, with the audiophile Silent Angel Bonn, music sounds a little bit better. As for the reason, I have no idea.
 