Hi-Resolution Audio and Sonos


This topic is intended to uncover the truth about which Sonos products (if any) actually support Hi-Res audio (e.g. 24-bit/192kHz, DSD, MQA). Furthermore, I would like to exclude personal opinions about PCM (pulse-code modulation) vs DSD (the format used in SACD). Whether you can hear the difference between a 256k AAC version and an uncompressed 24-bit or DSF file is a worthy discussion, just not in this topic. Again, I would really like to understand what Sonos is doing, or planning to do, to support Hi-Resolution audio. There is a flood of Hi-Res content being released from master studio recordings, and the trend is on the rise.



And some people blindly believe what the books say without trying it out themselves.

I used to believe the books too, until I tried it out myself to see whether high-res made any difference, and sure enough it does.

I am an electronics engineer myself. The way digital music is stored, it does not capture all the information; there is definitely loss based on the sampling rate. This I learnt at college.

Some say you can’t make out the difference, but plenty of people can make out some difference. I think you just have to hear it for yourself and figure out whether you can.

I come with no bias. I heard it myself and made the determination that I can hear better at high-res. I play back my own music at high-res only, because otherwise it loses a lot of the “warmth”, i.e. the mid and low frequencies. That is the only difference I can make out; everything else feels the same between CD quality and high-res.

I have come to the conclusion that hi-res audio is important in production and mixing; when the final result is sent to the listener, 16-bit/48kHz is perfectly enough IF the encoding is done right. Apparently many MP3 and similar encoders are of very poor quality and produce the bad audio that we experience. So either use 16-bit/48kHz lossless OR use a quality encoder; when using a music provider like Google, you are at their mercy...

An Amazon Firestick isn’t capable of grouping itself with other Amazon Firesticks in order to play synced HD Video on multiple TVs. It’s moot to compare standalone devices with the particular requirements of multiroom systems.

As we try to point out, your Line-In experience is not relevant.

Streaming of HD files (or any other type for that matter) on Sonos would be akin to the way your Firestick works. 

Right, so you are saying that grouped wireless play of HD files will work as flawlessly as my Firestick does HD TV. 

@ratty I understand what you are saying - all I am saying at the end is that my experience using line in over the years suggests that HD group streaming wirelessly is going to be quite interesting to read about whenever S2 commences with these streams.

As we try to point out, your Line-In experience is not relevant.

Streaming of HD files (or any other type for that matter) on Sonos would be akin to the way your Firestick works. 

@ratty I understand what you are saying - all I am saying at the end is that my experience using line in over the years suggests that HD group streaming wirelessly is going to be quite interesting to read about whenever S2 commences with these streams.

And it seems to me that the now-ancient Connect, as the source, is flawlessly doing all it needs to within 70 ms; otherwise wired play would not work either, which it does, with the 70 ms delay causing no noticeable lip-sync issues. The problem is with the other units in the wireless group. Or, in my case now, with the sole such unit if I try to add it.

The Firestick plays HD streams flawlessly, wirelessly received from the base station, with no pixelating or degraded video. But the Play:1, when grouped with the Connect, cannot wirelessly play just the audio part of the same signal over SonosNet without stuttering.

Apples and oranges.

The Firestick is fetching a file, in chunks, at its leisure from an Amazon server. The buffer size can be made large enough to ride out network perturbations. Latency is largely irrelevant. 

Sonos has to encode, deliver and render a stream within ~70ms of receiving it. The buffer size is constrained by the latency requirement.
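To put rough numbers on that contrast, here is a back-of-the-envelope sketch in Python. The ~70 ms figure and the stream formats come from this discussion; the multi-second Firestick buffer is my own assumption for illustration, not a measured value.

# Rough comparison of buffering headroom: a video streamer that can buffer
# many seconds versus a Line-In path that must stay within ~70 ms end to end.
# Figures are illustrative assumptions, not measured Sonos internals.

def pcm_kbps(bits, khz, channels=2):
    """Raw PCM bit rate in kilobits per second."""
    return bits * khz * channels

cd_kbps = pcm_kbps(16, 44.1)      # ~1411 kbps
hires_kbps = pcm_kbps(24, 192)    # 9216 kbps

firestick_buffer_s = 10.0         # assumed multi-second video-style buffer
line_in_budget_s = 0.070          # the ~70 ms budget quoted above

print(f"CD-quality PCM is about {cd_kbps:.0f} kbps; 24/192 PCM is {hires_kbps:.0f} kbps")
print(f"10 s of video-style buffering at {hires_kbps:.0f} kbps holds "
      f"{hires_kbps * firestick_buffer_s / 8 / 1024:.1f} MB")
print(f"A 70 ms budget at {hires_kbps:.0f} kbps holds only "
      f"{hires_kbps * line_in_budget_s / 8:.1f} KB")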

 

 

Not the way it works at all.  The master unit sends out the data stream to all grouped devices, and play doesn’t start on any of the group until all devices have sufficient buffering.  Then, the timing logic/cues in the data are utilized for all to play from their buffers in sync. 
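A minimal sketch of that idea in Python: fill every player's buffer first, then let a shared timeline drive playout. The class name, chunk size and readiness threshold are assumptions for illustration, not Sonos code.

import heapq

class GroupedPlayer:
    def __init__(self, name, min_buffer_chunks=8):
        self.name = name
        self.buffer = []                       # min-heap of (play_at, chunk)
        self.min_buffer_chunks = min_buffer_chunks

    def receive(self, play_at, chunk):
        heapq.heappush(self.buffer, (play_at, chunk))

    def ready(self):
        return len(self.buffer) >= self.min_buffer_chunks

    def render_due(self, now):
        # Emit every chunk whose scheduled play time has arrived, in order.
        while self.buffer and self.buffer[0][0] <= now:
            play_at, chunk = heapq.heappop(self.buffer)
            print(f"{self.name}: chunk {chunk} at t={play_at:.3f}")

# The coordinator stamps each chunk with an absolute play time a little in
# the future; playback starts only once every member reports ready().
players = [GroupedPlayer("Connect"), GroupedPlayer("Play:1")]
for i in range(10):
    for p in players:
        p.receive(0.5 + i * 0.02, i)           # 20 ms chunks, assumed size

if all(p.ready() for p in players):
    for t_ms in range(500, 720, 20):           # walk the shared timeline
        for p in players:
            p.render_due(t_ms / 1000)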

That is as good an explanation as any as to how group play uses buffers in every unit involved. So my lazy thinking was correct after all!

But then it makes the Firestick question relevant as seen in my specific use case below:

A Firestick-supplied TV, with its stereo audio line out wired to the Line-In on a Connect at one side of the room. On the other side, 15 feet away in unimpeded line of sight, a Play:1 wired to a 2011-make Apple base station.

The Firestick plays HD streams flawlessly, wirelessly received from the base station, with no pixelating or degraded video. But the Play:1, when grouped with the Connect, cannot wirelessly play just the audio part of the same signal over SonosNet without stuttering. Sonos Support was involved, but they ended up with the ridiculous statement that “TV audio streams are too heavy for the Connect” and washed their hands of the matter. I wired the Connect back to the router (learning how to make and crimp Ethernet cables in the process, which was a cool bit of learning), as well as the other Sonos units that I need for grouped play of the TV audio. And even now, a wireless Play:1 at the other end of the open space will stutter when added to this group, but that is not a big deal for me, because the wired group play now works flawlessly everywhere I need it to.

So it will be interesting to read about what happens when the shiny new S2 systems start playing HD streams wirelessly in grouped mode.

Kumar,

Consider the case where you must attend a very important meeting and you must be exactly on time. You will create a “buffer” by planning to arrive at the venue a little early; then, using your wristwatch (which has been synchronized to local time), you’ll walk in exactly on time. Actually, there could be multiple “buffers” as you pass through train and bus stations along the way.

This is the sort of issue that SONOS must solve -- wired or wireless, mesh or not. Audio must be delivered to the speaker on time -- regardless of little traffic jams along the way.

Poor choice of words on my part, indeed chatter is a better term. Thank you.

‘Chatter’, maybe. ‘Crosstalk’ traditionally has connotations of undesirable audio effects.

Timing information is exchanged within groups (and bonds) via SNTP messages in order to synchronise playback.
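For reference, the textbook SNTP offset and delay arithmetic looks roughly like this. It is the generic protocol math shown in Python purely as illustration, not Sonos's actual exchange.

# Classic (S)NTP estimate from four timestamps:
#   t1 = client send, t2 = server receive, t3 = server send, t4 = client receive.
# A player could slave its playout clock to the coordinator's using the offset.

def sntp_offset_and_delay(t1, t2, t3, t4):
    offset = ((t2 - t1) + (t3 - t4)) / 2       # how far the local clock is off
    round_trip = (t4 - t1) - (t3 - t2)         # network delay, both directions
    return offset, round_trip

# Example: the reference clock runs 5 ms ahead, one-way delay is 3 ms.
offset, rtt = sntp_offset_and_delay(t1=100.000, t2=100.008, t3=100.009, t4=100.007)
print(f"estimated offset: {offset * 1000:.1f} ms, round trip: {rtt * 1000:.1f} ms")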

Which I suspect is why there is an amount of crosstalk between Sonos speakers, so that they’re all aware of each unit’s timing.

I don't get this; how can a downstream player in a group use a buffer to ensure that it plays music in a stable way and still remain in sync with the player upstream of it in the group? It will then always lag behind the upstream player.

I suggest you re-read buzz’s post.

A downstream player doesn’t play data as soon as it gets it. Playout is regulated by its internal clock, which is frequently synchronised across the group. 

I don't get this; how can a downstream player in a group use a buffer to ensure that it plays music in a stable way and still remain in sync with the player upstream of it in the group? It will then always lag behind the upstream player.

 

Not the way it works at all.  The master unit sends out the data stream to all grouped devices, and play doesn’t start on any of the group until all devices have sufficient buffering.  Then, the timing logic/cues in the data are utilized for all to play from their buffers in sync. 

Buffering is always used. It’s impossible to deliver a real-time stream over an asynchronous network without a playout buffer. 

 

I don't get this; how can a downstream player in a group use a buffer to ensure that it plays music in a stable way and still remain in sync with the player upstream of it in the group? It will then always lag behind the upstream player.

Managing SonosNet bandwidth demand in groups isn’t quite that clear cut. A standard rule of thumb is indeed to start from a wired ‘room’ when building the group.

But imagine a scenario where the ‘Group Coordinator’ node is several wireless hops out from the wired network, yet its group peers are very close to it. Since the intra-group streams should go peer-to-peer by direct routing -- with good signal strength and low interference -- then everything could be absolutely fine. 
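One crude way to reason about the trade-off is simply to count how many times the same stream has to cross the air; link quality matters just as much, as noted above, but the count gives a first-order feel. A small illustrative sketch in Python (the scenarios are hypothetical):

# Count how many times one stream is sent over the air, under the simplifying
# assumption that each wireless hop to the Group Coordinator and each wireless
# group member costs one full transmission of the stream. Illustrative only.

def air_crossings(hops_to_gc, wireless_members):
    # hops_to_gc: wireless hops from the wired network to the Group Coordinator
    #             (0 if the GC itself is wired)
    # wireless_members: grouped players the GC feeds over the air
    return hops_to_gc + wireless_members

print(air_crossings(hops_to_gc=0, wireless_members=3))  # wired GC, 3 wireless rooms -> 3
print(air_crossings(hops_to_gc=2, wireless_members=3))  # GC two hops out            -> 5
print(air_crossings(hops_to_gc=2, wireless_members=0))  # distant GC, peers wired    -> 2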


 

With respect to Groups, players might (depending on which nodes are wired and which are wireless) face the polluted wireless environment multiple times: once to fetch the source, once for each member of a Group, plus once for a stereo-paired unit in the Group. In a SonosNet wireless mesh, the data must deal with the wireless environment as communication is passed through multiple nodes. While a mesh can be self-healing, there is actually more data “in the air” than in a scheme built around a central server and access point. In a polluted environment, total data is also elevated as corrupt packets are re-transmitted. At some level of pollution and traffic, any scheme will saturate and break down.

I sort of understand this: the bandwidth is shared between all devices, and the demand can multiply quickly if there are re-transmissions. I’m trying to figure out the ‘best practice’ for minimising the wasted bandwidth in the SonosNet mesh.

I have learnt that starting a group from a wired device gives me the best results; is there any advantage in starting it from a wired stereo pair? I recall it’s the left speaker that is best to be wired, if possible?

Thanks!

 

Lossless compression will not compromise the quality of an HD stream.

 

Yes, but I am not sure that this will allow enough bandwidth saving to overcome stuttering in HD group play where no buffer can be used.

Buffering is always used. It’s impossible to deliver a real-time stream over an asynchronous network without a playout buffer. 

 

Lossless compression will not compromise the quality of an HD stream.

 

Yes, but I am not sure that this will allow enough bandwidth saving to overcome stuttering in HD group play where no buffer can be used.

And for Line-In, the compressed mode that is used for stable play in compromised WiFi environments is lossy compression, although even this is not something that is audibly a compromise to most unbiased ears.

Lossless compression will not compromise the quality of an HD stream.

Multiple buffers could be used because the packets are tagged with a time to deliver -- as long as the local clocks are in sync. A track recorded decades ago will not mind being handed off to multiple buffers in order to allow synchronized Group play.

With respect to Groups, players might (depending on which nodes are wired and which are wireless) face the polluted wireless environment multiple times: once to fetch the source, once for each member of a Group, plus once for a stereo-paired unit in the Group. In a SonosNet wireless mesh, the data must deal with the wireless environment as communication is passed through multiple nodes. While a mesh can be self-healing, there is actually more data “in the air” than in a scheme built around a central server and access point. In a polluted environment, total data is also elevated as corrupt packets are re-transmitted. At some level of pollution and traffic, any scheme will saturate and break down.

If an email or web page struggles with delivery, as long as delivery is ultimately successful, there is no major damage, but a late video or audio frame is virtually useless for its primary purpose.
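To illustrate how quickly the data “in the air” adds up for one grouped stream, here is a rough estimate in Python; the FLAC bit rate, member counts and retransmission rate are assumptions for illustration, not measured SonosNet numbers.

# Rough airtime demand for a single grouped stream. Each wireless member
# receives its own copy; a wirelessly bonded stereo speaker adds another copy;
# retransmission of corrupted packets inflates everything by the same factor.

def airtime_kbps(stream_kbps, wireless_members, extra_bonded=0, retransmit_rate=0.0):
    copies = wireless_members + extra_bonded
    return stream_kbps * copies * (1 + retransmit_rate)

flac_24_192_kbps = 5500   # assumed ~5.5 Mbps for a 24/192 FLAC stream

clean = airtime_kbps(flac_24_192_kbps, wireless_members=3)
noisy = airtime_kbps(flac_24_192_kbps, wireless_members=3, extra_bonded=1,
                     retransmit_rate=0.3)
print(f"3 wireless rooms, clean RF:          {clean / 1000:.1f} Mbps in the air")
print(f"+1 bonded speaker, 30% retransmits:  {noisy / 1000:.1f} Mbps in the air")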

I also realise that the comparison with Firesticks has a fallacy: they must use a hefty buffer to allow HD play before it degenerates to SD quality if the WiFi issues persist. But if they had to do grouped play in sync with other Firesticks, I am sure the downstream Firesticks would not be able to cope via WiFi.

I admit to lazy thinking.

The buffer will only serve for the source unit to get incoming streams without stuttering via the buffer. But since grouped play by definition has to be in perfect sync, no buffer can exist to remove stuttering in downstream units, because it will affect the sync. If this arises due to unavoidable circumstances, the only solutions are ethernet wiring or compressed streams if these serve to overcome the WiFi issues and allow stable music play.

And since it is silly to use compressed HD files, the solution for grouped play of these may have to be wiring alone if there are WiFi issues.

Although this compression may not be so silly - if a better master has been used, even compressed HD music will sound better than CD quality streams from not as good masters.

A question now: what then is the point of the options that Sonos now offers for audio delay on Line-In? Assuming that this is just a buffer/delay and no compression is involved, the source feed is via a wire, so why the need for any delay more than the 70 millisecond one that is needed for the ADC/DAC and other Sonos processes?

 Is there something in this that will allow HD audio streams to also work much better with the 2 second delay? Or are these two unrelated and therefore different aspects?

 

The “delay” is actually a buffer -- similar to Internet radio. Line-In is real time data. Delay does not imply compression.
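For a sense of scale of what that buffer holds, assuming the Line-In feed is uncompressed 16-bit/44.1 kHz stereo (an assumption for illustration; the delay values are the ones discussed above):

# Data held by a Line-In playout buffer of a given length, assuming an
# uncompressed 16-bit/44.1 kHz stereo feed. Illustrative arithmetic only.

def buffer_kib(delay_s, bits=16, sample_rate_hz=44100, channels=2):
    return delay_s * bits * sample_rate_hz * channels / 8 / 1024

for delay_s in (0.070, 2.0):
    print(f"{delay_s * 1000:>5.0f} ms of Line-In audio is roughly {buffer_kib(delay_s):.0f} KiB")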

When I had lossless streaming issues, I recall it was because I was initiating playback from a room that was wireless. The data had to come from the LAN via SonosNet to the room that I initiated the playback on, and then back again over SonosNet. If I initiated playback from a wired device, it would halve the amount of data and I didn’t get dropouts. This was a couple of years ago; things may have changed now.

The architecture remains the same: the first room contains the ‘Group Coordinator’, which fetches the stream and distributes it to the other group members. Whether a group will struggle with a wireless GC obviously depends on the local wireless conditions. 


@ratty: now that makes it more mystifying. If there is this buffer, why does Sonos have issues with wireless streaming of FLAC files, when Amazon devices easily do HD video+audio?

Personally I don’t have an issue with FLAC and wireless. SonosNet is not over-endowed with bandwidth however, and too many 192/24 streams could (unnecessarily) gum it up. It would be sustained throughput, not buffering, which would be the issue.
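Some rough numbers behind that sustained-throughput point; the FLAC bit rates are typical ballpark figures and the usable SonosNet capacity is my own assumption, not a published spec.

# How many concurrent streams fit into an assumed usable wireless budget.
# Bit rates are ballpark figures for typical material; capacity is an assumption.

stream_kbps = {
    "16/44.1 FLAC (typical)": 900,
    "24/96 FLAC (typical)": 2500,
    "24/192 FLAC (typical)": 5500,
}
usable_capacity_kbps = 20_000   # assumed usable SonosNet throughput (~20 Mbps)

for name, kbps in stream_kbps.items():
    print(f"{name}: roughly {usable_capacity_kbps // kbps} concurrent streams")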

When I had lossless streaming issues, I recall it was because I was initiating playback from a room that was wireless. The data had to come from the LAN via SonosNet to the room that I initiated the playback on, and then back again over SonosNet. If I initiated playback from a wired device, it would halve the amount of data and I didn’t get dropouts. This was a couple of years ago; things may have changed now.

Compressed or uncompressed relates to the amount of data that needs to be sent over the network. Obviously, less data will result in fewer music interruptions if there is a communication struggle. Lossless compression would be mandatory for HD streams, but this does not result in minimal network traffic.

Severe local issues can limit effective communication with the NAS and communication between the coordinator and members of a Group, stereo pair, or surrounds and SUB.
