Hi-Resolution Audio and Sonos



An Amazon Firestick isn’t capable of grouping itself with other Amazon Firesticks in order to play synced HD Video on multiple TVs. It’s moot to compare standalone devices with the particular requirements of multiroom systems.

None support higher than 48 kHz.

And the 'support' of 24-bit is evidently based on reading the 24-bit files then immediately truncating to 16-bit.

Yeah, I corrected that.

And you keep ruining my fun! 😠
Certainly his education extends far beyond your first year engineering textbook?
Somehow I doubt the problem is with the textbook.
I understand that this was banned by the original post, but I can’t resist.
Hires audio is pointless. Red book CD is as good as it gets and Sonos supports it. So who cares? Why bother investing time, effort, energy, disc space etc to something for absolutely no benefit whatsoever?
Whatever the truth - and I am agnostic - what relevance does this have for the reality of 99.999% of Sonos listening? None I would suggest. How many more units would Sonos sell if it supported HiRes? I don't know but I suspect not many.

But I am pretty sure nobody is going to persuade anyone else about sound quality by writing about it. On either side of the argument.

HiRes will come to Sonos if and when it makes commercial sense.
And there it is folks, when confronted with science, they are left with nothing but insults and personal attacks. Yet we are the "bullies"?

Tell you what golden ears, do that same test at your home using tracks from the same master run through any computer and any DAC you want, but run it through the A/B/X software on foobar2000. Anyone who can't be bothered to do that can hardly expect to have a valid opinion.
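For anyone actually running that foobar2000 ABX test, the score only means something with enough trials. A quick sketch (plain Python, illustrative figures) of how likely a given score is by pure guessing:

```python
# Scoring an ABX run: how likely is a result by pure guessing (p = 0.5 per trial)?
from math import comb

def p_by_chance(correct, trials):
    """Probability of at least `correct` right answers out of `trials` by guessing."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

print(round(p_by_chance(12, 16), 4))   # 0.0384: 12/16 just clears the usual 5% bar
```

A handful of trials proves nothing either way; sixteen or more is the usual minimum.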

Be sure to post the screenshots of your results!
People who think that 16/44 is 'perfect', the pinnacle of music reproduction, are rather like the idiots who believe that all amps sound the same, or cables, or DACs etc.
I just noticed this rather disreputable debating tactic: caricature and misrepresent the opposing views so that you can ridicule them. Disappointing.

I have another question for @nevalti. Who organised / funded the 'proper' test of hires he attended? Was it a hifi retailer with hires equipment to sell? Just interested to know, as I have known hifi retailers to organise such demos.
What is stopping Sonos from supporting 24 bit 192 khz?

The same thing that stops Sonos from doing so many things, it won't make them more money than what they are selling now.

Look at how long Sonos resisted doing a mobile/battery speaker and now we have one. Why, they saw a potential profit in it at this point.
By the very virtue of the fact that samples are taken, only an approximation of the original signal can be reproduced at any point in time - the rest is “filled in” = approximated. This is basic electronics. The more samples you take, the better you can reproduce the original signal, irrespective of its frequency. That just means higher fidelity = HIFI.
Before you get in any deeper I suggest you do at least familiarise yourself with how the Nyquist–Shannon sampling theorem actually works. Any signal with a frequency less than half the sampling rate can be reconstructed perfectly. No approximation. No stair-step jaggies. Those are marketing untruths for which the odd manufacturer has actually been censured by the regulatory authorities.
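The perfect-reconstruction claim is easy to verify numerically. A minimal sketch (NumPy, with an illustrative 2048-sample tone near Nyquist) that oversamples a 48 kHz signal by ideal band-limited interpolation and compares it with the true waveform between the samples:

```python
import numpy as np

fs, N = 48_000.0, 2048
f = fs * 810 / N                     # ~18.98 kHz test tone, below Nyquist (24 kHz)
n = np.arange(N)
x = np.sin(2 * np.pi * f * n / fs)   # the 48 kHz samples

# Ideal band-limited interpolation: zero-pad the spectrum, 16x oversample.
up = 16
X = np.fft.rfft(x)
Xp = np.zeros(N * up // 2 + 1, dtype=complex)
Xp[: len(X)] = X
y = np.fft.irfft(Xp, n=N * up) * up  # the reconstructed waveform BETWEEN samples

t_fine = np.arange(N * up) / (fs * up)
exact = np.sin(2 * np.pi * f * t_fine)
print(np.max(np.abs(y - exact)))     # down at machine precision: no stair-steps
```

The reconstruction error is at floating-point noise level; the waveform between the samples is recovered, not approximated.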

Here is a great summary of the science: https://people.xiph.org/~xiphmont/demo/neil-young.html

Besides, you said that your music was in 24 bit 48kHz. Unless you have superhuman hearing the frequencies between 22.05kHz and 24kHz simply will not register.

What I fear may have happened to your 24-bit recordings is that in conversion to 16-bit CD quality some clipping or loudness compression was introduced. Even if it did not, simply pushing samples to full scale (0dB) could have resulted in inter-sample peaks during reconstruction which exceeded full scale. This could lead to clipping on the loudest passages which might just be audible. Sensibly engineered 16-bit conversion leaves a few dB of headroom.
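The inter-sample peak effect is also easy to demonstrate. A sketch (NumPy, illustrative parameters): a sine at a quarter of the sample rate, phased so that every sample lands at digital full scale, actually swings about 3 dB above full scale between the samples:

```python
import numpy as np

fs, N = 44_100.0, 2048
n = np.arange(N)
x = np.sin(2 * np.pi * (fs / 4) * n / fs + np.pi / 4)  # every sample lands at ±0.7071
x /= np.max(np.abs(x))               # push the samples to digital full scale (0 dBFS)

# Band-limited 8x oversampling reveals the analogue peak between the samples.
up = 8
X = np.fft.rfft(x)
Xp = np.zeros(N * up // 2 + 1, dtype=complex)
Xp[: len(X)] = X
y = np.fft.irfft(Xp, n=N * up) * up

print(round(20 * np.log10(np.max(np.abs(y))), 2))      # 3.01 dB above full scale
```

This is why sensibly engineered conversions leave a few dB of headroom: a DAC's reconstruction filter has to swing through those inter-sample peaks.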
Good grief. 🙄

there's more polite ways to express

While I agree that we need to be civil, I trust that criticising a statement with reasoned argument is not confused with an attack on the person making it.


Imagine two independent tasks running inside of a player. One task attempts to keep the buffer full and the second task meters data out of the buffer at a constant audio rate. If the buffer is full the unit has several seconds to work through a communication issue. If the unit has been having communication issues for a while, the buffer may be virtually empty and any additional communication issues will result in an audible issue, usually a mute.

One can request data from the NAS at any time. Fetching a few seconds of audio content in order to fill the buffer can be done in a fraction of a second. If there is a large communication issue, the buffer can be fully rebuilt in a fraction of a second. Live Internet radio is different because this is real-time data. Let's say that SONOS delays the start of station play for two seconds while a buffer is built. Later, if there is a momentary communication issue, since the buffer cannot fetch future data, the buffer will shrink while maintaining audio. Eventually, the buffer will be exhausted and every small communication interruption will result in a mute. Restarting the station will refill the buffer and play will be uninterrupted until the buffer is exhausted again.

It’s a tug of war about how large the buffer should be. Large buffers, especially in the older units with limited memory, will limit the feature set. With respect to Internet radio, building a large buffer will delay start of a station. For example if an Internet Radio station builds a ten second buffer, station play cannot start for ten seconds and the user interface will seem sluggish.

I suppose that a player could build a history for Internet Radio stations and provide larger buffers for troublesome stations, but this requires player resources that may not be available in older units.
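The live-radio behaviour described above can be shown with a toy simulation (plain Python; the rates and stall times are made up, not Sonos internals): a live stream refills its buffer no faster than real time, so once a stall eats into the buffer it never recovers, while a file fetched from a NAS can rebuild its buffer quickly after each stall:

```python
# Toy model of the two tasks described above: a network task that refills the
# buffer (and occasionally stalls) and a playback task that drains it at a
# constant rate. All figures are illustrative, not Sonos internals.
def simulate(prebuffer_s, stalls, live, duration_s=30):
    """stalls: list of (start_s, length_s) network outages; 100 ms time steps."""
    buf = prebuffer_s
    for t in range(duration_s * 10):
        sec = t / 10
        stalled = any(s <= sec < s + l for s, l in stalls)
        if not stalled:
            # Live radio arrives no faster than real time (0.1 s of audio per
            # step); a file on a NAS can be fetched much faster (0.5 s per step).
            buf = min(prebuffer_s, buf + (0.1 if live else 0.5))
        buf -= 0.1                    # playback always consumes real time
        if buf < 0:
            return f"mute at {sec:.1f}s"
    return "uninterrupted"

stalls = [(10, 1.5), (20, 1.5)]
print(simulate(2.0, stalls, live=True))    # live: the second stall exhausts the buffer
print(simulate(2.0, stalls, live=False))   # file: buffer rebuilt after each stall
```

With identical outages, the file source rides them out and the live stream eventually mutes, exactly the asymmetry described in the post above.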


@ratty : now that makes it more mystifying - if there is this buffer, why does Sonos have issues with wireless streaming of FLAC files? When Amazon devices easily do HD Video+Audio?

Personally I don’t have an issue with FLAC and wireless. SonosNet is not over-endowed with bandwidth however, and too many 192/24 streams could (unnecessarily) gum it up. It would be sustained throughput, not buffering, which would be the issue.
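Some back-of-envelope arithmetic (plain Python; raw PCM figures, since FLAC ratios vary) shows why several 192/24 streams are a much heavier sustained load than CD-quality ones:

```python
# Raw PCM bitrates (stereo, uncompressed). FLAC typically reduces these by
# roughly 30-50%, but the relative ordering is what matters for SonosNet.
def raw_mbps(rate_hz, bits, channels=2):
    return rate_hz * bits * channels / 1e6

for label, rate, bits in [("CD 44.1/16", 44_100, 16),
                          ("48/24", 48_000, 24),
                          ("192/24", 192_000, 24)]:
    print(f"{label}: {raw_mbps(rate, bits):.2f} Mbps")   # 1.41, 2.30, 9.22
```

A single 192/24 stream carries over six times the raw data of a CD-quality one, and that multiplier applies to every wireless hop in a group.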

When I had lossless streaming issues, I recall it was because I was initiating playback from a room that was wireless. The data had to come from the LAN via SonosNet to the room that I initiated the playback on and then back again over SonosNet. If I initiated playback from a wired device, it would halve the amount of data and I didn’t get dropouts. This was a couple of years ago, things may have changed now.


The architecture remains the same: the first room contains the ‘Group Coordinator’, which fetches the stream and distributes it to the other group members. Whether a group will struggle with a wireless GC obviously depends on the local wireless conditions. 

First, stop posting the same matter in more than one thread.
Second, the link, in case you haven't realised, is a meta study, nothing original in itself. And it does not become scientific just because it uses the buzzword "meta"; it is just a study of articles on the subject, many non-scientific. Ergo, GIGO.
Third, it is from 2016 and there are rebuttals that we are also familiar with as under:

First, from someone that subscribes to the Hi Res philosophy at the mastering stage, who points out that all the study does is point out what needs to be done in a new study! :


And then a 16 page discussion on HA, nixing this mere collection of old articles:


And of course ranged against this, there are the Kool Aid drinkers of Hi Res aplenty that claim this as manna.

So, as always, East is East and West is West, and by their nature never the twain shall meet.

Lossless compression will not compromise the quality of an HD stream.

Multiple buffers could be used because the packets are tagged with a time to deliver -- as long as the local clocks are in sync. A track recorded decades ago will not mind being handed off to multiple buffers in order to allow synchronized Group play.
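That time-tagged scheme can be sketched like this (plain Python; the `Player` class and chunk names are hypothetical illustrations, not Sonos code): each player holds chunks in a queue ordered by play-at time and renders a chunk only when its own, synchronized, clock reaches that time:

```python
# Hypothetical sketch of timestamp-driven sync: the coordinator tags each audio
# chunk with a future play-at time; each player, whose clock is kept in sync,
# renders a chunk only when its own clock reaches that time.
import heapq

class Player:
    def __init__(self):
        self.queue = []                      # (play_at_s, chunk) min-heap

    def receive(self, play_at, chunk):       # chunks may arrive early, in any order
        heapq.heappush(self.queue, (play_at, chunk))

    def tick(self, now):
        """Render every buffered chunk whose play-at time has arrived."""
        out = []
        while self.queue and self.queue[0][0] <= now:
            out.append(heapq.heappop(self.queue)[1])
        return out

a, b = Player(), Player()                    # two grouped rooms
for play_at, chunk in [(1.0, "c1"), (1.5, "c2")]:
    a.receive(play_at, chunk)
    b.receive(play_at, chunk)
print(a.tick(1.0), b.tick(1.0))              # ['c1'] ['c1'] -- rendered in lockstep
```

Because rendering is keyed to the shared clock rather than to arrival time, network jitter along the way does not disturb synchronization, only buffer depth.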

With respect to Groups, players might (depending on which nodes are wired and which wireless) face the polluted wireless environment multiple times: once to fetch the source, once for each member of a Group, plus once for a stereo-paired unit in the Group. In a SonosNet wireless mesh, the data must deal with the wireless environment as communication is passed through multiple nodes. While a mesh can be self-healing, there is actually more data “in the air” than in a scheme built around a central server and access point. In a polluted environment, total data is also elevated as corrupt packets are re-transmitted. At some level of pollution and traffic, any scheme will saturate and break down.

If an email or web page struggles with delivery, as long as delivery is ultimately successful, there is no major damage, but a late video or audio frame is virtually useless for its primary purpose.

Managing SonosNet bandwidth demand in groups isn’t quite that clear cut. A standard rule of thumb is indeed to start from a wired ‘room’ when building the group.

But imagine a scenario where the ‘Group Coordinator’ node is several wireless hops out from the wired network, yet its group peers are very close to it. Since the intra-group streams should go peer-to-peer by direct routing -- with good signal strength and low interference -- then everything could be absolutely fine. 

And God created the world six thousand years ago with all the marks of antiquity and decay as seen in the geographical and fossil record.

There isn't any arguing with such wisdom.

Poor choice of words on my part, indeed chatter is a better term. Thank you.

Just one more example that shows why the Hi Res myth has such legs.


Consider the case where you must attend a very important meeting and you must be exactly on time. You will create a “buffer” by planning to arrive at the venue a little early; then, using your wristwatch (synchronized to standard time), you’ll walk in exactly on time. Actually, there could be multiple “buffers” as you arrive at train and bus stations along the way.

This is the sort of issue that SONOS must solve -- wired or wireless, mesh or not. Audio must be delivered to the speaker on time -- regardless of little traffic jams along the way.

The Firestick plays HD streams flawlessly, wirelessly received from the base station, with no pixelation or degraded video. But the Play:1, when grouped with the Connect, cannot wirelessly play just the audio part of the same signal over SonosNet without stuttering.

Apples and oranges.

The Firestick is fetching a file, in chunks, at its leisure from an Amazon server. The buffer size can be made large enough to ride out network perturbations. Latency is largely irrelevant. 

Sonos has to encode, deliver and render a stream within ~70ms of receiving it. The buffer size is constrained by the latency requirement.
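As a rough illustration of that constraint (plain Python; the ~70 ms figure is taken from the post above, the formats are examples), here is how much uncompressed stereo PCM a 70 ms latency budget allows per format:

```python
# How much uncompressed stereo PCM fits in a ~70 ms latency budget.
def pcm_bytes(ms, rate_hz, bits, channels=2):
    return int(rate_hz * (bits // 8) * channels * ms / 1000)

for rate, bits in [(44_100, 16), (48_000, 24), (192_000, 24)]:
    kib = pcm_bytes(70, rate, bits) / 1024
    print(f"{rate // 1000} kHz / {bits}-bit: {kib:.1f} KiB per 70 ms")
```

The budget itself is tiny either way; the point is that a latency-bounded buffer cannot simply be enlarged to ride out network trouble the way a Firestick's can.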


Why bother ratty? It's obvious that he already knows everything. 🤔
"Science" gave it one last shot but, yes, "faith" could simply be too strong.
Tail tweaking? Moi?