Zp 24/96


  • Anonymous
Ability to play 24-bit/96kHz files (like the competition: the Slim Devices Transporter)


1012 replies

I'll ask again, show me the peer reviewed scientific study that discounts properly conducted double-blind audio tests as "bad science." There is none. They are accepted as valid science. And nobody is saying the null argument is proven. That is a logical fallacy, and no self-respecting scientist will ever claim a negative is proven. What they are claiming is that the differences that may exist cannot be detected by human subjects in properly conducted double-blind tests. So quit putting words in people's mouths. And if your own brand of mumbo jumbo is valid science, then link to the studies which show it. :rolleyes:

Basically, in layman's terms: a positive result establishes, to a certain level of confidence, that a perceptible acoustic difference exists; a null result, or a failure to reliably detect a difference, does not indicate the nonexistence of that difference.


I'll ask again, show me the peer reviewed scientific study that discounts properly conducted double-blind audio tests as "bad science." There is none. They are accepted as valid science. And nobody is saying the null argument is proven. That is a logical fallacy, and no self-respecting scientist will ever claim a negative is proven.

Guys, I honestly think you are both agreeing with each other without knowing it. I don't want to see this turn into an argument on that basis.

To summarize, I think we (as in myself, jgatie, gtyper, and Buzz) all agree that:

* Double Blind tests (with the assumption they are properly conducted) can specifically prove the case in which there are clear, obvious, and statistically significant differences, but cannot disprove any differences absolutely.

* If there were gross differences between standard res and hires formats they would show up and be conclusively proved in double blind tests. The absence of such results after many years and several tests is enough to conclude that gross (meaning "obvious" or "night and day" to use the words of the hires zealots) differences do not exist.

* The existence of subtle audible differences cannot be completely disproven using DBT (or any other proposed scientific methods), but the results are stacked more against them than they are for them. I think all of us probably agree that, in many if not most (or all) cases, the differences between standard res and hires may not be audible.

* Based on hires differences being subtle at best, they will not be audible to most people. I saw a figure somewhere (I don't have a reference) which said that less than 0.05% of people in the UK spend more than £2000 on the audio setup for any one room. I personally feel that that is way below the level of spend you need to resolve hires if it is audible.

* The reasons for wanting hires extend beyond pure audibility (for example, the mere presence of superior material, the complexity/inconvenience of down-converting and multiple libraries, the marketing benefits, etc.)

Of course I may be wrong, but I think we are actually all pretty much in agreement with these principles. In which case, the discussion can probably evolve.

I think the big problem of this discussion is that it's all black and white when the reality is grey.

Cheers,

Keith
Majik, I appreciate you trying to moderate, but on these points I believe gtyper does not agree:

* Double Blind tests (with the assumption they are properly conducted) can specifically prove the case in which there are clear, obvious, and statistically significant differences, but cannot disprove any differences absolutely.

* If there were gross differences between standard res and hires formats they would show up and be conclusively proved in double blind tests.


He said as much earlier in the thread when he discounted DBTs that were conducted on compression codecs, which showed people were indeed able to statistically hear the difference (emphasis mine):

For the record, I come from a science research background - and my opinion is that a blind test in this regard (regardless of the results) is not necessarily good science.

. . .

I did a little digging and found a report wherein the difference between 44.1kHz and 96kHz was statistically significant. People could, based on the results, find a difference. But, even though I might agree with the assertion, I disagree with the testing methodology and thus believe it not capable of proving anything. I find the test incorrect, and I'm sure that if you looked at the results, the actual conclusion would be inconclusive. I would argue that the correlation between the data would not be grouped in a method that would lead one to the original assertion.

It's the nature of using the wrong test methodology. Unfortunately, we don't have a different methodology for comparison other than actually looking at the raw physics and making assumptions on whether or not people can hear the difference.

. . .



And needless to say, on his points above, I most assuredly disagree. Which is why I ask for peer-reviewed articles that back up his assertion that DBT's of audio are not "good science." So far he's produced none.
Majik, I appreciate you trying to moderate, but on these points I believe gtyper does not agree

Actually, I do. Note Majik said:

"[i]Double Blind tests (with the assumption they are properly conducted) can specifically prove the case in which there are clear, obvious, and statistically significant differences, but cannot disprove any differences absolutely.[/i]"

I agree whole-heartedly with this statement. My point, and perhaps I haven't been fully clear, is that I do not agree with ABX testing methodology for subtle differences. There have been numerous instances where Double Blind AB testing has yielded consistent results on subtle differences, but the same tests run in an ABX environment fail to produce statistically significant data. This implies a flaw inherent in the test itself and/or a weakness in our sense memory.
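
For context, "statistically significant" in an ABX run is normally judged with an exact binomial test: under the null hypothesis the listener is guessing, so each trial is a 50/50 coin flip. Below is a minimal sketch of that calculation; the 16-trial run and the 0.05 threshold are illustrative choices, not figures taken from this thread.

[code]
from math import comb

def abx_p_value(correct: int, trials: int) -> float:
    """One-sided exact binomial p-value: the chance of getting at least
    `correct` answers right out of `trials` purely by guessing (p = 0.5)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

print(abx_p_value(12, 16))  # ~0.038, below the conventional 0.05 threshold
print(abx_p_value(10, 16))  # ~0.227, not enough to reject "just guessing"
[/code]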

Any psychologist will tell you that human memory, especially short term, is a very weak point. This is known and accepted, whether or not you wish to assign any credence to it is another matter.

What I am arguing is that the ABX testing methodology is unsound and poor science for what you're trying to prove in this instance.

I question why we have come to accept ABX methodology as the gold standard for audio when comparisons of the other senses rarely use it. Why aren't we reviewing the testing methodology to ensure it is actually giving us proper results?

Let's use the Double Blind ABX test for something we know is different. Take shades of the same primary color group. Test it similar to the ABX audio test with the null hypothesis being "One cannot perceive the difference between shades." I am almost certain, in an ABX situation, the null hypothesis would not be disproven. Yet, we know when the colors are placed touching each other (direct visual comparison), the perceptible difference is there thus proving the null untrue. This would point to a flaw in the methodology.

Humans' ability to rely on memory for sensory comparison is simply weak, and sound is probably the weakest. Trying to test subtle differences without direct comparison is flawed in the way it is being handled.
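
To make that argument concrete, here is a toy signal-detection simulation (every number in it is an arbitrary assumption chosen to illustrate the claim, not measured data): a small real difference that an observer picks reliably in direct comparison drops toward chance once an extra layer of "memory noise" sits between the two stimuli.

[code]
import random

TRUE_DIFF = 1.0      # real difference between A and B, arbitrary units
SENSORY_NOISE = 0.5  # noise in immediate perception (assumed)
MEMORY_NOISE = 2.0   # extra noise when a stimulus is recalled, not heard (assumed)

def judge(extra_noise: float, trials: int = 100_000) -> float:
    correct = 0
    for _ in range(trials):
        a = random.gauss(0.0, SENSORY_NOISE)
        b = random.gauss(TRUE_DIFF, SENSORY_NOISE) + random.gauss(0.0, extra_noise)
        correct += b > a  # the observer picks whichever seems larger
    return correct / trials

print("direct comparison:", judge(0.0))           # roughly 0.92 correct
print("via memory trace :", judge(MEMORY_NOISE))  # roughly 0.68 correct
[/code]

Whether real auditory memory behaves like that extra noise term is, of course, exactly what is in dispute here.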

Which is why I ask for peer-reviewed articles that back up his assertion that DBT's of audio are not "good science."


I promise you that I spent a good deal of time coming up with a reply. Unfortunately, the system logged me out and it was lost to the ether. I did not cite peer-reviewed articles, but I did mention some studies I knew of that showed certain aspects of what I'm talking about, and admitted I was far too lazy to look them up. Now I'm doubly lazy, and half cross-eyed.

The reasons for wanting hires extend beyond pure audibility (for example, the mere presence of superior material, the complexity/inconvenience of down-converting and multiple libraries, the marketing benefits, etc.)

At the end of the day, that's all I care about. I don't care about the difference. I'm not even convinced there is one, as I've stated numerous times.

I simply take offense to the ABX test used, in my opinion, improperly.
I promise you that I spent a good deal of time coming up with a reply. Unfortunately, the system logged me out and it was lost to the ether. I did not cite peer-reviewed articles, but I did mention some studies I knew of that showed certain aspects of what I'm talking about, and admitted I was far too lazy to look them up. Now I'm doubly lazy, and half cross-eyed.


Sorry, but without peer-reviewed studies, this is all audiophile hogwash, meant to cloud the results of accepted science. If you haven't got peer-reviewed science, then it's not science. The study I have cited has been peer-reviewed and no one, despite all the money Naim and others stand to lose, has stepped up to debunk it in a scientific journal.

And as far as you agreeing with the tests for gross differences, you also derided a study that showed gross differences between compression techniques using double-blind methods. Forgive me if I find your claims to accept them for non-subtle differences a backslide from your earlier statements.

Oh, and I would appreciate it if you would apologize for saying I or any other proponents of double-blind audio tests implied in any way that negative results proved no differences, which is antithetical to the scientific method. I know I never did, and I don't appreciate you putting words in my mouth in an attempt to state I and others like me are just as bad as the audiophile snakeoil fans.
Sorry, but without peer-reviewed studies, this is all audiophile hogwash, meant to cloud the results of accepted science.

See - this is where you get all caught up in agenda. I have no "audiophile hogwash" since I don't really care about audiophile stuff.

I don't sit around reading Scientific Journals - and I would bet you don't either. I also don't have any agenda and simply don't care enough to research it. Although I am willing to bet that you can find many arguments and studies against blind testing on human senses. Heck, we spent a whole week one semester discussing the bias human memory introduces in psychology and more in law.

If you haven't got peer-reviewed science, then it's not science.


That's just an asinine statement. And my background is as a scientist.

The study I have cited has been peer-reviewed and no one, despite all the money Naim and others stand to lose, has stepped up to debunk it in a scientific journal.


Good for them and good for Naim. I sincerely doubt Naim, with any amount of money, could concoct a study that could disprove your study. If for no other reason than the human memory issue I've stated - even if there is a slight difference it would be near impossible to scientifically test it using a human being.

With that said, Naim (whoever they are) should hire me. At the end of the day, trying to prove the subtle sound difference is a losing war. It's far more accurate and far easier to discredit the test. It would not take me much money to design a study that invalidated ABX testing for auditory memory. I doubt I could design a test to prove their stuff actually has a perceptible difference ... but I could poke serious holes in the currently accepted testing methodology.

The problem is that people would assume that I had an audiophile agenda and was coming at it from this perspective. This alone would cause any such study to be ignored by people that accept the current testing methodology.

But again, you really don't hear what I'm saying -- I don't know and don't care if there is a difference between hi-res and non-hi-res. I also don't know or care if $6000000/ft audio cable is the same as a coat hanger. At the end of the day, it makes not one lick of difference to me. I will wake up tomorrow the same as I did today with either truth being absolute.

I am simply saying the methodology currently being used to derive the conclusion is flawed. And to a point, any type of AB (including ABX) testing involving memory is flawed based on our very makeup. This is an immutable truth. But, we are forced to work with what we're given, right? This is why it is accepted by science.

And as far as you agreeing with the tests for gross differences, you also derided a study that showed gross differences between compression techniques using double-blind methods. Forgive me if I find your claims to accept them for non-subtle differences a backslide from your earlier statements.


I was deriding the technique. And I still would. I do not agree that ABX tests are a good measure for human senses. The human mind isn't good at this type of testing.

Oh, and I would appreciate it if you would apologize for saying I or any other proponents of double-blind audio tests implied in any way that negative results proved no differences, which is antithetical to the scientific method. I know I never did, and I don't appreciate you putting words in my mouth in an attempt to state I and others like me are just as bad as the audiophile snakeoil fans.


I'm sorry I misquoted you. It surely seemed, when you call things "snakeoil" and deride them so directly, that you have a stated opinion on the accuracy of the results of the tests.

But, again, I am sorry.

Since we are into "appreciating" what the other person can do - I would "appreciate" you dropping the condescension and arrogance.


Let's use the Double Blind ABX test for something we know is different. Take shades of the same primary color group. Test it similar to the ABX audio test with the null hypothesis being "One cannot perceive the difference between shades." I am almost certain, in an ABX situation, the null hypothesis would not be disproven. Yet, we know when the colors are placed touching each other (direct visual comparison), the perceptible difference is there thus proving the null untrue. This would point to a flaw in the methodology.



In shade differentiation, we probably have the case that we can't remember as many shade variations as we can discriminate in direct comparison.

For visual comparisons we must standardize the ambient light. Likewise, in audio testing we should standardize the overall level and the background noise.
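
As a rough sketch of what "standardize the overall level" can mean in practice, the two clips being compared can be matched by RMS level before any listening takes place. The file names below are placeholders, and a simple broadband gain is assumed to be an acceptable correction.

[code]
import numpy as np
import soundfile as sf  # pip install soundfile

def rms_db(x: np.ndarray) -> float:
    """RMS level of a signal in dB relative to full scale."""
    return 20 * np.log10(np.sqrt(np.mean(np.square(x))))

a, rate_a = sf.read("clip_a.wav")
b, rate_b = sf.read("clip_b.wav")

diff_db = rms_db(a) - rms_db(b)
print(f"Level difference: {diff_db:.2f} dB")

# Apply a broadband gain so both clips sit at the same RMS level.
b_matched = b * 10 ** (diff_db / 20)
sf.write("clip_b_matched.wav", b_matched, rate_b)
[/code]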

Also, in audio testing we cannot run both units simultaneously in order to create a comparison analogous to the side by side color comparison.

For both types of test we should quarantine the viewers and listeners in a standard environment for a while, in an effort to prevent recent history, such as a bright flash or loud noise, from polluting the human's perception.

I avoid making any critical aural comparisons if I have been driving or using public transportation in the past few hours.
In shade differentiation, we probably have the case that we can't remember as many shade variations as we can discriminate in direct comparison.

Also, in audio testing we cannot run both units simultaneously in order to create a comparison analogous to the side by side color comparison.


That is precisely the point I am making. Once human memory is involved, the limiting factor in our testing isn't our sensory capabilities; it's the ability of our memory to store and replay accurately, a feat most humans are nearly incapable of.

That's why I pointed out direct visual comparison, the only fully accurate (and accepted) test of the human senses. It's our inability to run "side-by-side" sense tests for anything other than vision that limits the reliability of the tests.

Unfortunately, that's as good as it gets for us. But I think ABX tests compound the issue as opposed to actually making a more valid test.

* Caveat - I am not saying an AB test on hi-res vs standard would yield a different result. I'm saying it would probably be a more reliable test for the nuances.

For both types of test we should quarantine the viewers and listeners in a standard environment for a while, in an effort to prevent recent history, such as a bright flash or loud noise, from polluting the human's perception.

I avoid making any critical aural comparisons if I have been driving or using public transportation in the past few hours.


I agree 100%.

At the end of the day, we can only work with what we have --- and we have to test it in the best environment possible to reduce errors.
See - this is where you get all caught up in agenda. I have no "audiophile hogwash" since I don't really care about audiophile stuff.

Yet you argue about it constantly. :rolleyes:


I don't sit around reading Scientific Journals - and I would bet you don't either.


Actually, I do. It's part of my job.

I also don't have any agenda and simply don't care enough to research it. Although I am willing to bet that you can find many arguments and studies against blind testing on human senses. Heck, we spent a whole week one semester discussing the bias human memory introduces in psychology and more in law.


You make definitive statements on the legitimacy of scientifically accepted, peer-reviewed studies and you don't feel the need to back up those statements with legitimate, peer-reviewed science? And I'm the one who is arrogant?


That's just an asinine statement. And my background is as a scientist.


It's asinine to view random statements with no scientifically acceptable studies to back them up as illegitimate use of the scientific method? Somebody call Galileo.


Good for them and good for Naim. I sincerely doubt Naim, with any amount of money, could concoct a study that could disprove your study. If for no other reason than the human memory issue I've stated - even if there is a slight difference it would be near impossible to scientifically test it using a human being.

With that said, Naim (whoever they are) should hire me. At the end of the day, trying to prove the subtle sound difference is a losing war. It's far more accurate and far easier to discredit the test. It would not take me much money to design a study that invalidated ABX testing for auditory memory. I doubt I could design a test to prove their stuff actually has a perceptible difference ... but I could poke serious holes in the currently accepted testing methodology.


No one has yet. I wonder why?


The problem is that people would assume that I had an audiophile agenda and was coming at it from this perspective. This alone would cause any such study to be ignored by people that accept the current testing methodology.

But again, you really don't hear what I'm saying -- I don't know and don't care if there is a difference between hi-res and non-hi-res. I also don't know or care if $6000000/ft audio cable is the same as a coat hanger. At the end of the day, it makes not one lick of difference to me. I will wake up tomorrow the same as I did today with either truth being absolute.

I am simply saying the methodology currently being used to derive the conclusion is flawed. And to a point, any type of AB (including ABX) testing involving memory is flawed based on our very makeup. This is an immutable truth. But, we are forced to work with what we're given, right? This is why it is accepted by science.



Accepted by science, yet not accepted by you. And again, I'm the arrogant one?



I was deriding the technique. And I still would. I do not agree that ABX tests are a good measure for human senses. The human mind isn't good at this type of testing.



So you accept the results from a technique you deride as not fit for purpose? Or do you accept the technique, but deride it anyway? Or do you accept neither the technique nor the results, and were you just placating Majik by "whole-heartedly" agreeing with his statement above? I'm confused.



I'm sorry I misquoted you. It surely seemed, when you call things "snakeoil" and deride them so directly, that you have a stated opinion on the accuracy of the results of the tests.

But, again, I am sorry.

Since we are into "appreciating" what the other person can do - I would "appreciate" you dropping the condescension and arrogance.


You stated I claimed the studies listed showed there were definitively no differences, which is absurd, given that any high school science or logic student knows the "prove a negative" logical fallacy. This is an insult to any scientist. Nothing I have said about audiophile snakeoil or the studies listed ever came close to stating anything except that the results showed the differences were imperceptible to human hearing in a DBT. Your twisting those words to attempt to label me as being like "people that stamp their feet about how much of an audible difference there is between them" was a gross misrepresentation of what I've said in this thread, and was intended only to smear.
Gents, can we please stick with playing the ball, not the man? Thanks.
I wish there was a smilie here eating popcorn.

Ah, there is:
[img]http://www.freesmileys.org/smileys/smiley-basic/popcorn.gif[/img]
I've owned my Sonos system for several years. I'm not active on the forums, but every few months I catch up on this thread to see if Sonos are finally going to support HiRes. I've always been happy with my Sonos system from day one - HiRes support is the only upgrade I have been waiting for.
I have recommended Sonos to a lot of friends. They are impressed when they see my system in operation, they are convinced by the obvious benefits of wireless music streaming, but ultimately they don't buy Sonos.
I know someone who has tried a Project Streambox but found it a bit buggy. He is now exploring the dedicated silent PC route.
Another went for the Transporter.
But nowadays, they mostly are buying the Touch. When I ask them why, they say it's because of support for HiRes. They have seen the HiRes files available for download, they are interested in sound quality, and they want to try them. Also, in several cases they have digitised their vinyl at 24/96 and want to access those files. They really couldn't give a flying fig about ABX testing or double-blind testing. When they bought their amps and their speakers, the salesman didn't insist that they should perform DBT before allowing them to spend their money on what they wanted.
Of course they grumble about the software updates, having to roll back to the previous version, and re-scan the database, which apparently takes hours.
I know I could never live with that hassle. But they don't ever reconsider. For them the Sonos is simply missing a feature that is on their checklist, so it is discounted.

I don't know what plans Sonos has for HiRes support. But I think it is lucky that no serious competitor has emerged. If the Touch gets sorted out, or a new product comes along with Sonos features plus HiRes support, it will be hard for Sonos to compete.

OK, so my sample of friends, acquaintances, people I know through forums is small and unscientific. But it is my experience that lack of HiRes support is costing Sonos sales.
It's a funny thing when people discuss things on such forums with great intensity. I think it should be noted that adding 24/96 or 24/192 will increase compatibility with what is now being offered in the realm of playback. You can't say it isn't better and you can't say it is. The reason is the same as for CD: some CDs are recorded badly, some very well, and this affects the sound quality, so it doesn't make sense to argue about it. This is just one point of many I can make, that each individual track and album fluctuates in quality. If science offers this understanding through blind tests / double-blind tests that are currently peer reviewed, then that is the best we know at present. It doesn't mean it is an absolute truth, but it also cannot be dismissed frivolously. For a company, if it is in line with selling more units and meeting demand, then it should be a consideration. Why doesn't Sonos poll it? Obviously Sonos is a multiroom streaming system with incredible software and control; there is no other system like it. Is there no way the software could be sold and customised for computers and the like, so they could be upgraded into devices that can handle 24/96 and 24/192 respectively, without the implication that its own units must do so?
This would mean you satisfy both markets without confusion or unit misunderstanding, if it's software based. Sell it on CD for x units and I'm sure you will get sales, while still keeping the Sonos units as they are now. Does that make sense? Or is someone going to shout at me? Not having a solution will be a deciding factor for more and more people at the consumer level; I hear this a lot. Thanks
The use of computers as substitutes for Sonos players has been discussed at length before. The general consensus is that a PC is incapable of synchronized streaming due to limitations in its onboard timing. So Sonos is not just software; it also needs some pretty specialized hardware in order to do what it does.
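
As a back-of-the-envelope illustration of the timing problem: two players running on free, undisciplined clocks drift apart at a rate set by their crystal tolerances. The ±50 ppm figure below is an assumed, typical consumer-grade value, not a measured Sonos or PC specification.

[code]
# Rough sketch of why free-running clocks cannot stay in sync on their own.
PPM = 50                        # assumed worst-case clock error per device
relative_drift = 2 * PPM / 1e6  # two devices drifting in opposite directions

for seconds in (60, 600, 3600):
    drift_ms = seconds * relative_drift * 1000
    print(f"After {seconds:>4d} s the players can be ~{drift_ms:.0f} ms apart")

# Output: ~6 ms after a minute, ~360 ms after an hour. Offsets of a few tens
# of milliseconds between rooms are already audible as echo, which is why the
# players must continually re-discipline their clocks against each other.
[/code]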

See this thread for more:

PC or Mac Based Software Zone Player


PS - You also mentioned a poll. There is a poll here:

Hires support (take 2)

Basically, what happened is that audiophiles recruited non-Sonos-owning hi-rez fans to register and vote, thus skewing the results. Notice all the usernames with a single post (or worse yet, no posts at all) in the poll. I'm quite sure Sonos has done more accurate market research on the number of 24/96 users, and so far the results seem to be that they are not losing enough customers to be concerned. Of course that could change if the market changes.
If it is a problem in hardware, make a new player, and we will buy it.
I have had Sonos for 3 years, and I was very close to removing it from my system. Now I have a Mac mini for hi-res, so the Sonos is staying. But I would love to have hi-res support in Sonos.
If it is a problem in hardware, make a new player, and we will buy it.
I have had Sonos for 3 years, and I was very close to removing it from my system. Now I have a Mac mini for hi-res, so the Sonos is staying. But I would love to have hi-res support in Sonos.


Read the thread. It's not as simple as making a new player. Since Sonos is first and foremost a multi-room audio system, you have to consider what it will take to sync the new player with older players that cannot do hi-rez. Otherwise, you might as well just buy a standalone player, such as your Mac or any of a dozen others.
One idea would be to have a "Sonos Server" running, just like Logitech have with the Squeezebox; however, the Sonos system would have to be switched (in advanced settings) to the "Sonos Server" if the user needed its features.

With a system of only current hardware the server would not be required, nor would it be in a new system consisting of just Sonos hi-res devices.

The server could then transcode music so it plays on all equipment or a straight passthrough for a Sonos hi-res device. During development the "Sonos Server" would also be specified to handle very large music collections greater than 65k 😉
One idea would be to have a "Sonos Server" running

Given that all Sonos devices act as a Media Server (in the UPnP sense) anyway, there's actually no technical reason why this functionality couldn't be extended to support an external UPnP media server. This could either be server software provided by Sonos (and which is tightly coupled into the ecosystem) or nominated third-party apps, like Twonky.

Personally I'm in favour of such an approach to deal with extended local libraries and (perhaps) extended tag support and indexing options.

Transcoding is also an option (although a potentially complex one) within such a system, and this could provide a way to dynamically transcode material to a format the current systems could play. This, alone, could satisfy many people. Note that Henkelis's Python server emulates a WMP server and, I believe, has options to support transcoding, so it's definitely possible.

However, streaming hires to a different player from the server is a completely different architecture to how Sonos works. The problem is sync. Sonos deals with sync by taking a single stream and coordinating distribution of the stream and sync between players. This design is partly responsible for Sonos's superior sync capability. Such a system can only deal with one resolution of stream at any time.

Other systems (like Squeezebox) which have used a server with separate streams to each player have struggled to get close to Sonos in sync capabilities. It literally took Squeezebox about 4 years to get anything close to decent sync, and it is still not as slick or reliable as Sonos. It's a totally different architecture, and not a very good one for multi-room support. It would require a ground-up change from Sonos to do this. I can't see that happening.

There are also technical challenges with wireless networks and the volumes of data, which will probably mandate that hires support can only be achieved on wired players. Not a great selling point for a "wireless" music system.

The important thing to consider is that any of the developments being proposed will consume considerable development and testing time and money. There are many possibilities. The big question is whether the market demand for hires is enough to convince Sonos to invest heavily in developments to achieve it, especially if in doing so they detract from or dilute the main product significantly.

Cheers,

Keith
Keith, how many Sonos devices do you have? :p :)
Other systems (like Squeezebox) which have used a server with separate streams to each player have struggled to get close to Sonos in sync capabilities. It literally took Squeezebox about 4 years to get anything close to decent sync, and it is still not as slick or reliable as Sonos. It's a totally different architecture, and not a very good one for multi-room support. It would require a ground-up change from Sonos to do this. I can't see that happening.

Cheers,

Keith


Keith,

I can only speak for myself, but the 7 Squeeze devices I have work just fine when synced together, with no noticeable delay or timing issues.

Syncing is easy, just select the players you want to sync on the menu and they sync. Select unsync and they unsync. Multi room could hardly be easier. Not sure how you would make this slicker?

I thought we agreed some pages back that Squeeze now uses a single stream for synced players? So I am not sure about the architecture issues you refer to.

I can think of many reasons to prefer Sonos over Squeeze, and that's just fine, but let's not keep harping on about a problem that doesn't exist any more.

I thought we agreed some pages back that Squeeze now uses a single stream for synced players? So I am not sure about the architecture issues you refer to.


Ah, I missed that, sorry. Clearly they had to change the architecture to make syncing work.

Certainly for many years there were bold claims by their fans (with the encouragement of Logitech employees) that syncing worked perfectly when clearly it didn't with any degree of reliability.

[Incidentally, this is a devious form of marketing as you can use customers to get messages into the market that aren't actually true without getting into trouble with the advertising regulators.]

My point is still valid: multiple streams from a server are difficult to sync. On the other hand if you have a single stream, that stream can only support a single resolution.

Cheers,

Keith
We also agreed a few pages back that the delay for Squeezebox sync is ~30 ms, whereas Sonos is ~3 ms. Also, sync on Squeeze is still lacking in some areas, including when using their online server, gapless playback, ALAC transcoded by the server, and others.
I don't want to pretend that Squeeze is perfect, it isn't. My point was that of all the reasons you might want to buy Sonos instead of Squeeze, syncing and multi-room support isn't one of them. I see this argument put forward a lot on this forum, and I can only assume it dates back to the days before Squeeze addressed the issue.

At the risk of going back on-topic, being in the fortunate situation of having a streaming system that will play hi-res, I would say that a 24/96 download and the same file downsampled to 16/44.1 sound the same to my ears. Hi-res doesn't appear to bring any benefit in and of itself. What does appear to be true, however, is that some 24/96 downloads have had more care taken over their production than the 16/44.1 version. So while technically 16/44.1 vs 24/96 seems a wash, in certain (limited) cases there is better quality music available on 24/96. I could imagine a situation developing where CD / MP3 continues with the current compressed rubbish, but 24/96 versions are available (at a premium, no doubt) with compression levels aimed less at the iPod crowd and more at enthusiasts.

At the risk of going back on-topic, being in the fortunate situation of having a streaming system that will play hi-res, I would say that a 24/96 download and the same file downsampled to 16/44.1 sound the same to my ears. Hi-res doesn't appear to bring any benefit in and of itself. What does appear to be true, however, is that some 24/96 downloads have had more care taken over their production than the 16/44.1 version. So while technically 16/44.1 vs 24/96 seems a wash, in certain (limited) cases there is better quality music available on 24/96. I could imagine a situation developing where CD / MP3 continues with the current compressed rubbish, but 24/96 versions are available (at a premium, no doubt) with compression levels aimed less at the iPod crowd and more at enthusiasts.


+1 to all that.

Cheers,

Keith
I don't want to pretend that Squeeze is perfect, it isn't. My point was that of all the reasons you might want to buy Sonos instead of Squeeze, syncing and multi-room support isn't one of them. I see this argument put forward a lot on this forum, and I can only assume it dates back to the days before Squeeze addressed the issue.

At the risk of going back on-topic, being in the fortunate situation of having a streaming system that will play hi-res, I would say that a 24/96 download and the same file downsampled to 16/44.1 sound the same to my ears. Hi-res doesn't appear to bring any benefit in and of itself. What does appear to be true, however, is that some 24/96 downloads have had more care taken over their production than the 16/44.1 version. So while technically 16/44.1 vs 24/96 seems a wash, in certain (limited) cases there is better quality music available on 24/96. I could imagine a situation developing where CD / MP3 continues with the current compressed rubbish, but 24/96 versions are available (at a premium, no doubt) with compression levels aimed less at the iPod crowd and more at enthusiasts.


On the bolded we agree. The studies also agree that most (if not all) observable differences are due to higher production quality, not the final sampling rate or resolution, which is why some of us suggest downsampling to a Sonos-compatible format in order to listen to these higher-quality recordings.
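
For anyone taking that suggestion, here is a minimal downsampling sketch. The file names are placeholders, and dither, which is normally recommended when reducing to 16-bit, is omitted for brevity.

[code]
# Convert a 24/96 FLAC to 16/44.1 so current players can use it.
import soundfile as sf                  # pip install soundfile
from scipy.signal import resample_poly  # pip install scipy

data, rate = sf.read("track_24_96.flac")  # float samples in [-1, 1]
assert rate == 96000

# 96000 -> 44100 is an exact 147/320 ratio, so polyphase resampling applies cleanly.
resampled = resample_poly(data, up=147, down=320, axis=0)

sf.write("track_16_44.flac", resampled, 44100, subtype="PCM_16")
[/code]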