Copied from another thread, courtesy of jgatie - full question was:
"Do you believe digital audio, outside of mastering/production techniques, can be improved by playback resolutions greater than 16/44.1?"
IMO the bigger question is what happens when 16/44 components and chipsets are no longer mass-produced and cheap, and all that is available for integration/assembly are hi-res-capable innards, because that is all that is being made. Can Sonos integrate these into its boxes with no change in performance, backward compatibility and price - with emphasis on the first two? That, to me, is the more important question. My guess is yes, but that is just a guess.
And will SonosNet cope? 32 streams (for the 32 zones) of uncompressed CD-quality audio require about 45 Mbps. That should be OK on 802.11n. 24/96 music would require about 150 Mbps for 32 streams; ALAC compression roughly halves this to 75 Mbps.
One thing I'm not sure about is whether the mesh network can transmit simultaneously between distinct node pairs. Perhaps the weak link is internet speed for 32 simultaneous streams of music: 75 Mbps is at the upper end of speeds now and for the near future. Mind you, anyone with 32 Sonos zones should be able to afford high-speed internet 🙂
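As a quick sanity check on those numbers (back-of-the-envelope only, ignoring network overhead, and assuming ALAC compresses to roughly half size), a short Python sketch:

def pcm_mbps(sample_rate_hz: int, bits: int, channels: int = 2) -> float:
    # Raw PCM bitrate in megabits per second.
    return sample_rate_hz * bits * channels / 1e6

zones = 32
cd = pcm_mbps(44_100, 16)       # ~1.41 Mbps per stream
hires = pcm_mbps(96_000, 24)    # ~4.61 Mbps per stream
print(f"32 x 16/44.1 uncompressed: {zones * cd:.0f} Mbps")         # ~45 Mbps
print(f"32 x 24/96 uncompressed:   {zones * hires:.0f} Mbps")      # ~147 Mbps
print(f"32 x 24/96 ALAC (~50%):    {zones * hires / 2:.0f} Mbps")  # ~74 Mbps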
At an application level the mesh can simultaneously handle multiple streams (obviously), but depending on the physical layout, the capacity of the shared channel would undoubtedly be an issue for high-bandwidth traffic. FWIW the best that I've managed to get out of a single SonosNet hop is ~13 Mbps.
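Following on from that ~13 Mbps per-hop figure, a rough sketch of how many simultaneous streams would fit through a single hop at each rate (again ignoring protocol overhead and retransmissions; the 50% ALAC ratio is an assumption):

hop_mbps = 13  # measured single-hop SonosNet throughput quoted above
rates = {
    "16/44.1 uncompressed": 44_100 * 16 * 2 / 1e6,      # ~1.4 Mbps
    "24/96 uncompressed":   96_000 * 24 * 2 / 1e6,      # ~4.6 Mbps
    "24/96 ALAC (~50%)":    96_000 * 24 * 2 / 2 / 1e6,  # ~2.3 Mbps
}
for name, mbps in rates.items():
    print(f"{name}: ~{int(hop_mbps / mbps)} streams per hop")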
As for the chipsets, I think you'd be hard pressed to find anything these days which only supports 16/44. We've known since inception that a ZP's internals use 24 bits, so I'd be very surprised if the DAC wasn't already 24-bit capable.
That has been the case for 15-20 years. It's not been a problem so far.
Cheers,
Keith
Interesting chat with Sonos:
http://www.whathifi.com/news/sonos-says-high-res-audio-support-not-roadmap
In that case, everything about hi res is all noise and no signal. As Sonos has clearly understood.
An attempt to link today's What HiFi article -- Sonos says high-res audio support "not on the roadmap" -- has been swallowed by the automoderator. Maybe it will emerge eventually.
I posted it earlier but it was also swallowed by the moderator; it's now showing up (3 posts back).
Thanks Nick. Should have spotted that one.
The 'learning filter' clearly isn't.
EDIT: My post was just rescued.
From the interview:
"I think the future is going to be a variable bitrate."
What does he mean by that?
"I think the future is going to be a variable bitrate."
What does he mean by that?
You are right to point out that the study finds differences, not improvements. That is the first step. Presumably the next step is to establish why these differences are perceived, and then whether there is a preference for hi-res. It has every bearing on this thread without providing a definitive answer yet.
I think it is unwise to level an accusation of lying based on a PR statement. Journalists and/or University PR Departments routinely misunderstand or misquote scientists. Dr Reiss makes no such statement in his published paper.
"I think the future is going to be a variable bitrate."
What does he mean by that?
I don't really know. It was appended to a remark (dismissal?) about MQA, and I can't quite see the relevance.
He's talking about getting everyone onto 16/44 (and away from MP3) as an initial aspiration. Compressed lossless is inherently variable rate -- depending on musical complexity -- so maybe that's what he's referring to.
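For what it's worth, here is a tiny sketch of the "variable rate" point, using zlib as a stand-in for a lossless audio codec such as FLAC or ALAC (the principle is the same: output size, and hence bitrate, depends on how predictable the signal is):

import math, random, struct, zlib

def pcm16(samples):
    # Pack samples as 16-bit little-endian PCM.
    return b"".join(struct.pack("<h", int(s)) for s in samples)

fs = 44_100  # one second at CD sample rate, mono
pure_tone = pcm16(8000 * math.sin(2 * math.pi * 441 * i / fs) for i in range(fs))
noise = pcm16(random.randint(-20_000, 20_000) for _ in range(fs))

for name, raw in (("pure 441 Hz tone", pure_tone), ("noise-like signal", noise)):
    ratio = len(zlib.compress(raw, 9)) / len(raw)
    print(f"{name}: compressed to {ratio:.0%} of the original size")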
BTW, it's interesting to see the first few comments on that WHF piece are supportive of Sonos' common sense approach. My impression is that WHF rather bangs the HiRes drum, along with whole swathes of the industry desperate to get us to replace everything (kit, media) all over again.
WHF bangs whatever drum is at hand. I don't blame them; if they did not, they would have little to publish except official specs and photos of kit. Declaring whatever is under review to be the next great thing is what they do every month - as does every other magazine - to have something to exclaim about. Advertising keeps them in that mode, but even without ads, how else would they sell the latest issue?
Given the sound quality obtainable from Apple Music via 256 kbps AAC, I doubt there will ever be a stampede in the direction of 16/44 now.
We'll see. With Spotify rumoured to be launching a lossless service, I'd be surprised if Apple didn't follow suit. And Apple has the financial muscle to price it aggressively -- at, say, a premium of just 50%.
Yes, time will tell. All Apple music sales, even the "Mastered for iTunes" albums, have been in 256 AAC, so what happens there is also an open question, though I am sure those sales are on a sharply declining trend anyway.
But if the new service is just the entire existing catalogue, now lossless at a premium, I know I won't be changing over.
I think it is unwise to level an accusation of lying based on a PR statement. Journalists and/or University PR Departments routinely misunderstand or misquote scientists. Dr Reiss makes no such statement in his published paper.
WHAT!!?? Dr. Reiss was the one making the PR statement!
The full quote is here:
https://www.sciencedaily.com/releases/2016/06/160627214255.htm
Dr Joshua Reiss from QMUL's Centre for Digital Music in the School of Electronic Engineering and Computer Science said: "Audio purists and industry should welcome these findings -- our study finds high resolution audio has a small but important advantage in its quality of reproduction over standard audio content."
I stand by my accusation. Dr. Reiss is lying in his PR statement. If you think this is "unwise", then the onus is on you to prove the quote is false. Good luck!
As to whether he can prove anything about the superiority of Hi-Res, my question was quite specific; this "study" says nothing towards that question and is thus irrelevant. I imagine we have a better chance of him actually including studies that find there is no difference, like the M&M study, in his next meta-analysis than we have of him proving there is a preference. That is, slim to none. This study was a carefully crafted shill for the Hi-res audio industry, and even so, it proved nothing. The author had to lie in his PR statement in order to accomplish his goals.
The full quote is here:
https://www.sciencedaily.com/releases/2016/06/160627214255.htm
I agree, and I had not read this before.
The man has gone beyond what he found in his own study and has been economical with the truth about its findings.
There, does that sound better? ;)
Let me point out, having been "quoted" in a few PR statements myself, that often the quote isn't what the person said, it's what the marketing people want the person to say.
I don't know whether he really said that, or was told to say that.
On the other hand, I'm enjoying this discussion. Thank you all for adding to my knowledge.
You state this as fact rather than opinion. You need to provide solid evidence or withdraw the remark.
Another statement of fact for which you will need to provide proof. In this case, it would be appropriate to contest a peer-reviewed paper with a dissenting publication.
I remind you of this statement you made recently:
So preface it all with "in my opinion", just as everybody else already inferred. Except the "it proved nothing" statement, which is a clause describing the use of the study as a shill for the Hi-res industry, and it indeed provided no proof of any advantage for Hi-res audio. Your convenient snipping of the context of my posts does not go unnoticed.
Then go ahead and pat yourself on the back for switching the conversation to semantic childishness after you were made to look stupid about the PR quote. Then maybe we can get this conversation out of the kids' sandbox and back to discussing things like adults?
By the way, no comment on the PR quote and what it means with regard to any bias shown in the exclusion of seminal studies on Hi-res audio like the M&M study? Also, explain why studies that did factor in the existence of IM distortion were left out, whereas studies that didn't consider IM were included?
Or are you just going to play gotcha?
IMO, before dabbling in the meta domain, may I be pointed to even one ABX done in line with well-established principles and accepted in the world of science that establishes:
1. That differences were heard by a statistically significant part of the sample, to establish that these are audible (a worked significance check is sketched after this post).
2. If so, from those that heard the difference, how many preferred it, how many did not, and how many found both just as good.
And this for any of the audiophile-supported exotics like DACs/hi-res/MQA etc. - anything except speakers, where I don't need to be convinced that audible differences will survive blind testing. Which is just as well, because speaker testing involves hard-to-achieve speaker handling to eliminate all but one variable - the speakers being compared - with each speaker having to be kept in exactly the same place relative to the room and listener; compared to this, upstream component ABX is a lot easier, though it still needs instruments and strict protocols.
Peer review would be good, but by peers with credibility, not the tester's good friends/bar buddies/spouses - who, by definition, also qualify as peers. Scholarly peer review would be more like it.
Meta can only be step 2; it is meaningless without step 1.
Unless proved wrong, my claim is that not one such review exists in the world. And there is corroboration for that claim: someone here said - wrongly, by the way - that if Sonos kit were in the audiophile league, Sonos would not have hesitated to say so in its marketing. So for sure, if such a test existed, it would have been very visible in advertising by now; anyone who knew of such evidence would not have been shy about using it to sell their product. Why hasn't anyone done so? Because no such test that establishes a difference exists. And because makers with even mediocre lawyers know the consequences of false claims.
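A worked sketch of the "statistically significant" criterion in point 1 above, assuming a forced-choice ABX run where pure guessing gives 50% correct per trial:

from math import comb

def abx_p_value(correct: int, trials: int) -> float:
    # One-sided exact binomial p-value: probability of getting at least
    # `correct` answers right out of `trials` by guessing alone.
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

print(round(abx_p_value(14, 20), 3))  # 14/20 correct -> p ~ 0.058, just misses the usual 0.05 bar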
Hi all,
I am the author of the meta-analysis paper being discussed, and I was asked to comment on some of the points made in this discussion.
The paper being referred to is available at http://www.aes.org/e-lib/browse.cfm?elib=18296 , and it links to additional resources with all the data and analysis.
Also, note that this was unfunded research. At no point has any of my research into high resolution audio or related topics ever been funded by industry or anything like that.
On to the specific comments:
“Dr. Reiss was the one making the PR statement… Dr. Reiss is lying in his PR statement” - I didn’t write the press release! Press releases are put forward by organisations with the aim of trying to get the press to cover their story, and as such are a combination of spin, marketing, opinion and fact. In this case, it was written by a press officer at my university, and then AES issued another similar one. The ‘advantage’ quote was based on a conversation that I had with the press officer, but it was not text directly from me (I just checked my email correspondence to confirm this). It most likely came from trying to translate the phrase ‘small but statistically significant ability of test subjects to discriminate’ to something that can be easily understood by a wide audience.
“explained by the presence of intermodulation distortion” – This was looked into in great detail; see the paper and supplemental material. First note that intermodulation distortion in these studies would primarily arise from situations where the playback chain was not fully high resolution, e.g., putting high resolution content through an amplifier that distorts high frequencies. Anyway, quite a lot of studies did look into this and other possible distortions (see Oohashi 1991, Theiss 1997, Nishiguchi 2003, Hamasaki 2004, Jackson 2014, Jackson 2016) and took measures to ensure it wasn’t an issue. This includes most studies that found a strong ability to discriminate high resolution content. In contrast, some studies that claim not to find a difference either make no mention of distortion or modulation (like Meyer 2007), or had low resolution equipment that might cause distortion (like Repp 2006).
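To make that mechanism concrete, a minimal, illustrative sketch (not from the paper): two ultrasonic tones, inaudible on their own, pass through a mildly non-linear "amplifier" and produce an audible difference tone at 3 kHz.

import numpy as np

fs = 96_000                       # sample rate of the hi-res chain, Hz
t = np.arange(fs) / fs            # one second of samples
x = 0.4 * np.sin(2 * np.pi * 24_000 * t) + 0.4 * np.sin(2 * np.pi * 27_000 * t)

y = x + 0.05 * x**2 + 0.02 * x**3   # gentle 2nd/3rd-order non-linearity

spectrum = np.abs(np.fft.rfft(y)) / len(y)
freqs = np.fft.rfftfreq(len(y), d=1 / fs)
in_band = (freqs > 100) & (freqs < 20_000)          # roughly the audible range
peak = freqs[in_band][np.argmax(spectrum[in_band])]
print(f"strongest audible-band product: {peak:.0f} Hz")   # 27 kHz - 24 kHz = 3 kHz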
“[I have yet] to come across even one for hi res that does a decent job of doing level matched blind AB, leave alone a full protocol ABX… may I be pointed to even one ABX done in line with well established principle” – see the paper. There were a lot of studies that do double blind, level matched ABX testing. Many of those studies reported strong results. They all could suffer issues of course, but the point of the paper was to investigate all those studies.
“absolutely no evidence in the meta-analysis that there is an ‘advantage in its quality’” – I would not go as far as that. I neither claim there is or there isn’t; ‘advantage’ is too subjective. However, many of the studies looked at preference, or at what sounded ‘closer to live’, or asked people to comment on subjective qualities of what they heard. They do suggest an advantage to audiophiles, but I would argue that the data is not rigorous or sufficient in this regard.
“his cherry picking of studies” – A strong motivation for doing the meta-analysis was to avoid cherry-picking studies. For this reason, I included all studies for which there was sufficient data for them to be used in meta-analysis. That way, I could try to avoid any of my own biases or judgement calls. Even if I thought it was a poor study, its conclusions seemed flawed or it disagreed with my own conceptions, if I could get the minimal data to do meta-analysis, I included it.
“chance of him actually including studies that find there is no difference, like the M&M study … slim to none… disclusion of seminal studies on Hi-res audio like the M&M study” – I did include the M&M study (Meyer 2007)! See Sections 2.2 and 3.7 and Tables 2, 3, 4 and 5. I couldn’t include it in the Continuous results because Meyer and Moran never reported their participant results, even in summary form, and no longer had the data (I asked them), but I was able to use their study for Dichotomous results and it didn’t change the outcome.
‘explain why studies that did factor in the existence of IM distortion were left out, whereas studies that didn't consider IM were included’ – see previous points. I included every study with sufficient data, some of which considered IM and some didn’t. The Ashihara study (references 25, 60 and 61), was a detection threshold test, demonstrating only that IM could be heard and could be a factor in discrimination tests. Nor did they report results in a form that could be used for meta-analysis.
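For anyone unfamiliar with how dichotomous (correct/incorrect) results feed into a pooled estimate, here is a generic sketch - inverse-variance weighting of log odds against the 50% chance rate. It is illustrative only, not necessarily the exact procedure of the paper, and the study figures are hypothetical:

from math import exp, log, sqrt

studies = [(60, 100), (210, 400), (33, 64)]  # (correct, total) per study - made-up numbers

effects, weights = [], []
for correct, total in studies:
    wrong = total - correct
    effects.append(log(correct / wrong))           # log odds of a correct answer
    weights.append(1 / (1 / correct + 1 / wrong))  # inverse of its approximate variance

pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
se = sqrt(1 / sum(weights))
low, high = exp(pooled - 1.96 * se), exp(pooled + 1.96 * se)
print(f"pooled odds of a correct answer: {exp(pooled):.2f} (95% CI {low:.2f} to {high:.2f})")
# Guessing corresponds to odds of 1.0; a confidence interval entirely above 1.0
# would indicate better-than-chance discrimination across the pooled studies.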
So, once again, the meta-analysis results do not state that there is a qualitative advantage, and are thus irrelevant to this thread.
One question for the author: if the quote which stated the opposite was not what you said, why have you not requested a retraction? It's a pretty embarrassing statement, and a poor reflection on one's scholarship. If it were me, I'd be scouring the internet for every reference to that quote and requesting it be changed. Yet I see it in dozens of places touting the benefits of Hi-res music where none definitively exist.
“[I have yet] to come across even one for hi res that does a decent job of doing level matched blind AB, leave alone a full protocol ABX… may I be pointed to even one ABX done in line with well established principle” – see the paper.
I have seen the paper. But is there any way to see a fully documented test itself? A link to one, perhaps - that is the only way to get to it for most of us.
Unless I read the documented test, I can't judge how good it is. I may still not be able to even then, but it will be more than anything I have been able to find till now. By good, I mean how well single-variable ABX protocols have been complied with.
Thank you in advance.