High resolution sound

  • 10 March 2018
  • 61 replies
  • 10,179 views

I wonder why Sonos does not support 24-bit / 192 kHz audio when, for instance, Bluesound has chosen to support that format.



I’m fine with listening to 44.1/16 files. I can’t really tell the difference from 192/24 when listening to stereo files. My ears are no better than anyone else’s, and they’re certainly not golden, but I do have experience with studio work. I feel (please note, I didn’t just write “know”) that something happens to the sound when mixing 24+ mono tracks down to stereo if the sample rate is low. I have no idea if it is really the sample rate that matters, or if it could be the noise floor of the bit depth. Funny thing is, this feels the same whether the mixing is done in the analogue or digital domain.
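The “noise floor of the bit depth” hunch can be put into rough numbers. As a minimal sketch (not anything from this thread), the textbook quantisation-noise formula for ideally dithered PCM puts the noise floor roughly 6 dB per bit below full scale:

```python
# Back-of-envelope check on the "noise floor of the bit depth" idea.
# For ideally dithered PCM, peak SNR is roughly 6.02 * bits + 1.76 dB
# (the textbook quantisation-noise formula).

def dynamic_range_db(bits: int) -> float:
    """Approximate peak signal-to-noise ratio in dB for a PCM bit depth."""
    return 6.02 * bits + 1.76

for bits in (16, 24):
    print(f"{bits}-bit: ~{dynamic_range_db(bits):.1f} dB")
# prints: 16-bit: ~98.1 dB and 24-bit: ~146.2 dB
```

By this yardstick, the extra range of 24-bit mainly buys headroom during production and mixing (the scenario described above) rather than an audible difference in domestic playback.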

Same here. I have a few tracks that I have purchased in both CD and high-res quality, and I cannot tell the difference even through a good pair of headphones (Sennheiser HD660s). I have not tried any long-term testing, of course, but I am happy enough with CD quality not to feel the urge to pursue higher-resolution sound. That is with headphone listening, anyway. With speakers it is even less important, and that is with a medium-range system fed via a Sonos Connect. I find that the problems with my living-room acoustics (some bass boom I struggle to get rid of) would spoil any perceived difference, however small. If anything, I would love Sonos to provide a full-range EQ option in the settings so I can better adjust the sound tonality 🙂.
You still haven't answered the question posed, i.e. are you questioning DB ABX testing itself, or the methodology of its implementation? You seem to imply the former, yet you only state the latter as the cause for concern. All the things you mention (limited sample size, lack of variable control, etc.) do not point to a lack of efficacy in the concept of DB ABX testing, only to the particular implementations lacking the things you state. So again, which is it?

Kumar, such a world (devoid of healthy scepticism and critical inquiry) sounds quite stifling to me.

Au contraire: after spending a decade as an audiophile, I found THAT to be a stifling world, cult-like in many ways and full of exotic terms that were really gibberish. I am now happier, and listen to more varied music in a year than I did in a decade of audiophilia, which, I came to realise, was little more than playing with older boys' toys, with music just a test signal.
Trusting your ears, and being aware of unavoidable biases so as not to be fooled by them, leaves more music to enjoy without obsessing about file sizes and codecs. A better place to be, I find.
And home audio hardware is far too trivial a subject to devote much of a too-short life to critical inquiry into. Unless one designs, tests and sells it for a living.


Kumar, I was also referring to the audiophile world you mentioned 🙂

I also agree about the last part. Experiments and research are part of what I do for a living, hence my interest in learning more 🙂. I fully appreciate there are more important things to worry about in life.

Testing whether a group of people can hear differences between amplifiers requires a lot more than just the ABX test which has not been dealt with in the studies I have come across.
Does it, though? I really don't think there's any evidence for that. Do point out the studies that concern you, and let us know what aspects of these studies were flawed.

The world of audiophilia is littered with statements that a different resolution / bitrate / amplifier / power cable / speaker cable / interconnect / DAC made a 'night and day difference'. And this 'obvious' difference endures right up to the point where people have to prove it in a properly controlled test where there are no clues as to which system they are listening to. And then they can't.

(Note that I don't include components like speakers in this list. Or turntables. These are quite capable of introducing different forms of character (i.e., distortion) into music playback, which can be easily recognisable.)


Hey pwt, I am really not looking at audiophilia, just trying to learn about the ABX test methodology. My comment regarding methodology issues refers to the studies that were linked in this thread above. I have not found others so far, but I have found some critical commentary on the limitations of the methodology itself. As I mentioned above, some of the current limitations include limited sample sizes, lack of control for confounding variables (e.g. age range of respondents, training, preparation, time of day, longitudinal factors, etc.), little or no peer review from the scientific community, lack of repeated tests, and so on. There is a very long literature on each of these limitations, which I think is out of scope for this thread, but I am happy to discuss more via PM if anyone is interested. Cheers

PS: The lack of scientific objection to audio ABX testing is indeed troubling, but it is directly related to the lack of scientific work on the method in this specific area. In essence, there is not enough scientific work on audio ABX testing (I am not counting forum discussion as scientific work). Lack of objection to audio ABX does not mean that the methodology is without limitation, nor does it guarantee it is valid. In this case it most likely shows there is not a lot of scientific work in the area to generate healthy debate. ABX variants are used in clinical research, of course, and the way the method is implemented there is a clear indication of where the audio ABX methodology needs to advance.

I've seen no serious, scientifically reasoned objection to DB ABX testing.

Except - surprise, surprise - in the Alice in Wonderland world of audiophiles, where the conclusion is a decided one, and things that conflict with it are tossed aside for reasons that range from kindergarten quality to sophistry at the other end of the scale.
It isn't a surprise because these beliefs, some of which are sincerely held, are the vital underpinnings of a lifelong elitist hobby in many cases, and are pandered to by the industry/specialist media that needs these to survive. Trying to get to the bottom of this edifice of argument is wasted effort because it often gets into things similar to the blind faith v science argument for something very trivial, with arguments in favour of audiophilia also supported by human biases that are hard wired into all of us.
Sonos is in a place where it does not need to do such pandering for commercial success, and its user base is saved from having to pay the costs of such pandering in Sonos product prices.


Kumar, such a world (devoid of healthy scepticism and critical inquiry) sounds quite stifling to me. I prefer environments that encourage learning and discovery, even if that means challenging established beliefs. It also means being ready to admit when I am wrong about something and then learning from it 🙂. Sorry for the philosophical detour.

Hi guys, sorry for the delay, tons of work keeping me busy 😞.
To clarify, I am personally in favour of the ABX test, but (for now) I have reservations about the way it is used and the attempt to reach scientifically valid results. Testing whether a group of people can hear differences between amplifiers requires a lot more than just the ABX test, and this has not been dealt with in the studies I have come across. This is where the overall methodology design, and then the testing administration and results interpretation, come into play. Based on the studies I have read, my understanding so far is that the methodology needs work and that the results (i.e. showing no difference between amps) are not definitive or generalisable. I will keep reading to see what else is out there. Cheers.

You still haven't stated whether you are unconvinced by the DB ABX approach itself, or by the manner in which it has been employed (i.e., the specific studies). I can't untangle it in what you've posted so far.

Which is it?


I too am confused by this, and await clarification before I comment any more.
... What I propose is that based on established scientific methodology (e.g. psychology and social science hypothesis testing research methods etc.), abx testing needs work both in terms of methodology validation as well as the hypothesis testing results themselves. ...
You still haven't stated whether you are unconvinced by the DB ABX approach itself, or by the manner in which it has been employed (i.e., the specific studies). I can't untangle it in what you've posted so far.

Which is it?
Sampling frame and sample size requirements are an integral part of research design in any methodology that employs hypothesis testing. They matter for various reasons, including the degree to which results can be generalised to a population and the reduction of false positives. Controlling for confounding variables affects the explanatory power of the test. What I propose is that, judged against established scientific methodology (e.g. hypothesis-testing research methods in psychology and the social sciences), ABX testing needs work both in terms of methodology validation and in terms of the hypothesis-testing results themselves. In that respect the test results are indicative but not definitive. Sorry for borrowing the 'gold standard' phrase; it is not exactly scientific. None of this is my opinion, though, just standard social and psychological research design principles. Having said that, I fully respect your opinions and I am not trying to disprove them. Thank you for the very useful links on ABX, they helped a lot 🙂
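The sample-size point can be made concrete with the standard one-sided binomial test applied to ABX scores. This is a generic statistics sketch, not taken from any of the studies discussed here, and the trial counts are illustrative:

```python
# A single ABX trial is a forced binary choice, so a listener who hears
# no difference scores like a fair coin. A one-sided exact binomial test
# then says how surprising a score would be under pure guessing.
from math import comb

def abx_p_value(correct: int, trials: int) -> float:
    """P(scoring >= `correct` out of `trials` by guessing alone)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# The same 70% hit rate at three illustrative trial counts:
print(abx_p_value(7, 10))   # ~0.17  -> not significant
print(abx_p_value(14, 20))  # ~0.058 -> still marginal
print(abx_p_value(35, 50))  # ~0.003 -> convincing only at larger n
```

A 70% hit rate is weak evidence at 10 trials but strong at 50, which is exactly why small-sample ABX results support indicative rather than definitive conclusions.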
In my view ABX is the way forward, but the lack of peer-reviewed, repeatable studies indicates to me that it is not yet at the gold standard level. The lack of peer-reviewed studies is not the only problem: sample sizes are small, confounding variables are not controlled, etc. With a bit more work I think that ABX may become scientifically established.

What do you mean by "gold standard"? And what makes you think DB A/B/X is not "scientifically established"? It is accepted by the scientific community as a standard, otherwise the scientific community would reject it in their publications. To date, there are no studies that prove it is unacceptable. Like any other accepted premise in science, until there are studies disproving the premise, it is accepted. How much more do you need to make it "gold" or "established"?

As to why there are no studies "proving" DB A/B/X is a standard, I suggest to you that eliminating all outside variables in order to prove one variable is different from another is merely common sense, and "proving" it is truly a lesson in the mundane. It is only the audiophiles who try to "disprove" something so basic, and they haven't done it yet.
So, to be clear, it’s not the methodology that you are questioning, but the studies performed using it? Because there is absolutely no question that DB ABX is scientifically established as a gold standard methodology. (If you feel it falls below this standard, feel free to propose an alternative.)

Absolutely, let’s have more audio studies conducted. However, note that since one can’t logically prove a negative hypothesis (“Hi-Res Audio cannot be distinguished from CD quality audio”), one can do studies forever without convincing the skeptics.
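The "can't prove a negative" point can be quantified with a simple power calculation: all a test of a given size can tell you is how likely it is to detect a small, real ability if one exists. The 60% true hit rate and 5% significance level below are illustrative assumptions, not figures from any study:

```python
# One cannot prove the null ("no audible difference"); one can only ask
# how likely a test of a given size is to DETECT a small, real ability.
# Assumed numbers for illustration: a listener who is right 60% of the
# time, and a 5% significance level.
from math import comb

def p_value(correct: int, trials: int) -> float:
    """One-sided binomial p-value under the guessing (50%) null."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

def power(trials: int, true_rate: float = 0.6, alpha: float = 0.05) -> float:
    """Chance a true_rate listener scores well enough to beat alpha."""
    # Smallest score that is significant at level alpha under guessing.
    threshold = next(c for c in range(trials + 1) if p_value(c, trials) <= alpha)
    # Probability the genuinely able listener reaches that score.
    return sum(comb(trials, k) * true_rate ** k * (1 - true_rate) ** (trials - k)
               for k in range(threshold, trials + 1))

for n in (10, 50, 200):
    print(n, round(power(n), 2))
# Power climbs from roughly 5% at 10 trials towards ~85% at 200 trials,
# so a null result from a small test says very little either way.
```

This is the flip side of the sample-size objection: a small study that finds no difference has very little chance of finding a subtle one even if it is real, while a large study that still finds nothing is far more informative.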
it appears there is still some debate on the double blind test methodology
There's still debate on a lot of things where there really shouldn't be, because the science is clear. However, people are free to believe whatever unproven nonsense they like ... as long as they make sure they go to that 'Homeopathic A&E' when they have a serious medical emergency.

I've seen no serious, scientifically reasoned objection to DB ABX testing. All it's saying is that in order to prefer one thing over another, a minimum condition is that you can actually tell the two things apart under properly controlled conditions. If you can't tell them apart, how can you possibly have a preference?

What new 'gold standard' would you prefer? If you want to theorise objections to the logic and methodology of DB ABX (an interesting exercise), you'll also need to theorise experiments that will validate those objections.

Or, are you actually doubting the methodology, or the results of its application?
Great input, and thanks for the links, folks 🙂 I have been doing some reading, and it appears there is still some debate on the double-blind test methodology:
https://www.audiosciencereview.com/forum/index.php?threads/limitations-of-blind-testing-procedures.1254/
Very few (if any) peer-reviewed studies have been conducted, unfortunately (the AES study was apparently not peer reviewed). The implication, to my mind, is that the ABX testing methodology is certainly the way to go but is not yet at the gold standard level (in terms of methodological validity and results generalisability). It seems to need additional validation and refinement.
I will keep reading on this for sure. Super interesting

That had me laughing. Not only did the 'golden ears' of the Audiophile/Take Home Group fail to detect in a long-term test whether they'd been handed a black box with a 2.5% distortion circuit, it would appear that their in-built aversion to A/B stopped them from even switching the tape loop in and out as a comparator.


Yeah, you couldn't write this as a script; no one would believe it. That paper really says more about audiophilia than it does about A/B testing.
Have you seen this:

https://www.audiosciencereview.com/forum/index.php?threads/aes-paper-digest-sensitivity-and-reliability-of-abx-blind-testing.186/

It is a summary and discussion of a 1991 paper in which the audiophile preferred "long term listening" test was compared with quick switching A/B/X testing. The self-described "golden eared" audiophiles failed to identify deliberate distortion put into the source chain in a long term test, whereas engineers using A/B/X found it immediately.

You will not find any scientific literature completely discounting double blind A/B testing, either with audio or in general. On the other hand, you will find scientific studies, from respected publications, that accept double blind A/B testing as a standard for measuring heard differences between two audio sources. In reality, that is all you need. And until someone proves the ineffectiveness of double blind A/B testing of audio sources and submits it to a scientific publication for critique, I'm afraid all the protesting you hear on audiophile sites is about as relevant as them screaming in the wind.

Totally agree with you,
Audiophile sites are exactly what I would like to avoid as I try to educate myself in this area 🙂. Subjective / perceived listening sometimes makes for fun reading, but the ABX method is a great attempt at controlling for subjectivity (to an extent). I have attempted to research the area, but so far I have found debate (quite strong, actually) and limited actual peer-reviewed validation. My goal is to learn more about it (and certainly avoid the subjective audiophile articles), and apologies for derailing the thread.
If you have any links / recommendations for scientific literature that employs the methodology I would be grateful for the input :-)

Cheers

PS - The Boston Audio Society paper was peer-reviewed.
As the last part of the first post on the linked thread says, blind AB tests are commonly used in medical trials to test the efficacy of new drugs by ruling out placebo effects. The rest of the thread deals with the subject, as it relates to home audio, in its entirety.

https://hydrogenaud.io/index.php/topic,16295.0.html

There is no argument in general about AB testing being a recognised part of the scientific method. But as someone here has also remarked, life is too short to split hairs on the subject in as trivial a field as home audio, and given that I hear no differences either, I too haven't the inclination to do much more on the subject than understand how what holds true for medical testing/science/human behaviour patterns in general is based on principles that apply equally well to home audio.

The thing is that those that make extraordinary claims have the burden of proving these as well, and no one has posted scientific proofs for claims in the domain of home audio that differences are heard even after all but one variable have been rigorously eliminated. But these days, I am happy to let such claimants live happily in their own world of beliefs as long as I am allowed to live happily in mine.
Folks, can anyone recommend a few scientific sources that address ABX testing? I am trying to research this further and to clarify whether ABX is indeed established as the gold-standard test. I am finding some debate on it, but not an established agreement that it is flawless. While the Boston Audio Society and the Audio Engineering Society are both interesting sources, I am mostly looking for peer-reviewed articles. If anyone has some links / suggestions, can you please share? Thanks

... On a personal note, I have also not been able to differentiate between lossless and high-res files (using a good DAC and headphones). Not a scientific test, though.