"The Beginner’s Guide to Hi-Res Audio"

  • 7 December 2021
  • 92 replies
  • 2261 views




Userlevel 6

 

“HD” for Amazon sort of has two different meanings. HD is the name of the service tier you can get within Amazon Music. Within that tier, SD (Standard Definition) tracks are the lower-quality streams, HD tracks are 16-bit/44.1 kHz (CD quality), and Ultra HD tracks are 24-bit. There is also now Atmos music as part of this service. When playing Amazon Music on Sonos, SD tracks get no label, while HD, Ultra HD, and Atmos tracks each get the corresponding label.

 


In the UK at least, Amazon has recently updated things so that HD, Ultra HD and Dolby Atmos are now all part of the “Amazon Music Unlimited” service; there is no separate “HD” tier.

The respective audio-resolution badge only appears when initiating playback from Amazon Music within the Sonos app. No badge currently appears when casting to Sonos from the Amazon Music app, via AirPlay, etc.

Userlevel 5
Badge +9

Also, I am curious what the difference is between Amazon’s HD and Sonos Radio HD, or are they the same?

Sonos Radio HD streams in 16-bit/44.1 kHz 

IIRC Amazon HD streams at up to 24-bit/192 kHz, but on Sonos it is capped at 24-bit/48 kHz.
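For a sense of scale, the raw PCM bitrates behind those figures are simple arithmetic. This is just a rough sketch; actual streams use lossless compression, so real-world bandwidth is lower than these uncompressed numbers, and the tier names in the comments are taken from the discussion above:

```python
def pcm_bitrate_mbps(bit_depth, sample_rate_hz, channels=2):
    """Raw (uncompressed) PCM bitrate in megabits per second."""
    return bit_depth * sample_rate_hz * channels / 1_000_000

print(pcm_bitrate_mbps(16, 44_100))   # 1.4112 Mbps (CD quality / "HD")
print(pcm_bitrate_mbps(24, 48_000))   # 2.304 Mbps  (the 24/48 cap mentioned for Sonos)
print(pcm_bitrate_mbps(24, 192_000))  # 9.216 Mbps  (Amazon's top "Ultra HD" tier)
```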
 

 

“HD” for Amazon sort of has two different meanings. HD is the name of the service level you can get within Amazon music. […]

 

 

When comparing services, for me at least, it’s helpful to set aside the marketing speak (SD, HD, Ultra HD), because the meaning varies with the context.

On Sonos, if you don’t own an Atmos-enabled product, Amazon Music Unlimited can stream at up to 24-bit/48 kHz (this may vary depending on your location).

 

 

 

“HD” for Amazon sort of has two different meanings. HD is the name of the service level you can get within Amazon music. […]

In the UK at least, Amazon has recently updated so HD/Ultra HD/Dolby Atmos are now all under the service “Amazon Music Unlimited”, there is no separate “HD” service.

The respective audio resolution badge only appears when initiating playback from Amazon Music, within the Sonos app. No badge currently appears when casting to Sonos from the Amazon Music app by airplay etc.

 

You’re right, it’s Unlimited in the US as well.   

 

As for the resolution badge, I think it plays in SD (no badge) when initiating playback from the Amazon app or Alexa, because Amazon doesn’t know what resolution the specific Sonos room can play. That’s my theory, anyway.

Userlevel 6

As for the resolution badge, I think it plays in SD (no badge) when initiating playback from the Amazon app or Alexa, because Amazon doesn’t know what resolution the specific Sonos room can play. That’s my theory, anyway.

 

 

Yes, I also believe it is SD, rather than the respective badge just not showing.

I’ve noticed, whilst playing around with Ones, that it takes a few seconds/attempts before Ultra HD appears. Sonos must do some monitoring of the network to establish what quality to play, although Dolby Atmos music plays straight away on the Arc. Hmmm, maybe that’s why I can’t get the paired Ones and Sub to play Ultra HD… need to troubleshoot some more…

 

Userlevel 1
Badge +2

In New York City… After Sonos made the hi-res announcement, I subscribed to Amazon Music Unlimited. I am playing Rush in Ultra HD on a new Beam 2, and my app says:

Tom Sawyer
Rush (with the Amazon “music” logo underneath), Ultra HD, and
Dolby Atmos to the far right.

Took a while, but an “HD” now appears where Dolby Atmos was, once the playlist moved to another song, Limelight.

And now just switched to “Ultra HD” on far right...

Whenever I watch HD-quality video streams, I don’t need any badges on the screen to let me know what quality I am seeing, and when the stream quality occasionally drops because of broadband issues, this too is immediately obvious.

Clearly this analogy does not apply to hi-res audio, seeing how people are looking for badges to confirm what is streaming because, I suppose, their ears are not telling them. If so, why bother?

Userlevel 1
Badge +2

@ Kumar. I think we are just seeking an additional data point to make sure that it is working. Otherwise, your point is sensible.

@ Kumar. I think we are just seeking an additional data point to make sure that it is working. Otherwise, your point is sensible.

From what I read about this search, it seems to me that a lot of angst is generated when the data point is not available; angst that must be getting in the way of just sitting back, trusting one’s ears and enjoying the music. There are enough more real things happening around all of us; better to save the worry and the energy to deal with those.

Further to the preceding: I believe that this hi-res thing is being peddled just to sow dissatisfaction among vulnerable users of a technology that has peaked in terms of what it can objectively deliver, so that they spend more money to pad the pockets of these peddlers.

Meanwhile, none of these peddlers of hardware and services is doing anything to address the bigger issue, which causes much more aggravation: distracting sound-level changes from one track to the next when playing playlists with the random-shuffle feature. Having to constantly move the volume control around to deal with this is very aggravating and damages the listening experience.

Atmos is a different case; that does sound very different from HD/SD, though even there personal preference may mean that it isn’t an improvement. But at least it does not need labels to announce itself; the sound does that job.

I guess it’s a case that people want to know that they are getting the quality of audio they are paying for. Whether it’s actually needed, or not, becomes a separate question. 

Some say, in some circumstances, that they can hear a difference between HD (16-bit) audio and Ultra HD (24-bit) audio. I’m not one of those people. I have been quite happy in the past with the lesser 320 kbps AAC lossy audio, so I’m more than happy with the Amazon HD standard.

It’s not actually costing me any more to have the higher-quality Amazon audio on Sonos anyway (24/48 Atmos/Ultra HD audio in some instances). As long as it all plays without interruption and sounds good (to me), I’m okay with that. Especially so since, as a Prime Movie/Music Unlimited annual subscriber, this ‘merged’ Amazon music service is actually costing me £50 per year less than before.

 

Some say, in some circumstances, that they can hear a difference between HD (16-bit) audio and Ultra HD (24-bit) audio. I’m not one of those people. I have been quite happy in the past with the lesser 320 kbps AAC lossy audio.

Anything can be said, but no one has even claimed to have picked this difference in a controlled, level-matched blind listening test using the same source file in its 16-bit version and comparing that with the 24-bit version. Not even on the most accurate headphones, which may perhaps be able to pick the difference between 16-bit and lossy 320. And even in a quiet domestic environment, using the best/most expensive speakers out there, this has not been demonstrated either; room acoustics will get in the way.

Perhaps this is all pickable if the tester is the dog in the HMV logo of many years ago… with training!

On the other hand, the psychological reasons for hearing all kinds of differences are well known and not just in the world of home audio. And of course, where sound levels are not accurately matched between the alternatives, the reason isn't even psychological.

Userlevel 1
Badge +2

I guess it’s a case that people want to know that they are getting the quality of audio they are paying for. Whether it’s actually needed, or not, becomes a separate question. 

 

Pretty much this ^. That is all. And as a psychologist, I find lectures about psychology to be even further drivel to my ears, or eyes.

Userlevel 5
Badge +14

Why can’t the music industry standardize these designations and require compliance? I’m not sure who would do it or how, but I just want transparency and honesty, not marketing speak.

Because it worked so well with the CEC consortium, and the various issues many manufacturers have had implementing that?

I get where you’re coming from, and support it, but getting various companies to pay attention to a “standard” as opposed to their own bottom line, and marketing strategies seems like a false hope. 

 

I get where you’re coming from, and support it, but getting various companies to pay attention to a “standard” as opposed to their own bottom line, and marketing strategies seems like a false hope. 

It probably won’t work because it is snake oil to start with, in terms of what is audible about it. As opposed to the HD-and-beyond video side of things, where there are universally adopted standard definitions for things like HD Ready, HD, 4K, etc.

Agreed, Kumar, but even an agreement on the nature of the snake-oil nostrum might benefit folks who look for these false “key words” in marketing statements, and might ease some of the posts we deal with.

In a broader context, home audio is a mess anyway where definitions are concerned, with HiFi and “audiophile quality” as two classic examples. So, in that tradition, even someone like Amazon plays fast and loose with the definition of HD, applying it to the CD format. And if I am not mistaken, Sonos has also jumped on that HD bandwagon thereafter.

PS: And of course, the total mess over the definition of output power in watts, ranging from RMS to PMPO… that one is a doozy.

Userlevel 7
Badge +21

In the US we have had some regulation of audio amplifier power ratings since 1974. This is a look at the rule and its problems:

“Many manufacturers have taken advantage of this vacuum by publishing a confusing array of unrealistic power claims. Some go so far as to slap a sticker on the front panel with an inflated power figure that’s based on only one channel driven at 6 ohms and 10% THD.”

https://www.audioholics.com/amplifier-reviews/ftc-amplifier-rule-help-protect-home-audio-consumers-today

 

If the industry won’t establish and enforce standards then we are reduced to getting government involved which is rarely the best solution.

Why can’t the music industry standardize these designations and require compliance. Not sure who or how it would be done, just want transparency and honesty not marketing speak. 

 

An industry, or more accurately just an industry-related group, can create a standard, but it can never require compliance. All it can really do is market and educate the public on what the standard is and why it’s important, and make sure the standard is followed strictly by products that apply and claim to meet it. If the public doesn’t know or care about the standard, and it’s loosely enforced, then it’s pointless. All that takes a lot of money, and generally speaking, if it doesn’t help increase sales, why bother? I think the different music services’ primary means of competing is on audio quality, so they don’t have a big interest in standards.

That said, it seems standards pretty much existed and worked in the days of physical media. Once everything started going digital and ‘customers’ pirated music in whatever format and quality they wanted, things got all shot to hell. Even when you could start buying music digitally, the industry didn’t want you to know that the quality was worse than CD.

 

 

My quick summary:

Regarding 16- vs 24-bit sample resolution: as a streaming/transport format there will be no audible gain, as long as studios end up compressing the dynamic range of their master recordings to fit into the 96 dB provided by a 16-bit representation of the signal. Also, as has been said before, I doubt anyone (even with golden ears) can hear the difference, as 96 dB SNR sounds "fantastic" while 144 dB (which is what you theoretically get from 24-bit) is just overkill. However, as an internal format in studio as well as listening equipment, 24-bit resolution makes total sense for doing proper volume control and EQ in the digital domain. But this happens anyway, even if your transport format is "just" 16-bit.
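The 96 dB and 144 dB figures follow directly from the bit depth (roughly 6 dB of dynamic range per bit); a quick back-of-the-envelope check:

```python
import math

def dynamic_range_db(bits):
    """Theoretical dynamic range of N-bit PCM: 20*log10(2**N), ~6.02 dB per bit."""
    return 20 * math.log10(2 ** bits)

print(round(dynamic_range_db(16), 1))  # 96.3 dB  (16-bit, CD)
print(round(dynamic_range_db(24), 1))  # 144.5 dB (24-bit)
```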

Regarding sampling rate, the discussion is a little different, though:

There seems to be a common consensus that a sampling frequency of 44.1 kHz is sufficient to accurately reproduce the audible frequency spectrum. In fact, according to the Nyquist theorem, this allows reproducing frequencies up to 22.05 kHz, and only young children can hear frequencies above 20 kHz, while the hearing of the average adult is capped at 18 or even just 16 kHz. So, all good here? Well, not quite...
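The Nyquist arithmetic in that paragraph is a one-liner to verify:

```python
def nyquist_hz(sample_rate_hz):
    """Highest frequency a given sample rate can represent (Nyquist limit)."""
    return sample_rate_hz / 2

print(nyquist_hz(44_100))   # 22050.0 Hz, just above the ~20 kHz limit of human hearing
print(nyquist_hz(96_000))   # 48000.0 Hz
print(nyquist_hz(192_000))  # 96000.0 Hz
```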

Ever since the CD appeared in the '80s, many audiophiles have kept claiming that a good analog record still offers more accurate reproduction of the sound stage and more precise positioning and depth of the instruments. They are right!

This is because there is an (incorrect) notion that equates the frequency spectrum with only the amplitude spectrum, neglecting the corresponding phase spectrum. While it's true that the human ear (and brain) cannot hear the amplitude of frequencies above, say, 18 kHz, our two ears can detect phase differences between much higher frequencies extremely well! So while we cannot hear those frequencies as tones, we can detect the tiny differences in the travel time it takes those inaudible frequencies to arrive at the left and right ear respectively. In other words, our spatial-localization capabilities are of much higher resolution than our frequency-hearing capabilities. By the way, this effect is heavily used by 3D sound systems like Dolby Atmos or THX.
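One way to make the timing argument concrete is to compare the raw interval between samples with a just-noticeable interaural time difference of roughly 10 µs, a commonly cited ballpark that is assumed here (the true threshold is debated, and band-limited reconstruction can in principle encode sub-sample timing, so this sketch illustrates the post's framing rather than settles it):

```python
ASSUMED_ITD_THRESHOLD_US = 10.0  # assumed just-noticeable interaural time difference

def sample_period_us(sample_rate_hz):
    """Interval between successive samples, in microseconds."""
    return 1_000_000 / sample_rate_hz

for rate in (44_100, 96_000, 192_000):
    period = sample_period_us(rate)
    verdict = "coarser" if period > ASSUMED_ITD_THRESHOLD_US else "finer"
    print(f"{rate} Hz: {period:.1f} us per sample ({verdict} than the assumed threshold)")
```

At 44.1 kHz the sample period is about 22.7 µs; only at 192 kHz does it drop below the assumed 10 µs figure, which is the shape of the argument being made above.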

This is why digital audio with increased sampling rates of 96kHz or even 192kHz would indeed provide a very noticeable benefit as it allows for more precise positioning and depth of the sound sources.

I say "would" because every track from Amazon labeled "Ultra HD" which I have seen (or been listeing to) so far is just 24bit/44.1kHz. So it gives me the "useless" 24bit/sample resolution but falls short of providing higher sampling rates which could really make an audible difference.

 

As long as you get HD it's "fantastic"; there is currently no audible difference from "Ultra HD". My hope is that they provide more and more content at higher sampling frequencies in the future. Then it will make a difference!

 

This is why digital audio with increased sampling rates of 96kHz or even 192kHz would indeed provide a very noticeable benefit as it allows for more precise positioning and depth of the sound sources.

On that basis the difference between Red Book and Hi Res in any blind test should be like ‘night and day’, and yet: https://www.realhd-audio.com/?p=6993

My quick summary:

Regarding 16 vs 24bit/sample resolution: As a streaming/transport format there will be no audible gain… […]

 

 

Cite?

This is why digital audio with increased sampling rates of 96kHz or even 192kHz would indeed provide a very noticeable benefit as it allows for more precise positioning and depth of the sound sources.

On that basis the difference between Red Book and Hi Res in any blind test should be like ‘night and day’, and yet: https://www.realhd-audio.com/?p=6993

The author nails it in this paragraph: “As I’ve often stated in these articles, it is the production path that establishes the fidelity of the final master. Things like how a track was recorded, what processing was applied during recording and mixing, and how the tracks were ultimately mastered. If all of these things are done with maximizing fidelity as the primary goal, a great track will result.”

Again, 16-bit/44.1 kHz is completely sufficient in terms of fidelity, as it keeps the quantization noise low enough (-96 dB) and reproduces all the audible frequencies (up to 22.05 kHz).

I should have said that most current music productions do not take full advantage of the higher spatial resolution you get when using a 96 kHz sampling frequency, and it’s debatable whether a rock/pop production would ever exploit it. With classical music, when done properly, you can definitely hear it.

Anecdotally, I remember hearing a keynote from one of the inventors of THX at an IEEE signal-processing conference back in 2000. He really made the point of why a 192 kHz sampling frequency is required if you want to accurately reproduce the gunfire of a laser blaster flying across a cinema theatre.

My quick summary:

Regarding 16 vs 24bit/sample resolution: As a streaming/transport format there will be no audible gain… […]

 

 

Cite?

Cite what??

Cite what??

 

Actual evidence for your claim.