"The Beginner’s Guide to Hi-Res Audio"

  • 7 December 2021
  • 92 replies
  • 2844 views



Their conclusion was that, based on their test methodology, they could not find any significant preference for hi-res audio over the CD standard, even when using high-end headphones or speaker systems. However, they very correctly noted that “it is very difficult to use negative results to prove the inaudibility of any given phenomenon or process”. The most intriguing part of this article, however, was their final note on high-resolution recordings: “Though our tests failed to substantiate the claimed advantages of high-resolution encoding for two-channel audio, one trend became obvious...throughout our testing: virtually all of the SACD and DVD-A recordings sounded better than most CDs - sometimes much better...Partly because[...]engineers and producers are given the freedom to produce recordings that sound as good as they can make them, without having to compress or equalize the signal to suit lesser systems.”

 

 

How does one resolve the internal contradiction in the quoted excerpt? By attributing the “sounded better” observation to better mastering, which the quote itself says is partly the reason for the better sound; and no one here disputes that better mastering can deliver audibly better sound. But if that is only partly the reason, what is the rest of it? That is left hanging in the air... And where the mastering is the same and sound levels are accurately matched - that is, where the better master is downsampled to CD format and compared against the original version - these differences have not survived in any published blind test.

We can of course wait for the black swan, but I for one am not holding my breath.

To repeat also: all this applies only to 2-channel audio, where all the information needed, to the extent audible, is captured by the bit depth and sampling frequency of the CD format.
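For reference, the CD format fixes those two parameters at 16 bits per sample and 44,100 samples per second per channel; a quick back-of-the-envelope check (my own illustration, not from the thread) reproduces the familiar Red Book data rate:

```python
# Red Book CD audio: 16 bits/sample, 44100 samples/s, 2 channels.
SAMPLE_RATE_HZ = 44100
BITS_PER_SAMPLE = 16
CHANNELS = 2

bit_rate = SAMPLE_RATE_HZ * BITS_PER_SAMPLE * CHANNELS  # bits per second
print(bit_rate)             # 1411200 -> the familiar 1411.2 kbit/s CD rate
print(bit_rate / 8 / 1024)  # ~172.3 KiB of raw PCM per second
```

That 1411.2 kbit/s is the payload rate every hi-res format is being compared against in this discussion.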


Yeah, internet forums seem to attract those flat-earthers who use negative results to prove the non-existence of any given phenomenon. I don’t really bother, especially if the most compelling counter-argument they present is that they have never heard about it, garnished with a hint at the many years of experience they have under their belt.

But for argument’s sake, you may want to read J. Robert Stuart’s paper “Coding for High-Resolution Audio Systems”, published in 2004 in the Journal of the Audio Engineering Society. Everything I have stated about sampling rate and high-frequency content you can more or less find in Chapter 5 of this paper, and in particular section 5.1, Psychoacoustic Data to Support Higher Sampling Rates: “...It has been suggested that perhaps higher sampling rates are preferred because, somehow, the human hearing system will resolve small time differences which might imply a wider bandwidth in a linear system. In considering this it is important to distinguish between perceiving separate events which are very close together in time (implying wide bandwidth and fine monaural temporal resolution) and those events which help build the auditory scene, for which the relative arrival times are either binaural or well separated. In the first case, wider bandwidth is required to discriminate acoustic events that are closer together in time. This seems to be an alternative statement of the problem to determine the maximum bandwidth necessary for audible transparency...Events in time can be discriminated to within very fine limits, and with a resolution very substantially smaller than the sampling period. This point is crucial because provided we treat all channels identically to ensure no skew of directional information, there is no direct relationship between the attainable temporal resolution and the sampling interval.”
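Stuart’s last point - that attainable temporal resolution is not tied to the sampling interval - can be demonstrated numerically. The sketch below is my own illustration (the 1kHz tone, the 10µs delay, and the quadrature-demodulation estimator are arbitrary choices, not from the paper): it samples a tone delayed by 10µs, less than half the ~22.7µs sample period at 44.1kHz, and recovers that delay from the samples alone:

```python
import math

FS = 44100      # sampling rate, Hz; sample period ~22.7 microseconds
F = 1000        # test tone frequency, Hz
TAU = 10e-6     # true delay: 10 microseconds, under half a sample period
N = FS          # one second of samples -> exactly 1000 whole tone cycles

omega = 2 * math.pi * F
# The delayed tone, as an ADC would sample it at 44.1 kHz.
x = [math.sin(omega * (n / FS - TAU)) for n in range(N)]

# Quadrature demodulation: project the samples onto sine/cosine references
# at the tone frequency to recover its phase, and from that the delay.
i = sum(xn * math.sin(omega * n / FS) for n, xn in enumerate(x)) * 2 / N
q = -sum(xn * math.cos(omega * n / FS) for n, xn in enumerate(x)) * 2 / N
tau_est = math.atan2(q, i) / omega

print(f"sample period:   {1e6 / FS:.2f} us")       # 22.68 us
print(f"recovered delay: {tau_est * 1e6:.3f} us")  # 10.000 us
```

The delay comes back to within floating-point error even though it is far smaller than the sampling interval: the timing information lives in the sample values, not in the sample grid.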

So independent of whether you follow the author’s hypotheses and findings or not, it is a well-known paper from an AES Fellow, so you cannot really say that you have never heard about this stuff.

Following this paper, in 2007 there was an article from Meyer and Moran, also published in the AES Journal, with the objective of finding out whether there are any audible gains from hi-res audio playback by doing some extensive, formalized testing. Their conclusion was that, based on their test methodology, they could not find any significant preference for hi-res audio over the CD standard, even when using high-end headphones or speaker systems. However, they very correctly noted that “it is very difficult to use negative results to prove the inaudibility of any given phenomenon or process”. The most intriguing part of this article, however, was their final note on high-resolution recordings: “Though our tests failed to substantiate the claimed advantages of high-resolution encoding for two-channel audio, one trend became obvious...throughout our testing: virtually all of the SACD and DVD-A recordings sounded better than most CDs - sometimes much better...Partly because[...]engineers and producers are given the freedom to produce recordings that sound as good as they can make them, without having to compress or equalize the signal to suit lesser systems.”

And here we go. I truly believe there are advances to be made in capturing and reproducing more accurately what our ears actually perceive in a concert hall. I am a big fan of innovation. And the proliferation of hi-res audio formats we are witnessing right now is certainly one way to inspire more innovation in this field. Even if we are not (yet) experiencing it in the UltraHD tracks we get to listen to today.

 

 

it's a product of small primes (2*2*3*3*5*5*7*7 = 44100), which makes calculations easier

Who would have thought that?! Interesting.

A small digression out of curiosity - why was an “odd” number - 44100 - selected in the first place? If 40000 needed a margin of safety, why not 48000? Or even 44000?

 

Several reasons.  It was the Sony standard for PCM; it's a product of small primes (2*2*3*3*5*5*7*7 = 44100), which makes calculations easier; and one of the things the CD consortium insisted on was that Beethoven's 9th Symphony would fit on one disc (but this was more related to the debate on the size of the disc).
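That factorisation is easy to verify, and (as I understand the history, so treat the video numbers below as my own addition) it is also why 44100 in particular was practical: early PCM adaptors stored digital audio on video tape, and 44100 divides evenly into the line/field structure of both NTSC and PAL video:

```python
# Prime factorisation of 44100: should come out as 2^2 * 3^2 * 5^2 * 7^2.
def factorize(n):
    factors, p = {}, 2
    while p * p <= n:
        while n % p == 0:
            factors[p] = factors.get(p, 0) + 1
            n //= p
        p += 1
    if n > 1:
        factors[n] = factors.get(n, 0) + 1
    return factors

print(factorize(44100))  # {2: 2, 3: 2, 5: 2, 7: 2}
# The same number fits both video standards used by early PCM adaptors:
print(245 * 60 * 3)      # NTSC: 245 lines x 60 fields x 3 samples = 44100
print(294 * 50 * 3)      # PAL:  294 lines x 50 fields x 3 samples = 44100
```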


This is the way it was always going to turn out.  The conversation continues until you get caught in a trap.  There was never going to be another outcome.

 

What trap is that?  The poster contradicted their own earlier posts and I asked for an explanation.  Any trap the poster fell into was of their own making.


Ahh, I see, good catch!

I think most audio and DSP engineers would agree with my statement that 44.1kHz is not enough to fully capture the audibly relevant phase properties of a complex music source, such as an orchestra in a concert hall. There are complex dynamic patterns of phase variation, and thus interaural phase differences, created by musicians moving their instruments as they perform, which are lost in a signal sampled at 44.1kHz. This I would call an accepted fact.

 

 

Please cite where this is proven as “accepted fact”.  Just stating it does not make it so.  Quite frankly, I’ve been around this stuff for a long time and I’ve never heard an inkling about these supposed phase differences being lost at 44.1 kHz.  

 

In a first but not necessarily sufficient step one could increase the sampling rate to capture more of this information. However, psychoacoustic experiments indicate, for example, that the perception of phase is non-linear across the spectrum. Also, these effects seem to be time-variant and dependent on the source. So increasing the resolution of the phase evenly across the spectrum by increasing the sampling frequency alone might not be sufficient to deliver the desired result. This is something to be investigated.

 

So your definitive statements in your first post were mere bluster, thinking we’d accept your post at face value?  Sorry, this ain’t the forum for that type of BS.

Shame on me :)

I basically agree with the statement above. And while it is fairly obvious (at least to me) that there is no audible benefit from increasing the bit/sample resolution from 16 to 24, I don’t think there is a similar conclusion on the increase in sampling rate yet. For 2D stereo recordings and stationary sound sources there are some studies out there suggesting that there is no audible benefit from increasing the sampling rate above 48kHz. This is where we might just stop the discussion with a simple “16bit/44.1kHz is good enough, period.”

OTOH the psychoacoustic properties around the detection of the minimum audible angle and depth of a sound source from low to high frequencies are still very much under investigation and not entirely understood. All I was stating is that when the CD was developed, the focus was primarily on accurate reconstruction of the amplitude spectrum, while the phase spectrum was of lower importance.

 

Cite for the following definitive claim in your original post, please:

While it's true that the human ear (and brain) cannot hear the amplitude of frequencies above, say, 18kHz our two ears can extremely well detect phase differences between frequencies that are much higher! So while we cannot hear those frequencies as tones, we can detect the tiny differences in runtime which it takes those inaudible frequencies to arrive at the left and right ear respectively. In other words, our spatial location capabilities are of much higher resolution than our frequency hearing capabilities. Btw, this effect is heavily used by 3D sound systems like Dolby Atmos or THX.

 

Because in your above quoted post, you seem to say the claim is “still very much under investigation and not entirely understood.”  So which is it, a definitive fact or something to be investigated?


BTW, THX is a quality standard, not a codec like Atmos or DTS.

Cite what??

 

Actual evidence for your claim.  

 

If you are referring to my last statement about the audible difference between currently available HD and UltraHD tracks, that is more a conclusion, based on those fundamentals of digital signal processing which I tried to briefly summarize before, rather than a claim.

 

You’ve clearly done enough research on the topic to know that what you’ve stated is far from accepted fact. You had to know that if you post a theory like this on a forum with others knowledgeable on the subject, it’s not going to be blindly accepted.


Your "fundamentals of digital signal processing" are gibberish.  You made some claims that are not backed up by any mathematical or practical evidence.  All evidence points to there being no benefit of either 24 bits or sample rates over 48 kHz in the playback of digital audio.



 



 

How about the claim that “digital audio with increased sampling rates of 96kHz or even 192kHz would indeed provide a very noticeable benefit as it allows for more precise positioning and depth of the sound sources.”


He really made the point why a 192kHz sampling frequency is required if you want to accurately reproduce the gunfire of a laser blaster flying across a cinema theatre.

But is it necessary to render an improvement, audible in a controlled blind listening test, to a 2-channel music recording?

 

This is why digital audio with increased sampling rates of 96kHz or even 192kHz would indeed provide a very noticeable benefit as it allows for more precise positioning and depth of the sound sources.

On that basis the difference between Red Book and Hi Res in any blind test should be like ‘night and day’, and yet: https://www.realhd-audio.com/?p=6993

The author nails it in this paragraph: “As I’ve often stated in these articles, it is the production path that establishes the fidelity of the final master. Things like how a track was recorded, what processing was applied during recording and mixing, and how the tracks were ultimately mastered. If all of these things are done with maximizing fidelity as the primary goal, a great track will result.”

Again, 16bit/44.1kHz is completely sufficient in terms of fidelity, as it keeps the quantization noise low enough (-96dB) and reproduces all the audible frequencies (up to 22.05kHz).

I should have said that most current music productions do not take full advantage of the higher spatial resolution you get when using a 96kHz sampling frequency, and it’s debatable whether a rock/pop production would ever exploit it. With classical music, when done properly, you can definitely hear it.

You should read the methodology in detail. The test samples auditioned by several hundred individuals compared the full-resolution originals with Red Book equivalents. The conclusion speaks for itself: “Hi-Res Audio or HD-Audio provides no perceptible fidelity improvement over a standard-resolution CD or file.”

The author of that study originally set up a recording label specifically for the production of high-resolution audio, from recording to delivery. He of all people would surely have wished that Hi Res was detectable. According to his findings it wasn’t.


 

My quick summary:

Regarding 16 vs 24bit/sample resolution: As a streaming/transport format there will be no audible gain as long as studios are ultimately compressing the dynamic range of their master recording to fit into the 96dB provided by a 16bit representation of the signal. Also, as has been said before, I doubt anyone (even with golden ears) can hear the difference, as 96dB SNR sounds "fantastic" while 144dB (which is what you theoretically get from 24bit) is just overkill. However, as an internal format in studio as well as in listening equipment, 24bit/sample resolution makes total sense for doing proper volume control and EQ in the digital domain. But this is happening anyway, even if your transport format is "just" 16bit/sample.
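The 96dB and 144dB figures above follow from the rule of thumb that each bit of resolution buys about 6dB of dynamic range (20·log10(2) ≈ 6.02 dB per bit); a two-line check, my own illustration:

```python
import math

def dynamic_range_db(bits):
    """Theoretical dynamic range of an n-bit quantizer, ~6.02 dB per bit."""
    return 20 * math.log10(2 ** bits)

print(f"16 bit: {dynamic_range_db(16):.1f} dB")  # ~96.3 dB
print(f"24 bit: {dynamic_range_db(24):.1f} dB")  # ~144.5 dB
```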

Regarding sampling rate, the discussion is a little different though:

There seems to be common consensus that a sampling frequency of 44.1kHz is sufficient to accurately reproduce the audible frequency spectrum. In fact, according to the Nyquist theorem this allows for reproducing frequencies up to 22.05kHz, and only young children can hear frequencies above 20kHz, while the hearing of the average adult Joe is capped at 18 or even just 16kHz. So, all good here? Well, not quite...
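To make the Nyquist limit concrete: content above fs/2 does not vanish silently, it folds back into the audible band. In this sketch (my own illustration, not from the thread), a 30kHz tone sampled at 44.1kHz produces sample values indistinguishable from a phase-inverted 14.1kHz tone, which is why an anti-aliasing filter must remove such content before sampling:

```python
import math

FS = 44100  # CD sampling rate; Nyquist limit is FS/2 = 22050 Hz
t = [n / FS for n in range(1000)]

tone_30k = [math.sin(2 * math.pi * 30000 * tn) for tn in t]
# 30 kHz is above Nyquist; it folds down to |30000 - 44100| = 14100 Hz
# (with inverted phase, since 30000 - 44100 is negative).
alias_14k1 = [-math.sin(2 * math.pi * 14100 * tn) for tn in t]

max_diff = max(abs(a - b) for a, b in zip(tone_30k, alias_14k1))
print(f"max sample difference: {max_diff:.2e}")  # rounding noise only
```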

Ever since the CD appeared in the 80's many audiophiles keep claiming that a good analog record still offers more accurate reproduction of the sound stage and more precise positioning and depth of the instruments. They are right!

This is because there is an (incorrect) notion that equates the frequency spectrum with only the amplitude spectrum and neglects the corresponding phase spectrum. While it's true that the human ear (and brain) cannot hear the amplitude of frequencies above, say, 18kHz, our two ears can extremely well detect phase differences between frequencies that are much higher! So while we cannot hear those frequencies as tones, we can detect the tiny differences in runtime which it takes those inaudible frequencies to arrive at the left and right ear respectively. In other words, our spatial location capabilities are of much higher resolution than our frequency hearing capabilities. Btw, this effect is heavily used by 3D sound systems like Dolby Atmos or THX.

This is why digital audio with increased sampling rates of 96kHz or even 192kHz would indeed provide a very noticeable benefit as it allows for more precise positioning and depth of the sound sources.

I say "would" because every track from Amazon labeled "Ultra HD" which I have seen (or been listening to) so far is just 24bit/44.1kHz. So it gives me the "useless" 24bit/sample resolution but falls short of providing the higher sampling rates which could really make an audible difference.

 

As long as you get HD it’s “fantastic”; there is currently no audible difference to “Ultra HD”. My hope is they provide more and more content at higher sampling frequencies in the future. Then it will make a difference!
    

 

 

Cite?


Anecdotally, I remember hearing a keynote from one of the inventors of THX at an IEEE signal processing conference back in 2000. He really made the point why a 192kHz sampling frequency is required if you want to accurately reproduce the gunfire of a laser blaster flying across a cinema theatre.



 

Why can’t the music industry standardize these designations and require compliance? Not sure who would do it or how it would be done; I just want transparency and honesty, not marketing speak.

 

An industry, or more accurately just an industry-related group, can create a standard, but they can never require compliance. All they can really do is market and educate the public on what the standard is and why it’s important, and make sure the standard is followed strictly by products that apply and claim to meet it. If the public doesn’t know or care about the standard, and it’s loosely enforced, then it’s pointless. All that takes a lot of money, and generally speaking, if it doesn’t help increase sales, why bother. And I don’t think audio quality is the primary means by which the different music services compete, so they don’t have a big interest in standards.

That said, it seems standards pretty much existed and worked in the days of physical media. Once everything started going digital and ‘customers’ pirated music in whatever format and quality they wanted, things got all shot to hell. Even when you could start buying music digitally, the industry didn’t want you to know that the quality was worse than CD.

 

 


In the US we have had some regulation of the audio amplifier power rating since 1974. This is a look at the rule and the problems.

[quote]Many manufacturers have taken advantage of this vacuum by publishing a confusing array of unrealistic power claims. Some go so far as to slap a sticker on the front panel with an inflated power figure that's based on only one-channel driven at 6-ohms and 10% THD. [/quote]

https://www.audioholics.com/amplifier-reviews/ftc-amplifier-rule-help-protect-home-audio-consumers-today

 

If the industry won’t establish and enforce standards, then we are reduced to getting government involved, which is rarely the best solution.