Answered

Auto volume levelling... still an issue???

Surely the levelling of volume is achievable? Yes, it would mean analysing the tracks and calculating how much louder or quieter to play them than the previous track(s), but this shouldn’t be asking too much given the tech in the Sonos gear/software/infrastructure?

It’s my major gripe with the system: if another system came out claiming to deliver on that one thing, I’d consider selling all 12 of my Sonos bits and swapping.

Surely it’s on a roadmap somewhere?


Best answer by ratty 12 October 2021, 12:24


Volume normalisation on music from where? There is some support for ReplayGain and iTunes tagging on local library music. There is none with online services so far as I’m aware, presumably because the services don't send the metadata.

Unless there is an industry-wide standard, this is not practical. In order to accomplish this, all available tracks would need to be analyzed and processed. If Sonos attempted to accomplish this for a given listener, all tracks likely to be played in a listening session would need to be analyzed and processed -- prior to playing the first track. Also, the “best” strategy would need to consider the current listening environment. “Best” in a quiet room would be different from “best” in a car, on the beach, or during a party.

The real solution, in my opinion, would be to incorporate an industry standard compression function in every player. This would be a given, just as bass and treble controls are a given. There would be a learning curve for everyone, and there would be initial cries of “too complicated”, but we have gladly learned to use multiple complicated functions on our phones and pads. If you want “complicated”, consider the complexity of taking a picture, then strapping it to an email or document, along with some audio and video. An audio compression function is much easier to implement and use. It’s all a question of motivation and perceived value.
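To make the idea concrete, here is a minimal sketch in Python of the static gain curve such a compression function applies. It is purely illustrative: the -20 dBFS threshold and 4:1 ratio are made-up parameters, not anything Sonos ships.

def compressed_level_db(input_db: float, threshold_db: float = -20.0,
                        ratio: float = 4.0) -> float:
    """Output level in dBFS for a given input level (simple downward compressor)."""
    if input_db <= threshold_db:
        return input_db                              # below threshold: unchanged
    # above threshold, only 1/ratio of the excess passes through
    return threshold_db + (input_db - threshold_db) / ratio

# Example: a -8 dBFS passage through this 4:1 curve comes out at -17 dBFS,
# i.e. 9 dB of gain reduction, while quiet passages are left alone.
print(compressed_level_db(-8.0))   # -17.0

The user-facing control could then be a single “amount” setting, sitting alongside bass and treble.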

I hear you, but with programs like Traktor the next song is analysed on demand, not all up front, and that includes the iPhone version. It takes about 10 seconds per track, it’s done while the current one is playing, and a level is then stored against the track so it doesn’t need doing again. Yes, ‘best’ is questionable, but basic volume consistency is what we are looking for, not a foolproof system, just something that means you don’t have to change the volume almost every song in some cases.

Compression might work also; I just think that if Apple Music can do it, and almost all DJ packages out there can, why can’t Sonos? More than the aforementioned, Sonos should be a platform that delivers music without you having to keep messing with it.
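For what it’s worth, the Traktor-style approach described above can be sketched very simply in Python. This is only illustrative: real analysers use a loudness model rather than raw RMS, and the -14 dB target here is invented. The idea is just to analyse the next queued track in the background while the current one plays, and cache one gain value per track so the work is never repeated.

import threading
import numpy as np

TARGET_RMS_DB = -14.0        # illustrative target level, not a standard
gain_cache = {}              # track id -> stored gain adjustment in dB

def analyse_track(track_id, samples):
    """Measure average (RMS) loudness of the decoded samples and store a gain offset."""
    rms = np.sqrt(np.mean(np.asarray(samples, dtype=np.float64) ** 2))
    rms_db = 20 * np.log10(max(rms, 1e-9))          # avoid log(0) on silence
    gain_cache[track_id] = TARGET_RMS_DB - rms_db   # done once, reused on later plays

def analyse_next_in_background(track_id, samples):
    """Kick off analysis of the next track while the current one keeps playing."""
    worker = threading.Thread(target=analyse_track, args=(track_id, samples), daemon=True)
    worker.start()
    return worker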

Volume normalisation on music from where? There is some support for ReplayGain and iTunes tagging on local library music. There is none with online services so far as I’m aware, presumably because the services don't send the metadata.

Good points. I’m asking why the songs cannot be analysed while the current one is playing, like any decent DJ software does.

I get that each Sonos device is a processor in its own right, so that brings complications, i.e. the audio isn’t streamed to the devices from one point of processing. But surely this is not rocket science.

I’m asking why the songs cannot be analysed while the current one is playing, like any decent DJ software does.

 

What happens if I skip to another track? This could be a skip to another album.

There is some support for ReplayGain and iTunes tagging on local library music. There is none with online services so far as I’m aware, presumably because the services don't send the metadata.

Spotify offers this for in-app play on the device hosting the app, but I am not sure how well that works; when casting to Sonos or Echo from Spotify, it does not work.

I agree with the OP about its potential value in enhancing the listening experience; that value is far higher than all the Hi-Res/Lossless stuff, which is the red herring that gets pursued instead.

 

Compression might work also; I just think that if Apple Music can do it

Apple Music does not do this, as far as I remember. Unless you are referring to how it offers lossy 256 kbps music, which is a completely different thing.

 

Compression might work also; I just think that if Apple Music can do it

Apple Music does not do this, as far as I remember. Unless you are referring to how it offers lossy 256 kbps music, which is a completely different thing.

I meant iTunes rather than Apple Music; iTunes analyses the tracks and assigns them a volume level in an attempt to do this.

I’m asking why the songs cannot be analysed while the current one is playing, like any decent DJ software does.

 

What happens if I skip to another track? This could be a skip to another album.

Good point, maybe analysis on load and a subtle volume change after 10-15 seconds?
I get it’s difficult, but not impossible… I hope

maybe analysis on load and a subtle volume change after 10-15 seconds?

Yuk. Not an experience I’d enjoy at all.

 

I get it’s difficult, but not impossible… I hope

A complete re-architecting of the audio pipeline, including a parallel thread to look ahead at the entire track while the main thread plays it out in real time? Many of the players probably wouldn’t have the CPU and/or memory to cope. In any case it would be a serious upheaval.

Besides, analysing every track, every time it’s played, would be hugely inefficient. Much better to receive, and respect, gain adjustment tags. Sonos does this to a degree on local music: the REPLAYGAIN_TRACK_GAIN tag has some influence. As I said at the outset, I don’t believe any of the services actually send gain metadata.
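For anyone curious what “respecting the tag” amounts to in practice, here is a small illustration in Python using the mutagen library on a local FLAC file. This is just what one might do on a desktop, not what Sonos runs; whether a file actually carries the REPLAYGAIN_TRACK_GAIN Vorbis comment depends on the tagger used.

from mutagen.flac import FLAC

def replaygain_factor(path):
    """Linear gain multiplier from the file's ReplayGain track tag (1.0 if untagged)."""
    tags = FLAC(path).tags
    values = tags.get("replaygain_track_gain") if tags else None   # e.g. ['-4.20 dB']
    if not values:
        return 1.0
    gain_db = float(values[0].split()[0])       # strip the ' dB' suffix
    return 10 ** (gain_db / 20)                  # dB -> linear amplitude

# A -4.2 dB tag gives a factor of roughly 0.617 applied at playback.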

Perhaps worth noting that Sonos Move and Roam already have auto Trueplay tuning to do while tracks are playing. Adding processing of the current and next tracks to smooth out the volume would be an additional complication on top of that.

Also, it’s not that uncommon for tracks to vary greatly in volume throughout, intentionally so on the artist’s part. Not the best example out there, but Bohemian Rhapsody comes to mind. I’m not sure how you could take a 10-15 segment of that track and determine how to properly set the gain to match the next track in the queue. And of course, that only applies if there is a queue, which cannot exist when the source is aux input or TV.

Not the best example out there, but Bohemian Rhapsody comes to mind.  I’m not sure how you could take a 10-15 segment of that track and determine how to properly set the gain to match the next track in the queue. 

The same way it’s done today, I guess, by integrating the average loudness across the entire track. According to my records Foobar2k produced a ReplayGain adjustment of -4.2 dB (against the standard 89 dB target loudness) for the version on ‘Greatest Hits I’.
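The arithmetic behind a figure like that is simple: the stored adjustment is just the target loudness minus the measured loudness, so -4.2 dB implies a measured value of about 93.2 dB (that figure is back-calculated here for illustration, not taken from Foobar2k’s actual analysis).

target_db = 89.0                           # standard ReplayGain reference
measured_db = 93.2                         # implied by the stored -4.2 dB
adjustment_db = target_db - measured_db    # -4.2 dB, written to the tag
linear_gain = 10 ** (adjustment_db / 20)   # ~0.617, applied at playback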

Volume normalisation remains something of a forlorn subject as far as Sonos is concerned. Hidden away, no longer admitted to (the FAQ article has long since disappeared), and only partial in its implementation.

Given where we are, with the diversity of sources, buzz’s suggestion of a dynamic range compression option makes a lot more sense. I’m sure there’s DSP expertise in Sonos to implement it; after all there’s already soft-knee limiting being applied on the Port/ZPx0 digital outs in Variable mode.

Not the best example out there, but Bohemian Rhapsody comes to mind.  I’m not sure how you could take a 10-15 segment of that track and determine how to properly set the gain to match the next track in the queue. 

The same way it’s done today, I guess, by integrating the average loudness across the entire track. According to my records Foobar2k produced a ReplayGain adjustment of -4.2 dB (against the standard 89 dB target loudness) for the version on ‘Greatest Hits I’.

 

I meant to say 10-15 seconds of the track. I apologize. I agree that you could determine a gain adjustment using the entire track.

Not the best example out there, but Bohemian Rhapsody comes to mind.  I’m not sure how you could take a 10-15 segment of that track and determine how to properly set the gain to match the next track in the queue. 

The same way it’s done today, I guess, by integrating the average loudness across the entire track. According to my records Foobar2k produced a ReplayGain adjustment of -4.2 dB (against the standard 89 dB target loudness) for the version on ‘Greatest Hits I’.

 

I meant to say 10-15 seconds of the track.  I apologize.  I agree that you could determine a gain adjustment using the entire track.

Ah, right. That’s more like a dynamic range compressor then, with attack and release times.
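To spell out what attack and release times add, here is a sketch in Python of the envelope follower at the heart of such a compressor. The 5 ms / 200 ms constants are ballpark values picked for illustration, not anything from Sonos: the point is that the level estimate rises quickly on peaks and falls slowly afterwards, so the gain ramps rather than jumps.

import math

def smoothing_coeff(time_ms, sample_rate):
    """One-pole smoothing coefficient for a given time constant."""
    return math.exp(-1.0 / (time_ms * 0.001 * sample_rate))

def envelope_follower(samples, sample_rate=44100.0, attack_ms=5.0, release_ms=200.0):
    """Yield a smoothed level estimate per sample; the compressor's gain follows this."""
    rise = smoothing_coeff(attack_ms, sample_rate)
    fall = smoothing_coeff(release_ms, sample_rate)
    env = 0.0
    for x in samples:
        level = abs(x)
        coeff = rise if level > env else fall   # react quickly to peaks, recover slowly
        env = coeff * env + (1.0 - coeff) * level
        yield env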