My understanding is that regular trueplay tuning essentially makes some adjustments based on speaker position already. When tuning, all the speakers in your room produce audio at different frequencies while the iOS device listens in, at the main listening area and around the room, to calibrate the audio. I’m no expert in this, but it seems like a better plan to calibrate the audio to where people will be listening rather than at the speaker locations.
Sonos has a version of trueplay tuning that uses the microphones in the Era speakers (and Move, Roam), but it is not relevant for a home theatre setup because the Arc/Beam/Ray would dictate the tuning. It is entirely possible that Sonos is working on a tuning alternative that uses the mics in the Arc and surround speakers, but it would be thought of as an alternative to normal trueplay tuning that is not as good and should only be used when you can’t use an iOS device.
Fair points Danny. I believe the reasoning for the iPhone-based approach was that it was easy to standardise (versus Android and the infinite variety of possible microphones). I certainly wouldn’t suggest that making the current Trueplay experience redundant is the way to go. It was more about minimising subjectivity and variation by centrally automating things within the speakers themselves.
I was just coming at it from the point of view that Sonos speakers by and large now have their own microphones, so rather than having to rely on a user device to calibrate (which could also be somewhat subjective if the waving around isn’t optimal or consistent), the speakers would be more consistent because they should be in fixed positions.
I suspect there are challenges in balancing multiple speakers as a ‘group’ using just the microphones on the speakers themselves. It’s easier to balance one speaker on its own, but folding that into a ‘room’, such as a Home Theater room, might take a bit more thought, and consequently larger amounts of code. Working with a waved-around mic is probably a lot easier, from an engineering perspective.
Well, when it comes to home theater, trueplay does actually have you calibrate at the main listening area, in addition to waving the device around. It takes into account where you will most likely be hearing the audio. It’s been a while since I’ve done tuning with non-Sonos equipment, but I believe those systems also have you place a mic where you will be listening the most. It honestly doesn’t matter much what it sounds like where the speaker is, because that’s not where the listener will be. The best you can do with a mic at the speaker is measure the audio coming back to the speaker, or coming from other speakers, and sort of triangulate what it might sound like in the listening area, then make adjustments accordingly...as I understand it. And that seems to line up with what Sonos and others in the industry state.
Again though, if Sonos were to enable a form of auto trueplay with the Arc involved, it would surely be an inferior alternative to the current trueplay...unless the algorithms have improved to the point where there is little difference.