B&W Formation


https://www.bowerswilkins.com/en-gb

Anyone noticed this announcement? Thoughts?



https://www.cepro.com/article/bowers_wilkins_formation_wireless_multiroom_audio
https://www.soundandvision.com/content/formation-wireless-technology-explained

Much is made of sync accuracy and latency, but I have to confess I have never, in many years, been aware of soundstage drift between Sonos L&R stereo pairs.


So yes, everything is 5 GHz. That makes me think there are going to be issues if you're going to use the speakers at opposite ends of the house with nothing in between. I'm surprised they aren't offering a Boost-like product to bridge the gap(s) and strengthen their network.

The fact that they are planning on licensing their wireless platform could actually mean more competition for Sonos. Right now, it seems like they are only licensing it for use in single-room HT setups, but if/when it is used in multiroom, with more affordable speakers...
At those prices, though, is it really competing with Sonos? Even Bluesound looks very affordable given B&W's current MSRPs.
I thought about that possibility, but if B&W was bought in 2016 with smart speakers in mind, and they know Sonos will litigate to protect its patents, I'd assume they spent these two or three years developing their own mesh system. I'm a little curious to see how well it works.

Sonos has patented not only the technology they use but also a fair amount of stuff that could be used to work around the core patents: something like 630 issued patents and 570 applications.

https://www.sonos.com/en-us/legal/patents

Things will be interesting.
This article gets it right, at least at the bottom

https://www.cnet.com/news/bowers-wilkins-formation-high-end-formation-wireless-system-sonos-sairplay-2/

Creating a new wireless multiroom standard seemingly out of thin air can be fraught with peril. In the face of competition from Sonos and Google, systems such as Qualcomm's AllPlay and LG's Music Flow have fallen by the wayside. Formation is less of a "Sonos-killer" though, as its appeal to the high-end makes it more of a Bluesound or Naim mu-so competitor.
5GHz.

Much is made of sync accuracy and latency, but I have to confess I have never, in many years, been aware of soundstage drift between Sonos L&R stereo pairs.

Speculating wildly, the playback sync mechanisms may differ between in-room and multi-room. It could be that in-room the devices are synchronous or at least plesiochronous. Sonos of course operates over any asynchronous medium, by the exchange of timestamps. Interesting stuff.

Just as a reality check, sound takes about 1ms to travel a foot in air. Move your head slightly off the centreline, and even with two perfectly synchronised sources you'll easily inject hundreds of microseconds of offset between the channels at the listening position. And that's before room acoustics work their mischief.
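To put rough numbers on that, here is a minimal sketch of the geometry, assuming an illustrative layout of a 2 m stereo pair listened to from 3 m away and a speed of sound of ~343 m/s; nothing here is specific to Sonos or Formation.

```python
# Rough estimate of the inter-channel offset introduced by listening
# off-centre to a stereo pair. All geometry is illustrative only.
import math

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C

def channel_offset_us(head_offset_m, spacing_m=2.0, distance_m=3.0):
    """Arrival-time difference (microseconds) between the two speakers for a
    listener displaced head_offset_m sideways from the centreline."""
    left = math.hypot(distance_m, spacing_m / 2 + head_offset_m)
    right = math.hypot(distance_m, spacing_m / 2 - head_offset_m)
    return (left - right) / SPEED_OF_SOUND * 1e6

for offset_cm in (0, 5, 10, 20):
    print(f"{offset_cm:>3} cm off-centre -> {channel_offset_us(offset_cm / 100):6.0f} us offset")
```

On those assumptions, a 10 cm shift off the centreline already introduces an inter-channel offset of roughly 180 µs, and 20 cm nearly 370 µs.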


Head movements while listening will introduce a sort of spatial jitter. Perhaps the audiophile community will coin a term for this.

Using paired speakers, if one plays a mono source and sits quietly with a sandwich and a drink, one can detect a gentle adjustment of the SONOS sync once in a while. However, if one blinks, one will probably miss the adjustment. I'm pathologically sensitive to this sort of phenomenon. In my opinion this is an audiophile rant about a theoretical possibility, rather than a real world problem. If the audiophile wants absolutely perfect theoretical imaging, they might consider using a head clamp. Unfortunately, small head movements are part of the localization process.
I did wonder whether the head clamp would be on offer. Presumably this would be part of the 'audiophile' package which also includes surgical replacement of one's auditory pathways with those of a dog.
I went looking but failed to find a simple program to let you test how different amounts of delay added to a stream sound.

I guess I could try putting two Sonos about 70 feet apart and seeing how they sounded together, but I'd have to go out to the back yard to get the distance.
You can adjust the delay on PLAYBAR/PLAYBASE/BEAM, some TVs, and AVRs. Even inside the house, if you have a 20-foot dimension or more, you can get a feel for what happens. Outside would be ideal because you can easily control the "delay" (distance) and your observation point -- without distracting wall reflections. Assuming the speakers are time aligned, the more distant speaker is "delayed".
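For a feel of the numbers in that experiment, a trivial sketch using ~343 m/s (about 1,125 ft/s), with the distances purely illustrative:

```python
# Acoustic delay of the more distant speaker, using ~343 m/s (about 1,125 ft/s).
SPEED_OF_SOUND_FT_PER_S = 1125.0

def delay_ms(extra_distance_ft):
    """Arrival delay (ms) of the far speaker relative to the near one."""
    return extra_distance_ft / SPEED_OF_SOUND_FT_PER_S * 1000

for feet in (1, 20, 70):
    print(f"{feet:>3} ft farther away -> arrives {delay_ms(feet):5.1f} ms later")
```

So the roughly 70-foot spacing suggested above corresponds to a delay on the order of 60 ms for the far speaker, well into clearly audible echo territory.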
I did wonder whether the head clamp would be on offer. Presumably this would be part of the 'audiophile' package which also includes surgical replacement of one's auditory pathways with those of a dog.
While wildly impractical, this one would at least be based on science, even if distantly, compared to the part of the package that ionises the air for better sound transfer.
And even the dog in the HMV label used to have his head tilted to one side, if memory serves me well.
Well, I like the look of those stand-mount speakers. I'll be interested to listen to them ... just for fun, you understand 🙂 I'm not planning to defect.

If Sonos currently achieves L/R synchronisation to within 1ms, that equates to ~30cm in terms of sound wave travel. Variations within that range might well be detectable, not that I've ever noticed anything with my various stereo pairs.
I think that the speakers, while not the best of B&W for looks, are more credible looking than the Play:5 units. I have always thought the Sonos portfolio misses out on a pair that looks like it has hi-fi cred. Based on pure science it should not matter, but it does, even to people who know it should not.
Out of curiosity, does anyone know how the brain compensates, if at all, for the millisecond differences between sound waves that come from different sources but appear to be related? I've seen documentaries (Brain Games, to be precise) that talk a lot about how the brain will take in data from the environment and adjust it so that it makes more sense, based on your past experience and what creates a more palatable view of the world. So it seems quite possible that if the brain heard the same/similar note coming from 2 different sources, a few ms off, it could compensate by interpreting the sound as occurring at the same time. That's if it's capable of even determining the time differences.
So it seems quite possible that if the brain heard the same/similar note coming from 2 different sources, a few ms off, it could compensate by interpreting the sound as occurring at the same time. That's if it's capable of even determining the time differences.
Interesting thought. It's actually important that the auditory system can detect such timing differences: it's this and volume differences that help create the illusion of instrument/vocal placement in a stereo soundstage.

Interesting thought. It's actually important that the auditory system can detect such timing differences: it's this and volume differences that help create the illusion of instrument/vocal placement in a stereo soundstage.


Right, but say there is a situation where you have 2 devices that make an identical sound, a foot apart, while you are, say, 12 feet away. If the 2 devices made a sound 10 ms apart from each other, the human ear probably wouldn't be able to tell the difference in time between the two sounds. Perhaps it would not even recognize that there are 2 devices, not one, making the sound. As a real-world example, consider 4 violinists playing the same tune in a symphony. If the audience has their eyes closed, would they be able to tell how many violinists there are?

But back to the 2 devices example: what if the 2 devices played identical sound around 100 ms apart? Is there a range where the human ear can recognize that the same sound is arriving at different times, but the brain chooses to interpret it as occurring at the same time, because it effectively sees the difference as irrelevant, not constructive, or just disconcerting/unpleasant information? It could be a matter of compensating for the varying speed of sound in different media. The same sound coming from different locations would be relevant information for sure. And of course, there would be a length of time (in ms) between sounds beyond which it becomes relevant data, necessary for making evaluations and enjoying music.

I suppose it's ultimately irrelevant whether humans don't hear differences in sound due to limitations of the human ear or due to decisions of the brain; the end result is the same. It's a Friday discussion.

As an aside: I am reminded of a free 'concert' I went to as a kid. It was a city-wide event in which lights were displayed off downtown buildings, along with fireworks. Accompanying music was transmitted through a radio station, and those at the event were supposed to listen through their radios. As could be expected, the audio was kind of a mess. Factoring in the FM signal coming to each radio, the radio processing the signal, and then sound travelling to my ear... it was way off from the light show. I could also hear several other concert-goers' radios playing at the same time, obviously not in sync with my radio due to the same 3 factors mentioned already.

Interesting thought. It's actually important that the auditory system can detect such timing differences: it's this and volume differences that help create the illusion of instrument/vocal placement in a stereo soundstage.

Right, but say there is a situation where you have 2 devices that make an identical sound, a foot apart, while you are, say, 12 feet away. If the 2 devices made a sound 10 ms apart from each other, the human ear probably wouldn't be able to tell the difference in time between the two sounds.

This discussion is related to the ability of the auditory system to detect time differences between sounds at the left and right ears -- so-called interaural time differences, or ITDs. For ITDs, the threshold is actually more like 10 microseconds, although it is frequency-dependent. So, yes, the human ear would probably be able to detect the difference in your scenario, and I would argue that the brain has no incentive to mask the time difference, because evolution has developed it as a great way to sense the direction from which a sound is coming. Which takes us back to the stereo soundstage effect.

As you say, off topic, but a good Friday discussion 🙂
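To give a rough sense of what a ~10 µs ITD corresponds to in terms of source direction, here is a small sketch using the classic Woodworth spherical-head approximation; the head radius and the formula are textbook idealisations, not anything measured here.

```python
# Woodworth spherical-head approximation: ITD(theta) ~ (a / c) * (theta + sin(theta)),
# where theta is the source azimuth in radians, a the head radius, c the speed of sound.
import math

HEAD_RADIUS = 0.0875    # m, a commonly assumed average head radius
SPEED_OF_SOUND = 343.0  # m/s

def itd_us(azimuth_deg):
    """Interaural time difference (microseconds) for a source at the given azimuth."""
    theta = math.radians(azimuth_deg)
    return HEAD_RADIUS / SPEED_OF_SOUND * (theta + math.sin(theta)) * 1e6

for deg in (1, 5, 15, 45, 90):
    print(f"{deg:>2} degrees off-centre -> ITD of about {itd_us(deg):4.0f} us")
```

On that approximation, a 10 µs threshold works out to roughly one degree of azimuth, which is in the same ballpark as the commonly quoted minimum audible angle.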
Distance or delay issues also raise a problem with the sources' output arriving in a different phase depending on the frequency. So at one frequency you'd have A+B and at another you'd be hearing A-B.

https://www.jdbsound.com/art/frequency%20wave%20length%20chart%202013.pdf
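A quick sketch of that comb-filtering effect, purely from the geometry: for a given path-length difference the two sources sum in phase at some frequencies and in anti-phase at others. The distances below are just illustrative.

```python
# For a path-length difference d between two identical sources, the signals add
# in phase where d is a whole number of wavelengths and cancel where d is an
# odd number of half-wavelengths (the notches of the comb filter).
SPEED_OF_SOUND = 343.0  # m/s

def first_notches(path_diff_m, count=4):
    """First few cancellation frequencies (Hz) for a given path-length difference."""
    return [(2 * k + 1) * SPEED_OF_SOUND / (2 * path_diff_m) for k in range(count)]

for d in (0.3, 1.0, 3.0):  # path differences in metres, purely illustrative
    notches = ", ".join(f"{f:.0f}" for f in first_notches(d))
    print(f"path difference {d:.1f} m -> notches near {notches} Hz")
```

A 1 m path difference, for instance, puts the first notch at around 170 Hz, with further notches roughly every 343 Hz above it.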
With respect to assigning direction, it is more complicated than time differences alone. If it were simply time, how could one detect that the sound origin is above or behind one's head? See HRTF (Head-Related Transfer Function) for more insight. We use small head movements to help us localize sound.

There are other dimensions. It is somewhat disorienting for the recording's listener if an actual live head was used to host the recording microphones and the host walked around the room: the aural message is that there is movement, but this movement does not correspond to the listener's actual movement.

Even priced right, they look more fit for planet Mars.

More like the garbage bin.


Well, one could argue that the Playbar in this case belongs in a museum.