Ability to play 24-bit/96kHz files (like the competition: Slim Devices Transporter)
Right, but one important difference.
Quote from above:
A price can be justified by a rational buyer under the belief that another party is willing to pay an even higher price.[2][3] Or one may rationally have the expectation that the item can be resold to a "greater fool" later.[4]
Unquote
Still a rational reason for buying.
But no one buys silly priced audio gear for this reason!
By the way, sampling at 40kHz does actually capture frequency information above 20kHz as well. The problem is that those frequencies aren't usable, because of a form of distortion called "aliasing" that occurs whenever content above half the sample rate reaches the converter (and, once it has happened, it cannot be undone).
Aliasing is complex and not very intuitive, but you basically get a mirrored version of the original spectrum extending downwards from the sample frequency. When you hit the halfway point (e.g. 20kHz at a 40kHz sample rate), they start to overlap.
http://www.nathaneadam.com/OLD_SITE/4200/Images/4200/alias_spectrum.jpg
By the way, the site I grabbed this image from looks to have a useful discussion of how sampling works:
http://www.nathaneadam.com/OLD_SITE/4200/Study_Notes/3-3_Conversion_Sampling.html
Note how there is no stepped waveform. The sampling is at discrete points. Even then, be careful how much you analyse this sort of representation.
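If you want to see the mirroring numerically, here is a small sketch (my own illustration, not from the linked page) showing that a 25kHz tone sampled at 40kHz produces exactly the same samples as a 15kHz tone, i.e. 40kHz minus 25kHz:

```python
import numpy as np

# Sketch only: a tone above the 20kHz halfway point "folds back" below it.
fs = 40_000                      # sample rate in Hz
n = np.arange(64)                # a handful of sample indices
t = n / fs

above_nyquist = np.cos(2 * np.pi * 25_000 * t)   # 25kHz, above half the sample rate
mirrored      = np.cos(2 * np.pi * 15_000 * t)   # 15kHz = 40kHz - 25kHz

# Once sampled, the two are numerically identical: the 25kHz content has
# become a 15kHz "alias" and nothing downstream can tell them apart.
print(np.allclose(above_nyquist, mirrored))      # True
```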
Cheers,
Keith
As noted, the scientist gave up in disgust and the engineer was soon close enough to engage with the girl.
The salesman spent all his time trying to negotiate for a greater reduction of the distance on each cycle, so never even started.
Again the perspective of the Science student.
The tale as told by the Humanities student was probably from the girl's perspective, and related how she dismissed both of the boffins, as the Salesman knew how to listen, had a bigger pay cheque, and better personal hygiene. 😃
By the way, the site I grabbed this image from looks to have a useful discussion of how sampling works:
http://www.nathaneadam.com/OLD_SITE/4200/Study_Notes/3-3_Conversion_Sampling.html
Interesting, thanks. It also covers how the aliasing you refer to is handled by a filter.
Kumar,
While there may be some secondary, practical considerations for picking a higher sampling rate, once the underlying theory is understood, some simple math is all that is required to pick the minimum sample rate. After decades of human studies, there is no compelling evidence demonstrating that system response beyond 20 kHz is required.
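As a sketch of that simple math (my own numbers, using the 20 kHz audibility limit mentioned above):

```python
# Nyquist: the sample rate must exceed twice the highest frequency to be kept.
f_max = 20_000                 # highest audible frequency of interest, in Hz
minimum_rate = 2 * f_max       # 40,000 samples per second
print(minimum_rate)            # CD's 44,100 leaves a little room for the anti-alias filter
```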
The number of bits is similar to the number of decimal places, and the requirement depends on the application. For example, it might make someone feel warm and fuzzy to know that the inch marks on that child's ruler in the book bag are 1.000000 inches apart, but this sort of accuracy is not required for any child's application, and preserving it would require us to track the temperature of the ruler and of the object being measured, and to worry about ruler damage as it wrestles with other items in the bag.
With respect to the number of bits required for our audio application, we need to consider the characteristics of the human auditory system in addition to any practical hardware points. For cost and convenience of computer storage and manipulation of data, it would be convenient to limit the bit depth to 16 bits. Earlier, when the CD format was laid out, 18- or 20-bit samples would have increased the cost significantly and limited the playing time to the point where format success in the mass marketplace was unlikely. We would have needed to wait for a Blu-ray-like technology to develop.
But, is 16 bits enough? We needed well-managed trials with human subjects to make this determination. After these trials it was clear that 16 bits was enough. Unfortunately, due to implementation issues caused by inexperienced analog audio designers taking their first steps in the digital domain, or digital engineers taking their first stab at analog design, there were early issues with CD audio. Some of those early CD players were rather nasty sounding. And one could measure some really ugly things that were never discussed by reviewers, regardless of their audiophile pedigree.
The audiophiles claimed that these difficulties were caused by "digital" when, in fact, they were caused by misunderstanding of the math and unfortunate circuit board layout. At this point any recent graduate should understand the math and, while circuit board layout still has an element of "art", if one follows well established rules, the results are generally satisfactory and one can verify the results after a few careful measurements. There are plenty of consultants who can fix a board layout if necessary.
In my opinion, if the audiophiles want to make a valid claim that 16 bits is inadequate, they need to demonstrate this with some well-controlled studies. To date, these studies have not surfaced. If the difference between 16 bits and something higher is so obvious, this should be very easy to demonstrate.
Unlike some of the audiophiles, science welcomes a challenge. This is the heart of science. Witness the current discussions about "the big bang".
And that the sample size determines the dynamic range that is captured.
Correct, and the dynamic range is the difference between the peak (or "0 dBFS") and the noise floor.
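A rough sketch of how the word length maps onto that range (my own arithmetic, using the simple "roughly 6 dB per bit" rule of thumb rather than a full quantisation-noise treatment):

```python
import math

bits = 16
# Each extra bit doubles the number of levels, adding about 6 dB of range.
dynamic_range_db = 20 * math.log10(2 ** bits)
print(round(dynamic_range_db, 1))   # ~96.3 dB between full scale and the quantisation floor
```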
Note that, regardless of medium, all commercially released material has had its dynamic range artificially reduced, or "compressed", simply because otherwise the lowest-level sounds wouldn't be audible under most listening conditions. This happens even with hi-res material.
The difference between a well mastered track and a badly mastered one is often down to how much compression is used, and how well it has been applied. Highly compressed tracks contain more energy and, therefore, sound louder but they aren't as nice to listen to in many ways as they don't have much "light and shade" variation. Obviously if you are a thrash metal band, then a "full-on" sound might be what you are after, but it doesn't suit classical or jazz where there are passages which are meant to be significantly quieter.
But dynamic range compression is pretty much always used on every recording, even if its use is subtle. In practice, this means even the most dynamic recordings have their dynamic range limited to under 70 dB, with most actually having a dynamic range of less than 50 dB.
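To make the idea concrete, here is a toy downward compressor (purely my own illustration, not a mastering tool): passage levels above a threshold are pulled down by a ratio, which narrows the gap between loud and quiet:

```python
import numpy as np

def compress_db(level_db, threshold_db=-20.0, ratio=4.0):
    # Reduce anything above the threshold by the given ratio (downward compression).
    over = np.maximum(level_db - threshold_db, 0.0)
    return level_db - over * (1.0 - 1.0 / ratio)

passages = np.array([-40.0, -25.0, -10.0, -3.0])   # quiet to loud passages, in dBFS
print(compress_db(passages))   # loud passages pulled down; the overall range shrinks
```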
The standard resolution formats have an actual dynamic range of 96 dB, so they are more than capable of capturing this range. With dithering the effective dynamic range is nearer 120 dB.
And, again, this is a slightly counter-intuitive concept. Looking at oscilloscope displays and "jaggies" will mislead you. The dynamic range scale is continuous, without any "gaps".
If you think in terms of "resolution" being "frequency response" and "dynamic range" then you won't go far wrong.
Cheers,
Keith
I will add that some of the early problems associated with "digital" were related to the recording studio technicians' lack of familiarity with, and understanding of, it as a medium.
There are recording practices that have evolved as "best practice" over the decades of using analogue tape which absolutely should not be applied when using digital recording.
As I said before, these "traditions" have been handed down and passed on within the industry, and it took quite a few years for it to be widely recognised in professional circles that the old analogue practices didn't work for digital.
But even today I see bad advice, based on old tape recording practices, published in articles and books.
Cheers,
Keith
The audiophiles claimed that these difficulties were caused by "digital" when, in fact, they were caused by misunderstanding of the math and unfortunate circuit board layout.
Unlike some of the audiophiles, science welcomes a challenge. This is the heart of science. Witness the current discussions about "the big bang".
Understood. It took some time for the implementation engineering to overcome the teething troubles. But my feeling is that these solutions are now widely known and have filtered down to budget digital systems.
As to science, it only progresses by moving from one hypothesis to the next better one, replacing the one shown to be imperfect by experimental observation. Otherwise we would still be with the Greeks of antiquity. Even non-engineers understand this!
If you think in terms of "resolution" being "frequency response" and "dynamic range" then you won't go far wrong.
I remember reading that good music is heard in the silences between the notes...
Understood about the resolution concept.
There are recording practices that have evolved as "best practice" over the decades of using analogue tape which absolutely should not be applied when using digital recording.
What this means is that we are still at the mercy of the mastering guys while the other end of the digital audio chain is now a solved problem.
What this means is that we are still at the mercy of the mastering guys while the other end of the digital audio chain is now a solved problem.
For professional studio recordings, I would say that is the case.
Cheers,
Keith
There are recording practices that have evolved as "best practice" over the decades of using analogue tape which absolutely should not be applied when using digital recording.
Yes, since there is some high frequency roll-off in analog tape and disk cutting, and bass difficulties inherent in the record cutting process, the master tapes passed on to the pressing plants were deliberately equalized with a "box", or by a clever choice of microphone or microphone placement during the recording session. This was all considered "best practice" at the time. Early transfers to CD were over-bright because pressing plants were being "purist" or were bound by contracts into transferring their master tape directly to the CD master. The CD pressings did not have the high frequency losses. Some thoughtful, gentle equalization would have helped considerably. Many early CD pressers also refused to use any dithering or pre-emphasis.
Sometimes a more appropriate mix from the original multi-track session tape can be helpful. But the best recordings are planned from the beginning, with every aspect being carefully handled along the way.
In the days of HDCD, many of these releases sounded better than the same release using classic methods -- even when played on a non-HDCD player. This was because of the more careful processing by the studio's A-Team.
Even more basic than that, there was the old analogue tape practice of recording "slightly hot", with the VU meters peaking into overload, in order to maximise the available dynamic range of the medium. Analogue tape has a natural compression characteristic which handles this, and it can actually sound quite nice.
But if you overload an analogue to digital converter, it is a brick wall. You clip the peaks off your signal which sounds awful. And, in any case, digital media has ample dynamic range capability and no generational loss to worry about, so it's not necessary.
The equivalent practice for digital is to record so that your highest peaks are well down from 0 dBFS, with -6 dBFS or below being a good maximum to aim for.
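A small sketch of both points (illustrative values only, not from the post): drive the converter past full scale and the tops are simply chopped off, whereas keeping peaks around -6 dBFS leaves a safety margin:

```python
import numpy as np

def peak_dbfs(x):
    # Peak level relative to digital full scale (0 dBFS = largest representable sample).
    peak = np.max(np.abs(x))
    return 20 * np.log10(peak) if peak > 0 else float("-inf")

t = np.linspace(0.0, 1.0, 48_000)
hot_take  = np.clip(1.4 * np.sin(2 * np.pi * 440 * t), -1.0, 1.0)  # "recording hot": flat-topped at 0 dBFS
safe_take = 0.5 * np.sin(2 * np.pi * 440 * t)                      # peaks kept well below full scale

print(round(peak_dbfs(hot_take), 1))    # 0.0  -> clipped, harsh-sounding distortion
print(round(peak_dbfs(safe_take), 1))   # -6.0 -> the sort of margin suggested above
```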
However, I still see books and articles aimed at home recording which refer to the old analogue practices when recording on digital systems. This is bad advice!
Cheers,
Keith
Although I am not an engineer, I have a lot of respect for the discipline and its practitioners. This learning has raised that respect by another notch.
For the moderators here as well.
I always hated that. I was notorious for not running "hot". My tapes were a little noisy as a result, but they were remarkably free from distortion and high frequency compression. I always had to check out the deck's setup before I made a recording, because each manufacturer used a slightly different strategy for its meter calibration. And I had to set up the deck for the tape that I was using.
A question:
If one were to use a tuning fork vibrating under the influence of a steady impulse, with a frequency of 1 unit and an amplitude of 1 unit, that is all one needs to know to have the fork reproduce an identical vibration/sound from a digital recording of that impulse using Nyquist.
Representing this as a sound wave on paper, where both axes have the same scale for each unit, one just has to plot the data points as dots and then connect the dots with straight lines. No other information is needed to do this; there is only one way to join two dots using straight lines.
Why then are curved lines used? These mislead people into thinking that more information is needed to plot the curves, and that the more the information, the closer one gets to the actual curve.
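As a small numerical check of the straight-line idea (my own illustration, assuming the 1-unit tone is recorded at 8 samples per cycle): halfway between any two sample dots, the actual sine is not where a straight line would put it, which is why reconstructions are drawn as smooth curves rather than dot-to-dot segments:

```python
import numpy as np

# Sketch only: sample the 1-unit tone 8 times per cycle and compare the true
# waveform halfway between samples with what straight dot-to-dot lines would give.
fs = 8
n = np.arange(fs + 1)
dots = np.sin(2 * np.pi * n / fs)                       # the plotted sample dots

between_true = np.sin(2 * np.pi * (n[:-1] + 0.5) / fs)  # the signal halfway between dots
between_line = (dots[:-1] + dots[1:]) / 2               # where a straight line would put it

print(np.round(np.abs(between_true - between_line), 3)) # non-zero: straight lines miss the waveform
```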
Hi-res audio reminds me of supertweeters that were on the market about ten years ago. They were as expensive as most of the speakers they were supposed to be added to, by placing them on top of any full range speaker and appropriately wiring them in.
When it was pointed out that the 30 kHz they supposedly reproduced had very little energy in most music, and was inaudible even when it did, the marketing spiel went as follows:
By taking away the load of trying to produce higher frequencies than the main speakers are capable of, the main speakers are left free to concentrate their efforts on what they are capable of doing, thereby improving the sound of the main speakers in the audible range, even if there is no audible output from the super tweeter.
Ingenious?!
They died a natural death. I think.
Why should Sonos break their existing architecture just to service the needs of those reluctant to transcode once at the point of storage, and who instead insist that Sonos carts huge files around and downconverts them every time they're played?
It makes no sense, and Sonos have thankfully drawn a line under the matter.
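For anyone who does go the transcode-once route, here is a minimal batch-conversion sketch (it assumes ffmpeg is installed and on the PATH; the folder names are placeholders, not anything Sonos-specific):

```python
import subprocess
from pathlib import Path

# Sketch only: walk a hi-res library and write 16-bit/44.1kHz copies for Sonos to index.
src = Path("hires_library")      # placeholder source folder
dst = Path("sonos_library")      # placeholder destination folder

for flac in src.rglob("*.flac"):
    out = dst / flac.relative_to(src)
    out.parent.mkdir(parents=True, exist_ok=True)
    subprocess.run(
        ["ffmpeg", "-i", str(flac),
         "-ar", "44100",          # resample to 44.1kHz
         "-sample_fmt", "s16",    # 16-bit output samples
         str(out)],
        check=True,
    )
```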
Thanks for the honesty.
Right. Now we know the Sonos architecture isn't capable of this. But there are actually ways to transcode a bunch of hi-res 24-bit files on the fly, so that an external server (that is, external to SonosNet) does the "heavy lifting" in real time for Sonos customers and then passes the files on to SonosNet for distribution around our Sonos systems!
This takes away the need for Sonos to do the "heavy" file conversion, as you rightly point out...
That way, Sonos customers wouldn't have to go through the laborious task of file conversion or running two file-type libraries.
MinimServer might be able to do such transcoding on the fly... but then Sonos blocks the use of external (non-Sonos) UPnP servers, right?
The UPnP server MinimServer doesn't appear as a source within the Sonos GUI...
It's not planned. They're not doing it. It doesn't matter why.
Either transcode the files to 16/44 -- and that is where the inaudibility applies, whether you like it or not -- or begin migrating to a system that can play these alleged hi-rez files. One of those options is practical and fairly inexpensive. The other, drastic and expensive.
Just stop whining about it.
Ok. As a matter of interest I am able to "transcode" the files (on the fly even)... in real time from the UPnP server running my library on a tiny little NAS. So Sonos customers who want to play these files could transcode them from 24-bit to 16-bit in real time as they leave the library to go out over their networks. No problem... this would be acceptable to me... but how to get this working??? Because guess what? I can't seem to find a way of pointing Sonos to a separate UPnP server... 🙂
So if Sonos is not actually biased against their customers playing back 24-bit files (even transcoded on the fly into a 16-bit format), why is it that there seems to be no support for independent UPnP servers? Well, I can't find a way to do this. Can anyone help me on this here? Cheers!
Holy mother of God what a ponderous bore!!
Excuse me?
^ I'm trying to improve the capabilities of the Sonos platform here....
Don't you actually want it to be made easier for customers to play all their files within the Sonos environment?
Don't you actually want it to be made easier for customers to play all their files within the Sonos environment?
This thread is becoming a ponderous bore! Sonos isn't going to help you play your freaking files! Convert them once and be done with it! Then leave for another 6 months or a year until you flip out and find something new to bemoan about Sonos!
Don't you actually want it to be made easier for customers to play all their files within the Sonos environment?
First, it's not "my platform". Second, no I don't want you speaking for me when it comes to Sonos. For the love of God no!
Sonos has been very clear on its stance on 24/96 files. Seems like beating a dead horse. If one wants to play 24/96 (or 24/192) files, it is easy enough to do with several products. For example, a Squeezebox Touch or Transporter will play 24/96 natively, and an SB Touch will play 24/192 with a free applet installed on it. For this purpose, the Squeezeboxes work well. Or use a Vortexbox system running squeezelite as a player (I believe it will play up to 24/384 files!).
But re: high rez files don't forget:
http://people.xiph.org/~xiphmont/demo/neil-young.html