I need help to select a music streamer


I am so far looking at three music streamers to purchase.
1.  Bluesound
2.  Roon
3.  Bel Canto eOne

So far, I think the Bel Canto is the best choice. I wonder what the members of this group would recommend in the $1,500 budget range? If you recommend a certain brand, I would like to know why it might be a better choice.

I will be streaming this to an ARCAM AVR 550.

Thank you.
larry5729

Showing 9 responses by ahofer

@larry5729 There's a fair amount of blind testing suggesting that listeners can't distinguish resolution above Redbook (CD), or even above the highest-bitrate MP3.

But even if you don't buy that, remember that an awful lot of allegedly high-res recordings have actually been converted from Redbook, and therefore all the remastering did was potentially add some jitter.
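
You can check a download for this yourself. Here's a rough Python sketch of the idea (the filename is a placeholder, and any threshold is just a rule of thumb): a file upsampled from a Redbook master will show essentially no energy above CD's 22.05 kHz Nyquist limit.

```python
# Rough check: does a "hi-res" file have real content above the
# Redbook Nyquist frequency (22.05 kHz)? Filename is a placeholder.
import numpy as np
from scipy.io import wavfile

rate, data = wavfile.read("allegedly_hires_96k.wav")  # e.g. a 24/96 download
if data.ndim > 1:
    data = data.mean(axis=1)              # mix to mono for this estimate
data = data.astype(np.float64)

spectrum = np.abs(np.fft.rfft(data)) ** 2
freqs = np.fft.rfftfreq(len(data), d=1.0 / rate)

ratio_db = 10 * np.log10((spectrum[freqs > 22_050].sum() + 1e-20) /
                         (spectrum.sum() + 1e-20))
print(f"Energy above 22.05 kHz: {ratio_db:.1f} dB relative to total")
# An upconverted Redbook master shows essentially nothing up there;
# a true hi-res capture usually has at least some harmonic or ambient
# energy above 22 kHz.
```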

There's been some good discussion of MQA on Archimago -  https://archimago.blogspot.com/search?q=MQA

And this -  https://www.soundstagehifi.com/index.php/opinion/1057-mqa-one-year-later-suddenly-more-questions

Here’s a good compilation of blind tests. You could spend a solid half day going through them, and it is pretty sobering.

https://www.head-fi.org/threads/testing-audiophile-claims-and-myths.486598/

And if you want to dig further into which populations have the best ears, at least in terms of detecting distortion and frequency-response variations, Olive and Toole have published a few papers on that - http://www.aes.org/e-lib/browse.cfm?elib=12206

Since that paper is gated, here's a taste of their research, explained:
https://www.audiosciencereview.com/forum/index.php?threads/are-our-preferences-different-in-audio.284/

Here's a large-N, internet-based test of resolution audibility, with very explicit methodology -
http://archimago.blogspot.com/2013/02/high-bitrate-mp3-internet-blind-test.html
(link to results at the bottom of the explanation.  Notice the demographic composition of the test group)

You can test yourself on lossless vs MP3 here - https://www.npr.org/sections/therecord/2015/06/02/411473508/how-well-can-you-hear-audio-quality
(I find I do a lot better with orchestral music on resolution tests. Others say you just need something fairly dense. I could distinguish between levels of MP3 with 100% reliability on any of the material, but had more trouble with 320k vs. lossless.)

What do I think? (I assume you meant me.) I think MQA is more of a scheme to grab licensing revenue than an important improvement in streaming audio quality. I am a fan of true hi-res recordings, although I think hi-res availability is often more an indicator of the engineer's or label's goals than a significant step up from Redbook (i.e., you are less likely to get an entry in the loudness wars). I certainly think studios should have hi-res masters, starting with the widest dynamic range possible. Recording quality is a HUGE variable relative to a 16-bit vs. 24-bit version of the exact same recording, IMO.

I use both Qobuz and Tidal at the highest resolution tier. I browse and favorite recordings that sound good, regardless of resolution.

I often hear differences in uncontrolled listening that, I'm afraid, are unlikely to be replicated under controlled conditions. Of course I don't listen under controlled conditions, so contributions from factors that may not be strictly audible are important and worth understanding.

@geoffkait Not quite: it is also possible for bad test design or mishandling to affect one component and not another, so a positive can also be false.

Positives, whether false or not, are awfully hard to find. That has to mean it is difficult to demonstrate audible differences (for amps, cables, digital sources/resolutions) in a controlled environment. If it were easy, cable and amp manufacturers would be yelling it from the rooftops, no?

On a related matter - there are very few speaker distortion comparisons as well. Yet speakers, I'm told, introduce the majority of the distortion in the chain. That's also interesting.

"Positive results for a single test, on the other hand, are more credible since positive results were obtained in spite of any problems or errors that may have occurred in the test."


Most blind tests simply ask whether you can hear a difference. So you are defining a positive result as one where the listener heard a difference and we are sure there was no error that produced an audible difference where there otherwise would not be one. But if you knew there was operator error, you'd throw out those results; so, if there's an error, it's unknown by definition.

I would describe both a) hearing a difference due to unknown operator error and b) the well-supported bias toward hearing a difference* as false positives, and both are difficult to quantify.

Anyway, legit or false, positive results (in blind tests of cables, amps, hi-res vs. Redbook) remain vanishingly rare, so there's not much to be credulous about.

* See the Stereophile tests in the link, particularly the devastating footnotes, for a hilarious read on this.

Geoff, I explicitly referred to the possibility that a problem exists *of which we are unaware*.

Once again, this semantic discussion has little bearing on the evidence at hand.

"Evidence is not proof. That’s precisely why one test proves nothing. Preponderance of the evidence requires multiple tests, as I have already said at least twice."


I agree with this statement entirely. All we can say is that these tests have failed to reject the null hypothesis: listeners can't tell the difference between cables/amps/resolutions using only their ears. The point here is that there have been lots of tests, and they all fail to reject the null hypothesis. If there were a reasonable volume of tests that could reject the null hypothesis with reasonable confidence, I would in fact be pleased to accept that as (colloquially) 'proof' of strictly audible differences.
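
To put a number on "failing to reject the null," here's a minimal sketch of how a single ABX-style session is typically scored (the trial counts below are made up for illustration):

```python
# Scoring a hypothetical ABX-style session against the null hypothesis
# that the listener is guessing (p = 0.5 per trial).
from scipy.stats import binomtest

trials, correct = 16, 12  # illustrative numbers, not a real test
result = binomtest(correct, trials, p=0.5, alternative="greater")
print(f"{correct}/{trials} correct -> p = {result.pvalue:.3f}")
# 12/16 gives p ≈ 0.038, under the usual 0.05 cutoff, so that session
# would reject the null; 11/16 gives p ≈ 0.105 and would not. The
# published cable/amp/resolution tests keep landing on the latter side.
```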

I'm working with the compilation I've linked.  I'd love to include others if readers can bring them to my attention.

@geoffkait

"Well, for one thing blind tests are used for a number of reasons, including testing a single component or audio device or tweak. So don’t hand me a whole load of horseman knew her."

Heh, interesting spelling variation there. 

Well, right, but every test I've seen has been either a) can you tell a difference with or without the device or tweak (or between the options), or b) rank-order several options, with a) being more prevalent.

Let’s go to the definition of a false positive: "a test result which incorrectly indicates that a particular condition or attribute is present." Or, if you like, a Type I error: the rejection of a true null hypothesis (the components sound the same).
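
To make that concrete, here's a toy simulation: even when the components are truly identical and every listener is guessing, a few sessions will "pass" by luck (the session size and cutoff are illustrative):

```python
# Toy simulation: the null is TRUE (everyone guesses), yet some
# sessions still clear a 12-of-16 bar. Those are Type I errors.
import numpy as np

rng = np.random.default_rng(0)
sessions, trials, cutoff = 100_000, 16, 12   # 12/16 ~ p < 0.05

correct = rng.binomial(trials, 0.5, size=sessions)  # pure guessing
print(f"False positive rate: {(correct >= cutoff).mean():.3f}")  # ~0.038
# About 4% of all-guessing sessions look like a positive result, which
# is exactly "rejecting a true null hypothesis."
```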

Happy to end the discussion, but "you just don’t get it" is the run of the mill insult when you can’t prove or even adequately support your claims.

Here is the Stereophile test I was referring to (it was linked in the head-fi compilation earlier in the thread) - https://www.stereophile.com/features/113/index.html

It's one of the only tests in the head-fi compilation (or Archimago's many tests) that indicates an audible difference (in amps/cables/resolution), but, as you'll see if you read the footnotes and letters, that conclusion was far from justified, and the null hypothesis (no audible differences) remains unchallenged once the responses are properly controlled for.

But it did suggest something interesting - an overall tendency to hear a difference between options, revealed by the results when the two options were identical. I've observed this in myself.
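
A toy model shows why the identical-pair trials matter. Suppose (the 70% figure below is purely an assumption for illustration) listeners report "different" 70% of the time regardless of what they hear:

```python
# Toy model of a bias toward hearing differences: listeners answer
# "different" with probability 0.7 no matter what is playing.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
say_different = rng.random(n) < 0.7          # stimulus-independent answers

identical_pairs = say_different[: n // 2]    # the two options are the same
different_pairs = say_different[n // 2:]     # the two options really differ

print(f"'different' on identical pairs: {identical_pairs.mean():.2f}")
print(f"'different' on different pairs: {different_pairs.mean():.2f}")
# Both come out near 0.70. Score only the truly-different trials and it
# looks like ~70% detection; the identical-pair trials expose it as bias.
```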

A good discussion of the flaw in the test design, and of how it potentially revealed a bias towards hearing a difference, is in the letters on this page, particularly the second letter (it's also obliquely discussed in a footnote):

https://www.stereophile.com/content/blind-listening-letters-part-4

"In the absence of a basic flaw in your experiment, the above statistical analysis suggests to me that Stereophile's amplifier test may have neatly pinpointed a key element in the high-end audio business: a propensity (is compulsion too strong a word?) on the part of aficionados to hear differences."

Footnote 8 reveals that John Atkinson had also realized the problem, but the letter best explains the flaw in the data analysis that invalidates the (badly overstated) conclusion of the main article.

I think Stereophile's listening before the Carver match was sighted (i.e., not ears-only). Only after Carver matched the amps was some blind testing done.

https://en.wikipedia.org/wiki/Bob_Carver#Amplifier_modeling
https://www.bobcarvercorp.com/carver-challenge
https://www.stereophile.com/content/carver-challenge-page-2

If I'm right that the pre-matched listening wasn't controlled, this test would only fail to reject the null hypothesis that the *matched* amplifiers were audibly indistinguishable.   It says nothing about strictly audible differences between the *unmatched* amps. An objectivist would probably suggest that Carver had the edge even before he modified his amp.

The null testing Carver employed to match amps has been used to compare expensive and generic cables right out of the box, with generally inaudible results.
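
For anyone curious what that null testing looks like in code, here's a minimal sketch (the filenames are placeholders, and it assumes the two captures are already level-matched and time-aligned, which is the hard part):

```python
# Minimal null test: subtract two aligned captures of the same signal
# through two different paths and measure the residual depth.
import numpy as np
from scipy.io import wavfile

rate_a, a = wavfile.read("capture_path_a.wav")  # placeholder filenames
rate_b, b = wavfile.read("capture_path_b.wav")
assert rate_a == rate_b, "captures must share a sample rate"

n = min(len(a), len(b))
a = a[:n].astype(np.float64)
b = b[:n].astype(np.float64)

residual = a - b
null_depth_db = 10 * np.log10(((residual ** 2).mean() + 1e-20) /
                              ((a ** 2).mean() + 1e-20))
print(f"Null depth: {null_depth_db:.1f} dB")
# The deeper (more negative) the null, the smaller any possible audible
# difference between the two paths can be.
```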

But I remain interested in any *ears only* (blind) testing that rejects the null hypothesis.