Theoretical Pre Amp Question


Real world answer would be to listen to it both ways and pick, because execution matters, but theoretically...

If a source has a choice of high (2V) or low (1V) output, then at typical listening levels the pre amp will be attenuating the signal to much less than 1V. Which source output level SHOULD be better? Is there likely to be more distortion or noise from a pre at lower or higher input level, even though either would use less than unity gain? If specifically using a tube pre amp, SHOULD the source level have an impact on how much “tubiness” comes through even though there is negative gain? What about potential interconnect effects? Wouldn’t a higher level signal be more resistant to noise as a %?
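On the last question, a quick back-of-envelope sketch helps. Assuming a fixed amount of noise picked up on the interconnect (the 5 mV figure below is purely an illustrative assumption, not a measured value), a hotter source signal makes that noise a smaller percentage of the signal:

```python
# Rough sketch: fixed cable-induced noise vs. source output level.
# The 5 mV noise figure is an arbitrary assumption for illustration;
# real interconnect pickup depends on cabling, layout, and environment.
import math

noise_v = 0.005  # assumed fixed noise on the interconnect, volts

for source_v in (1.0, 2.0):
    snr_db = 20 * math.log10(source_v / noise_v)
    noise_pct = 100 * noise_v / source_v
    print(f"{source_v:.0f} V source: noise = {noise_pct:.2f}% of signal, "
          f"SNR = {snr_db:.1f} dB")
```

Doubling the source level halves the noise as a percentage and buys 6 dB of SNR over the cable, all else being equal.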

In an ideal theoretical case there is no distortion or noise. In a real-world, empirical test, the implementation dictates the results. I'm just curious about the in-between case: typical expected results based on standard practice and other people's experience.


cat_doorman
I didn’t think through the circuit. The answer is pretty obvious after that. Of course there are 3 basic categories:
case 1: buffer, gain, attenuation - this might have issues with variable output impedance similar to a passive, depending on implementation
case 2: buffer, variable gain, buffer - I now remember something about the PS Audio Gain Cell varying gain instead of attenuating signal.
case 3: attenuation, gain, buffer - this keeps the gain and output impedance constant
For a tube pre I think case 1 would impart more constant tube character because it is running at constant power and only attenuating after. Case 3 would be more dependent on implementation of the gain stage. With sufficient bias a linear response wouldn’t color the signal more at higher volume than lower. 
Seems like running hot is the way to go. Unless there ends up being another reason not to.
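The case 1 vs. case 3 trade-off can be put in numbers. If the gain stage contributes a fixed input-referred noise, then attenuating before the gain stage (case 3) shrinks the signal relative to that noise, while attenuating after (case 1) scales signal and stage noise down together. All figures below (2 V source, 10 µV input-referred noise, 10x gain, -20 dB volume setting) are illustrative assumptions:

```python
# Sketch: where the attenuator sits relative to the gain stage changes SNR.
# All numbers are illustrative assumptions, not measured values.
import math

source_v = 2.0         # source output, volts
stage_noise_v = 10e-6  # assumed input-referred noise of the gain stage, volts
gain = 10.0            # gain stage voltage gain
atten = 0.1            # -20 dB volume setting

# Case 3: attenuate, then gain. The signal is small at the stage input,
# so the stage's input-referred noise looms larger.
sig3 = source_v * atten * gain
noise3 = stage_noise_v * gain
snr3 = 20 * math.log10(sig3 / noise3)

# Case 1: gain at full level, then attenuate. Signal and stage noise are
# attenuated together, so the stage's SNR is preserved.
sig1 = source_v * gain * atten
noise1 = stage_noise_v * gain * atten
snr1 = 20 * math.log10(sig1 / noise1)

print(f"case 3 (atten -> gain): SNR = {snr3:.1f} dB")
print(f"case 1 (gain -> atten): SNR = {snr1:.1f} dB")
```

Under these assumptions, case 3 gives up SNR equal to the attenuation (20 dB here), which is one argument for running the source hot, or for putting the volume control after the gain.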

Thanks for pointing me in the right direction guys. 
PS Audio Gain Cell varying gain instead of attenuating signal.
Trouble with that one is you're usually varying the feedback to give you different gains, and that in itself is "changing the sound" of that gain section.
More feedback less gain (lower distortion).
Less feedback more gain (more euphonic/distorted).
Kinda the opposite of what you want/need.
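George's point follows from the standard feedback relation: closed-loop gain is G = A / (1 + Aβ), and open-loop distortion is reduced by roughly the same loop-gain factor (1 + Aβ). A sketch with assumed open-loop figures (A = 1000, 3% distortion, both arbitrary for illustration):

```python
# Sketch of the feedback trade-off in a variable-gain stage:
# more feedback -> less gain AND lower distortion, and vice versa.
# Open-loop gain A = 1000 and 3% open-loop distortion are assumptions.
A = 1000.0        # open-loop gain of the stage
open_dist = 3.0   # assumed open-loop distortion, percent

for beta in (0.001, 0.01, 0.1):  # increasing feedback factor
    loop = 1 + A * beta
    gain = A / loop
    dist = open_dist / loop
    print(f"beta={beta}: closed-loop gain = {gain:.1f}x, "
          f"distortion ~ {dist:.3f}%")
```

So a gain cell that sets volume by varying feedback is trading distortion character against level: the low-gain (quiet) settings are the cleanest, and the high-gain settings the most colored.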

There's no free lunch, is there?

Cheers George   
Benchmark recommends the highest gain setting at the pre and the lowest at the power amp.  They provide the option of a 22 dBu (9.8 VAC) input sensitivity on the AHB2 power amp.  I understand that they want to move gain from the noisy environment (power amp) to the quieter environment (pre), not to mention less interconnect sensitivity to ambient electrical noise (better S/N).  Long ago, mostly in Europe, there was a -10 dBV (0.316 VAC) standard for line level.  They believed it would save money, since only one item (the amp) needed more gain stages, while the multiple sources needed less.  I assume it didn't work out (too noisy?).  The most common line level in the US is likely +4 dBu (1.23 VAC), but I assume preamp output has to be higher, since the AHB2's lowest input sensitivity setting is 8.2 dBu (2 VAC).  Is there any standard for power amp input? Most of the time 2 VAC is mentioned.
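For reference, the level figures in that post check out against the standard conversions: dBu is referenced to 0.7746 V (1 mW into 600 ohms) and dBV to 1 V.

```python
# Standard audio level conversions: dBu referenced to 0.7746 V, dBV to 1 V.
def dbu_to_v(dbu):
    return 0.7746 * 10 ** (dbu / 20)

def dbv_to_v(dbv):
    return 1.0 * 10 ** (dbv / 20)

print(f"+22 dBu  = {dbu_to_v(22):.2f} V")    # AHB2 highest sensitivity option
print(f"+8.2 dBu = {dbu_to_v(8.2):.2f} V")   # AHB2 lowest sensitivity setting
print(f"+4 dBu   = {dbu_to_v(4):.2f} V")     # pro line level
print(f"-10 dBV  = {dbv_to_v(-10):.3f} V")   # consumer line level
```

These land on 9.75 V, 1.99 V, 1.23 V, and 0.316 V respectively, matching the rounded figures quoted above.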
For a tube pre I think case 1 would impart more constant tube character because it is running at constant power and only attenuating after.
@cat_doorman   The harder you run the tube circuit, the more distortion it will make. So it's not a matter of how much signal the preamp is getting, since that goes through the volume control first. It's more a function of how loud the system is playing. That said, tube preamps tend to have very low distortion figures relative to amplifiers.