Which is more accurate: digital or vinyl?


More accurate, mind you, not better sounding. We've all agreed on that one already, right?

How about more precise?

Any metrics or quantitative facts to support your case are appreciated.
mapman
When you bake a tape, it takes a few years for it to regain the moisture chased out during baking. IOW, you have plenty of time to work with the tape- certainly more than 48 hours.

The reason you have to bake them has nothing to do with whale oil :) Modern tapes are made with polyesters, which can absorb moisture at the ends of broken molecular strands. The water molecule allows the magnetic substrate to come unglued. Baking chases out the moisture so the substrate can function normally.

Older tapes from the 1950s were made with acetate. Acetate does not have the moisture issue, so although acetate tapes offer lower performance and break easily, they store much better.
Question about digital sampling. The missing information from the digital samples must be added by the playback component, correct? When recording, say, a violin, is the first sample taken at the start of a note, and is one also taken at the very end of the note, regardless of the samples in between? If not, how can digital playback components reproduce the proper decay and bloom of the music, regardless of the sample rate?
I know nothing about digital recording, but I feel it is missing the soul and heart of the music, IMO.
"The missing information from the digital samples must be added by the play back component, correct?"

This is incorrect, at least in theory. Read up on the Nyquist sampling theorem for more information.

Assuming the theory is sound, the sampling is sufficient to capture all the information that matters, including the highest frequencies that humans, even those with the best hearing, are capable of hearing.
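
For anyone who wants to see the theorem do its work, here is a minimal Python/numpy sketch (my own illustration with arbitrary frequencies and durations, not anyone's production code): it samples a 5 kHz tone at the Redbook rate, then rebuilds the waveform *between* the samples using the Whittaker-Shannon interpolation formula the theorem prescribes.

```python
import numpy as np

fs = 44100.0            # Redbook sample rate, Hz
f = 5000.0              # test tone, comfortably below Nyquist (fs / 2)
n = np.arange(2048)     # sample indices
t_s = n / fs            # the instants at which samples are taken
samples = np.sin(2 * np.pi * f * t_s)

# Whittaker-Shannon interpolation: the value between any two samples is
# a weighted sum of sinc functions centered on every sample.
t = np.linspace(0.010, 0.015, 500)  # evaluate mid-signal, away from edges
reconstructed = samples @ np.sinc(fs * (t[None, :] - t_s[:, None]))

ideal = np.sin(2 * np.pi * f * t)
print(np.max(np.abs(reconstructed - ideal)))  # on the order of 1e-3 or less
```

The residual error here comes from truncating the infinite sum at a finite number of samples; the point is that the in-between values are implied by the band limit, not filled in by guesswork.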

Of course, not everyone agrees that the theory is sound, or that the CD Redbook implementation specifically is sufficient to capture everything that matters.

Then as you get into higher-resolution digital audio formats, the possible issues become even less likely to be real, so hi-res is, at minimum, a sort of insurance policy.

The CD Redbook format, I think, was well done in the sense of applying the best theory of the time toward delivering very high quality sound. Practically, though, a line had to be drawn in the sand, now about 30 years ago, regarding what was sufficient going forward yet practical from a data-volume and processing perspective at commercial scale.
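
To put rough numbers on that data-volume line in the sand (simple arithmetic, shown as a Python snippet for convenience; 192 kHz/24-bit is just one common hi-res example):

```python
# Raw (uncompressed) PCM data rate: sample rate x bit depth x channels.
def pcm_bitrate(sample_rate_hz, bits_per_sample, channels=2):
    return sample_rate_hz * bits_per_sample * channels

redbook = pcm_bitrate(44100, 16)   # 1,411,200 bits/s, about 1.4 Mbps
hires = pcm_bitrate(192000, 24)    # 9,216,000 bits/s, about 9.2 Mbps
print(redbook, hires, round(hires / redbook, 1))  # hi-res is ~6.5x the data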

The fact that newer hi-res formats have not caught on faster 30 years later, when the technology is far more advanced, is actually a testament to the robustness of the original CD design.

Newer CD recording and playback systems, I find, do an increasingly better job of producing better recordings (when the producers choose to), and a lot of progress has been made since CD was introduced in providing better playback performance with the now 30-year-old format.

So it is a grey area at best whether even the 30-year-old Redbook CD format is really missing anything of consequence to most listeners, as predicted by the theory it was based on.

Of course, there may be "golden ears" out there who can hear something missing, but I take that with a grain of salt as well, in that I do not know of any authority that certifies individuals as having golden ears.

Unfortunately, I am not up to date these days on the theory behind digital audio, so I am not sure whether there are any newer theories or refinements to the Nyquist principles applied 30 years ago that would indicate clearly that the CD Redbook format is technically lacking.

Maybe others know of something?

Yes. Nyquist assumes an analog sample of unlimited resolution, not a 16-bit sample. Its application to digital audio is thus not exact. Ah, people don't like to talk about this! Or they do, but it just turns into a ridiculous argument. I suggest anyone look into the life of Nyquist:
http://en.wikipedia.org/wiki/Harry_Nyquist

(you will note that Nyquist had no concept of digital audio back when he proposed his sampling theorem)

and

http://en.wikipedia.org/wiki/Nyquist%E2%80%93Shannon_sampling_theorem#The_sampling_process

If you read carefully, you will note that the samples are not defined as "16-bit"; instead, they are samples of the "bandwidth-limited" signal, and they have an analog value.

Now 16 bits can define a fairly precise value, but that is by no means the same as saying it can define the exact value. Further, the significance of "bandwidth-limited" should not be ignored. Current Redbook specs put the sampling frequency at 44.1 kHz; if you think about it, the significance is that anything above about 19-20 kHz is ignored. It is not so much that Nyquist is out to lunch as that the Redbook specs apply him poorly.
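
To put a number on "fairly precise but not exact," here is a small numpy sketch of 16-bit quantization (my own illustration, not a model of any particular converter):

```python
import numpy as np

# Stand-in "analog" values in [-1, 1); any waveform would do.
rng = np.random.default_rng(0)
signal = rng.uniform(-1.0, 1.0, 100_000)

step = 2.0 / 2**16                          # one 16-bit step across [-1, 1)
quantized = np.round(signal / step) * step  # round each sample to the grid

error = quantized - signal
print(np.max(np.abs(error)))  # about step/2 = 1.5e-5, the worst-case miss
snr_db = 10 * np.log10(np.mean(signal**2) / np.mean(error**2))
print(snr_db)  # ~96 dB for this input; ~98 dB for a full-scale sine
```

So the 16-bit grid misses each sample by at most half a step, about 1.5e-5 on a ±1 scale: a real error, as noted above, just a very small one.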

The Redbook specs were created in the late 1970s and early 1980s. Seems to me I heard one of the first CD players about 1981. Back then, the IBM PC was king; today a $10 cell phone has *considerably* more computing power! IOW, Redbook was **intentionally** limited in order to cope with the limitations of the hardware of the day. It is quite anachronistic that we still take it seriously today...
If I sit and play an instrument, recording onto analog tape, I will record all that I play. Is this also true for digital recording, or is the device recording parts of the sound I am playing (sampling), with the computer putting it together, sort of like digitally morphing one image into another? If it is the latter, then why call it a sample? You are just asking for trouble and confusion.