In conversation today I was asked to explain 8-bit vs 16-bit.
Let me say at the outset that I'm on a no-sugar diet, so I'm craving cake! I can't stop thinking about cake. Mmmm ... cake ... Folk who know me know I can explain most things with cake analogies, and bit depth is no different. Here goes ...
The purpose of editing in 16-bit is to eliminate the risk of visible degradation during pixel editing. You'll be most familiar with this degradation as "banding" - where large areas of similar tone "break up" and leave visible blockiness after editing. Blue skies are notorious for it, and so are white backdrops. I discussed banding and bit depth a little in this post, and this one.
Let's say you have a delicious round chocolate cake, sliced into eight pieces. You're meant to save it for later, when dinner guests arrive. But it's so tempting, you eat a piece, then try to re-arrange the remaining seven pieces evenly so as to disguise your theft. It's not going to work, right? People are going to notice the gaps between the pieces.
What if the cake were sliced into hundreds of very thin pieces? You could easily eat one or two of those pieces, and stealthily rearrange the remainder, and nobody would be any the wiser!
Do you see where I'm going with this? 8-bit images contain 256 levels of information per channel (2 to the power of 8), which isn't much. If you get aggressive with your edits on an 8-bit image, you might begin to see problems (e.g. in skies or backdrops, as mentioned above).
16-bit images have 65,536 teensy tiny levels of information per channel (2 to the power of 16), which makes them practically impervious to banding.
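If you like seeing the numbers, here's a tiny sketch of the cake-slicing problem in Python with NumPy. The gradient values and the contrast stretch are made up for illustration - the point is just to count how many distinct levels survive the same aggressive edit at each bit depth:

```python
import numpy as np

# A smooth "blue sky" gradient: 4000 samples rising gently from 100 to 140
# (on a 0-255 scale).
sky = np.linspace(100.0, 140.0, 4000)

# 8-bit version: round to the nearest of 256 levels, then apply an
# aggressive contrast stretch, then round again.
sky8 = np.round(sky).astype(np.uint8)
stretched8 = np.round(np.clip((sky8.astype(float) - 100) * 6, 0, 255)).astype(np.uint8)

# 16-bit version: same gradient, same stretch, but with 65,536 levels
# (255 maps to 65535, so each 8-bit step is 257 16-bit steps).
sky16 = np.round(sky * 257).astype(np.uint16)
stretched16 = np.round(np.clip((sky16.astype(float) - 25700) * 6, 0, 65535)).astype(np.uint16)

print(len(np.unique(stretched8)))   # 41 fat slices left - visible banding
print(len(np.unique(stretched16)))  # 4000 thin slices - still smooth
```

Forty-one chunky levels spread across a wide tonal range is exactly the "gaps between the cake pieces" problem; four thousand thin slices hide the theft completely.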
Raw files are high-bit. Most cameras don't actually capture 16 bits per channel of information - your camera is likely to be a 10-, 12- or 14-bit camera. Once that data gets to Photoshop, it is kept in a 16-bit container, and treated as 16-bit data, even if it's not quite that much. It makes no real-world difference - anything higher than 8 bits is very powerful.
So, should you always process your raw files in 16-bit? No, not necessarily.
If you work in ProPhoto RGB, then yes, 16-bit is a must. ProPhoto is such a big "cake", as it were, that any missing pieces are going to be terribly obvious in 8-bit.
But if you work in sRGB or Adobe RGB, 8-bit is fine - provided you do your major tonal adjustments in your raw processor, and just do the regular stuff in Photoshop. No 8-bit image ever turned nasty because somebody ran a vintage action on it, or greened up some grass, or whatever. That kind of thing is perfectly safe. You just have to make sure you've corrected any significant exposure problems in Raw first.
Of course, if you feel safer doing so, and you have the computer space and power to handle it, then by all means work entirely in 16-bit. It's certainly the most bulletproof workflow. Just remember that you can't save 16-bit files as Jpegs - they have to be converted to 8-bit for that. But as long as you stick with PSDs for your master files, and only save Jpegs for output (print or web), you'll do great.
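If you ever script that output step, the 16-bit to 8-bit conversion is simple arithmetic. Here's a minimal sketch using NumPy and Pillow - my choice of tools for illustration, not something Photoshop itself uses - with a gradient standing in for your master file:

```python
import io

import numpy as np
from PIL import Image

# Stand-in for a 16-bit master: a 256x256 gradient, values 0 to 65535.
master16 = np.linspace(0, 65535, 256 * 256).reshape(256, 256).astype(np.uint16)

# Jpeg is 8-bit only, so divide by 257 first: that maps the endpoints
# exactly (65535 // 257 == 255), squeezing 65,536 levels into 256.
master8 = (master16 // 257).astype(np.uint8)

# Save the 8-bit copy as a Jpeg (into a buffer here; a filename works too).
buf = io.BytesIO()
Image.fromarray(master8, mode="L").save(buf, format="JPEG", quality=90)
print(buf.getvalue()[:2])  # b'\xff\xd8' - the Jpeg magic bytes
```

The master array is untouched - which is the whole point of the workflow above: the 16-bit original stays pristine, and the 8-bit Jpeg is a disposable copy for output.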
One more thing ... high-bit data won't save you if you've got noise. Noise is noise, at any bit depth. And if you've got noise coupled with underexposure, it's going to get worse as you brighten the exposure, even in high-bit raw. 16-bit is not a magic cloak to guard you from noise, I'm afraid. But the same advice applies ... do as much noise reduction as you can in your raw processor, and you'll see the best possible results.
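A quick sketch of why, with made-up numbers: brightening multiplies the noise right along with the signal, no matter how many levels your container has.

```python
import numpy as np

rng = np.random.default_rng(0)

# An underexposed patch: true level 500 (on a 14-bit scale of 0-16383),
# plus simulated sensor noise with a standard deviation of about 20.
patch = 500 + rng.normal(0, 20, 100_000)

# "Rescuing" the exposure by two stops multiplies everything by 4 -
# signal and noise alike. Bit depth never enters into it.
brightened = patch * 4

print(round(patch.std()))       # ~20
print(round(brightened.std()))  # ~80 - four times as noisy
```

More levels just means the ugliness gets recorded more precisely - which is why the noise reduction belongs in your raw processor, before you push the exposure.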