Can you not learn that by actually taking photographs and then looking at them? Do they look the same on-screen, uprezzed, at 8x10, at 20x30?
That's the slow way of learning, and in many cases leads to no learning at all. Many things that seem obvious once they are known are unknown until someone discovers them. In my own experience, my "obsession" with measurement has led to a lot of useful information that people can use in their shooting. I was the one who first noticed, AFAIK, that the highest ISO or two on Canon DSLRs were fake ISOs (others may have noticed it independently, but I was the first to bring the idea to usenet and DPReview; by the time I got to this forum and the old Rob Galbraith, it was common knowledge). Once I noticed this, I could demonstrate it, and people who learned of it could choose to shoot at the highest real ISO and keep more highlight headroom in their RAW captures.

All ISO 3200 does on current Canons is throw away a stop of highlights from an under-exposed ISO 1600. This is a decision made by your heroes with MAs in engineering. The logical thing to do, if you're not going to use ISO 3200-level analog gain, is to leave the full DR of the camera's digitized capture and simply tag the RAW file to indicate that the white level is 1024 instead of the traditional 2048. That way, a converter can use those extra highlights - and you certainly want them when shooting in typical ISO 3200 lighting: a mix of light sources and shadows (theatres, concerts, streets at night).
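To make the mechanism concrete, here's a toy numeric sketch (my own illustration, using the 2048 white level mentioned above - not actual Canon firmware) of what a digital-only "ISO 3200" does to ISO 1600 data:

```python
import numpy as np

WHITE = 2048  # nominal white level from the discussion above (illustrative)

# Toy ISO 1600 readout: every possible raw value, once each.
iso1600 = np.arange(WHITE + 1)

# A "fake" ISO 3200 just doubles the ISO 1600 numbers and clips at
# the same white point...
fake3200 = np.minimum(iso1600 * 2, WHITE)

# ...so everything the sensor recorded above WHITE/2 is crushed to white:
clipped = np.mean(fake3200 >= WHITE)
print(f"{clipped:.0%} of recorded levels clipped")  # ~50%, i.e. one stop
# Tagging the file "white level = 1024" instead would have kept them all.
```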
Now, suppose that I (and anyone else who may have discovered these things independently) hadn't measured and tested them, and the ball is in your court: you shoot one scene at ISO 1600 and print it, then shoot a completely different scene at ISO 3200. How can you tell which is better to use, from these two separate events of "just taking pictures"? You'd probably never know, because you don't have a solid point of comparison. You *must* look at the RAW histograms, and/or measure noise in the same subjects (and/or blackness) in controlled settings, to get a hint that this issue even exists.
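For instance, the kind of controlled measurement I mean can be as simple as taking the standard deviation of lens-cap (dark) frames at each ISO. A minimal sketch with simulated frames (the noise figures are made up for illustration, not measured from any real camera):

```python
import numpy as np

rng = np.random.default_rng(1)
BIAS = 128  # assumed black-level offset in DN (illustrative)

def read_noise_dn(frame):
    """Read noise in raw DN: std dev of a bias-subtracted dark frame."""
    return float(np.std(frame - BIAS))

# Simulated lens-cap frames: 4 DN of read noise at ISO 1600, and a
# purely digital "ISO 3200" that is the same data pushed one stop.
dark1600 = rng.normal(BIAS, 4.0, size=50_000)
dark3200 = (dark1600 - BIAS) * 2 + BIAS

print(round(read_noise_dn(dark1600), 1))  # ~4 DN
print(round(read_noise_dn(dark3200), 1))  # ~8 DN: the push doubled the
# numbers but added no real sensitivity - visible only at the RAW level.
```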
Another one that I brought to light is the "extra ISOs" on the 30D; they are just pushes and pulls of the 20D ISOs that lose DR (the 125/250/etc set) or shift it around relative to metered middle grey (the 160/320/etc set), both posterizing the RAW data a little. People now know that ISO 125 is false economy (same read noise as ISO 640!) because I took the time to measure the noise and examine the RAW data. Canon won't tell you this stuff, and I really doubt that many at Canon engineering are even fluent in it. I can't have the same kind of blind faith in the engineers and experts at the company that you have; close inspection of their decisions shows that they are either not totally competent or don't care about maximum image quality (where the cost is just firmware coding!), riding on the momentum of their name-brand recognition.
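The posterization is easy to demonstrate in a sketch (generic 12-bit arithmetic; the 1/3-stop factors are illustrative - I'm not claiming these exact multipliers are what the 30D firmware applies):

```python
import numpy as np

raw = np.arange(4096)  # every 12-bit code, once each

# A digital pull (x0.8, roughly 1/3 stop down) merges adjacent codes:
pulled = np.round(raw * 0.8).astype(int)
print(len(np.unique(pulled)))  # 3277 distinct levels left, from 4096

# A digital push (x1.25, roughly 1/3 stop up) leaves gaps - codes no
# pixel can ever take, showing up as a comb-toothed RAW histogram:
pushed = np.round(raw * 1.25).astype(int)
gaps = np.setdiff1d(np.arange(pushed.max() + 1), pushed)
print(len(gaps))  # 1024 missing codes
```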
Another case in point: Canon's white papers for the big-pixel (1D* and 5D) cameras state that they take high-ISO sensitivity to another level by capturing more photons with bigger pixels, and that is an outright LIE; they capture about the same number of photons per pixel as the 20D at the same ISO, and *less* per square mm of sensor area, by a wide margin. The real reason the 1DmkII, with its bigger pixels, has lower practical noise than the 20D/30D is that the larger pixel pitch is less demanding on the lenses (they have more MTF contrast at that pixel pitch), so the RAW data doesn't need as much sharpening, so you get less sharpening of noise. The same principle applies to the AA filter - the AA filter is extremely weak in the 1D cameras, resulting in higher pixel-to-pixel contrast, requiring less sharpening (but causing more aliasing).
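The pixel-pitch arithmetic behind that sharpening argument is simple (the pitches below are approximate published figures, rounded):

```python
# Approximate pixel pitches, in microns:
pitches = {"20D/30D": 6.4, "1DmkII": 8.2}

# Nyquist frequency in line pairs per mm: 1000 / (2 * pitch_um).
for name, pitch in pitches.items():
    nyquist = 1000 / (2 * pitch)
    print(f"{name}: {nyquist:.0f} lp/mm at Nyquist")
# The 1DmkII asks the lens for contrast at ~61 lp/mm, the 20D at
# ~78 lp/mm; lens MTF is higher at the lower frequency, so the 1D's
# RAW data needs less sharpening - and sharpening amplifies noise.
```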
To believe and trust the powers that be in camera engineering, as you suggest in another thread, is misplaced blind faith, IMO.