Bill Miller
New member
Asher, here you go. This was shot at f/22, 1/80th, ISO 400, and here is a 100% crop.
If for convenience, we treat the Kell factor, K, as being about 0.75, and assume a wavelength of 550 nm, then that situation of equivalence occurs when the f/number of the lens is approximately numerically equal to twice the pixel pitch in micrometers (microns).
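One way to sanity-check that rule of thumb numerically (a sketch in Python, assuming "equivalence" means the Kell-adjusted Airy first-zero diameter matching a two-pixel span, at the stated K = 0.75 and 550 nm; the 6.4 micron pitch is just an illustrative value):

```python
K = 0.75            # Kell factor, as assumed above
WAVELENGTH = 0.55   # micrometres (550 nm)

def kell_spot(f_number):
    """Kell-adjusted diameter of the Airy disk's first zero, in micrometres."""
    return K * 2.44 * WAVELENGTH * f_number

pitch = 6.4                    # hypothetical 6.4 micron sensels
n = 2 * pitch                  # the rule: f/number ~ twice the pitch in microns
print(round(kell_spot(n), 2))  # ~12.88 um, i.e. roughly two 6.4 um pixels
```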
Thank you for easing my mind and clearing things up, Doug. For the past couple of months now I was afraid I had made a terrible mistake buying my D3. At the time, silly me, I never once considered the Rayleigh criterion or Kell's rule. Can you believe that? I had a slight, hazy remembrance of these phrases but thought perhaps they were only things I had heard on Star Trek back in the '60s. Now that you have explained this so succinctly and jogged my memory a bit, I will rest easy tonight knowing that, as far as this topic is concerned anyway, I could easily have made a much worse purchasing decision than getting that D3.
James Newman
PS - I really do enjoy "trying" to read and understand some of your work. I must admit though that I do it purely for the sport. I like seeing if I can still use those portions of my brain required to decipher and understand what it is you are telling me.
Doug, Jack and Bart and anyone else that smart!
Can you give an example like this: at a viewing distance of 25cm, 10", looking at 8x10 and 20x24 prints of a wedding gown and detailed veil, where would pixel size and f stop degrade the detail on the veil and the fine shading in silk and brocade of a brides dress and her dark or blonde fine hair?
Asher
Great practical approach. Asher, you wanted fabric and lace - f/18, 1/200, ISO 200, 1Ds.
EXIF got wiped out in conversion.
100% crop, Elinchrom strobes, converted from RAW and cropped, no sharpening in PS. There is a little loss in conversion to JPEG. This is still a 500 KB file at 100 dpi.
If for convenience, we treat the Kell factor, K, as being about 0.75, and assume a wavelength of 550 nm, then that situation of equivalence occurs when the f/number of the lens is approximately numerically equal to twice the pixel pitch in micrometers (microns).
A tangential question: Assuming this is a reasonable approximation of where diffraction begins to rear its head at the sensor -- and I agree it is, though in many cases one can add another full stop with little deleterious effect on final image quality -- do we still then need to factor in the sum variables of sensor size, output magnification and viewing distance to determine maximum printable sizes? Or can we derive an equally simple formula for calculating a standard pixel to print-DPI density relative to viewing distance?
Doug, Jack and Bart and anyone else that smart!
Can you give an example like this: at a viewing distance of 25cm, 10", looking at 8x10 and 20x24 prints of a wedding gown and detailed veil, where would pixel size and f stop degrade the detail on the veil and the fine shading in silk and brocade of a brides dress and her dark or blonde fine hair?
I think Jack's absolutely right about pixel pitch replacing CoC; in my experience, good prints show softening much earlier now than when shooting 35mm film.
The calculation for the diffraction spot diameter (first zero) is: 2.44 x wavelength x f-number, which for 555 nm (= 0.555 micron) equates to 1.354 x f-number. When that exceeds 1.5x the sensel pitch, it would visibly deteriorate the intrinsic image quality. Your 2x factor would represent a (somewhat) recoverable degradation, depending on one's criteria for quality and the recovery algorithm used.
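To make that arithmetic concrete, here is a small sketch in Python (the 1.5x-pitch threshold is the criterion from the post above; the f/8 and 6.4 micron examples are just illustrative):

```python
WAVELENGTH_UM = 0.555  # green light, in micrometres, per the post

def airy_first_zero_um(f_number, wavelength_um=WAVELENGTH_UM):
    """Diffraction spot diameter (first zero): 2.44 x wavelength x f-number."""
    return 2.44 * wavelength_um * f_number

def max_f_number(sensel_pitch_um, factor=1.5, wavelength_um=WAVELENGTH_UM):
    """Largest f-number before the spot exceeds factor x sensel pitch."""
    return factor * sensel_pitch_um / (2.44 * wavelength_um)

print(round(airy_first_zero_um(8), 2))  # 10.83 um spot at f/8
print(round(max_f_number(6.4), 1))      # ~f/7.1 for 6.4 um sensels
```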
Hi Jack,
This is an excellent question, not a tangential one! Obviously one could debate the actual sensel-pitch factor to use. I say 1.5x the sensel pitch before deterioration becomes visible; Doug suggests 2x the sensel pitch before it becomes significant enough to still be recoverable.
Of course, viewing at a certain viewing distance adds another variable (and we know that, at least, photographers will inspect images at closer distances than 'normal' human beings ;-) ).
There are other considerations as well. As Norman Koren explains on his excellent web pages, we humans tend to respond more strongly to some spatial frequencies than others. Be that as it may, there is a technical lower limit: the limit of resolving power at the viewing distance. Even if the limiting resolving power of human visual acuity is somewhat debatable (especially if one considers Vernier acuity), one could adopt generally accepted limits. In general, a limit of 5-8 line pairs/mm in output is an accepted criterion (it'll be hard to produce an image that 'looks' sharper than that). So when contemplating the optimal resolution of a DSLR image (at 1.5x the sensel pitch), there is a magnification factor at which the image becomes a compromise at close inspection.
If we accept the very critical criterion of 1.5x the sensel pitch for uncompromised diffraction resolution, then the magnification factor of the sensor array dimensions becomes the limiting factor for uncompromised viewing. For example, a sensel pitch such as the Canon 1Ds Mark III's 6.4 microns, times 1.5 (0.0064 mm x 1.5 = 0.0096 mm), suggests an uncompromised output size of 13 to 21 times the sensor array size before the resolution drops below visual acuity (at a normal viewing distance of 8-10 inches or 20-30 centimetres, at normal illumination levels). Anything larger will only fail in a direct comparison between similar capture samples (still assuming an uncompromised capture, with an aperture no narrower than one producing a diffraction diameter of 1.5x the sensel pitch).
So, if one wants to simplify even further: 13-21 times magnification of the sensor array size will result in uncompromised resolution (at normal reading distances), assuming an optimal aperture (one producing a diffraction spot diameter of at most 1.5x the sensel pitch). Viewing at larger distances will proportionally relax the magnification criterion.
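The magnification figures above can be reproduced with a short sketch (Python; it counts the 5-8 'lines'/mm exactly as in the post, so counting line pairs instead, as noted later in the thread, would halve the result):

```python
def max_magnification(sensel_pitch_um, factor=1.5, output_res_per_mm=(5, 8)):
    """Magnification of the sensor array before the print falls below
    the 5-8 'lines'/mm visual-acuity criterion, per the post above."""
    cell_mm = sensel_pitch_um * factor / 1000.0  # 0.0096 mm for 6.4 um sensels
    return tuple(1.0 / (cell_mm * r) for r in output_res_per_mm)

loose, strict = max_magnification(6.4)
print(round(loose, 1), round(strict, 1))  # 20.8x and 13.0x the array size
```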
Bart
IOW assuming optimal capture aperture and near perfect lens performance, wouldn't a 6 micron full frame (35mm) sensor deliver a final image 50% larger with comparable IQ to a similar 9 micron sensor?
For example, a sensel pitch such as the Canon 1Ds Mark III's 6.4 microns, times 1.5 (0.0064 mm x 1.5 = 0.0096 mm), suggests an uncompromised output size of 13 to 21 times the sensor array size before the resolution drops below visual acuity (at a normal viewing distance of 8-10 inches or 20-30 centimetres, at normal illumination levels). Anything larger will only fail in a direct comparison between similar capture samples (still assuming an uncompromised capture, with an aperture no narrower than one producing a diffraction diameter of 1.5x the sensel pitch).
It's those pesky linepairs again (and it was late when I wrote it).
Also, smaller sensels result in a lower dynamic range, which is another factor in image quality. Smaller sensels also place higher demands on the lenses one uses, and small errors in camera technique (e.g. vibration) become more noticeable.
Physically larger sensor arrays automatically reduce the need to magnify for a given output size. One needs to strike a balance between sensor array size and sensel pitch. Each choice has its pros and cons.
FWIW, and we're getting OT here, but I want to mention that while I understand the theory behind this DR comment, in practice I am not seeing that to be the case. In a recent test I did on the P25+ (9 micron sensels), P45+ (6.8 micron sensels) and P65+ (6 micron sensels), each step smaller in sensel size netted a 1/3 stop GAIN in DR.
Physically larger sensor arrays automatically reduce the need to magnify for a given output size.
But what I'm trying to get to here is a direct, simple formula for determining maximum print size from linear pixel count -- regardless of array size or sensel diameter -- for any given sensor. My main hurdle is that as print sizes increase, viewing distance also increases, and thus we theoretically should be able to get away with larger and larger prints. My initial gut tells me this should be a relatively linear relationship, but as I get into it I am thinking it may be inverse geometric...
Hi, Jack,
An issue here is the definition of DR. If one is using a noise-based definition, then it may well be that, "other things being equal", a smaller sensel will produce a smaller DR.
But we aren't always using a noise-based definition of DR -- in fact, in many cases we have no idea what the underlying definition is.
For example, in the three cases you cited, how did you get the indication of DR, and do you know what definition the determination was based on?
Best regards,
Doug
Hi Jack,
Well, the simple rule-of-thumb approach would be: if it prints well at 8x10 inches, it'll print well at larger sizes, as long as the viewing distance increases proportionally. Or, if we add a little math, we can state our resolution requirements for the output (e.g. in the form of PPI), and relax the requirements when we view from a greater distance. Viewing from twice the distance would allow us to use half of the required resolution.
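That rule of thumb amounts to a one-line formula (a sketch; the 360 PPI and 8" baseline are just illustrative numbers, not figures from the post):

```python
def required_ppi(base_ppi, base_distance_in, viewing_distance_in):
    """Required print resolution scales inversely with viewing distance."""
    return base_ppi * base_distance_in / viewing_distance_in

print(required_ppi(360, 8, 16))  # 180.0 -- half the PPI at twice the distance
```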
The higher resolution print allows us to walk in and see more detail. That's not possible with prints matching the satisfactory 8x10 resolution, just proportionally enlarged. Even moving one's head or leaning forward, the higher resolution print will show itself. So I'd say that one needs to double or quadruple the resolution for all nearby pictures in a room where a 16x20 might be shown. That way as one gets nearer, one finds more. That follows natural behavior.
For billboards, that's not needed. But for massive posters used in a display booth, where one is quite close, the pixel density of the 8x10 is needed over the whole area!
Asher
For a Monet or Van Gogh masterpiece, it's the paint-stroke texture that becomes the extra wonderful detail that's revealed as we get closer. Photographs don't have that unique quality.
That's exactly where I was going Bart!
So if we assume it prints well at 8x10 and critical (8"?) viewing distance, then it should have *identical* appearance at 16x20 and 16" viewing distance; 32x40 and 32" viewing distance and 64x80 at a 64" viewing distance -- is that a sound assumption? (This is where I worry it is an inverse geometric relationship, not a linear one.)
If so, then we soon hit a defensible argument that once we hit a certain level of linear pixel density at the print level, anything more is simply irrelevant for 'normal' viewing purposes, correct?
For example, if we assume our example above is correct (which it likely isn't, but I am sure we can get it there), and we assume 360 PPI in an 8x10 print viewed at 8" is greater than the unaided human eye can discriminate (which it probably is), we'd only ever need a native 360 PPI in the 8x10 print -- in which case a sensor that delivers 2880x3600 pixel resolution -- a measly 10.5 MP -- is all we'd ever need, and anything more is virtually useless... Is that correct?
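That arithmetic can be checked with a quick sketch (Python, using the 8x10 at 360 PPI figures from the paragraph above):

```python
def pixels_needed(width_in, height_in, ppi):
    """Pixel dimensions and megapixel count for a print at a given PPI."""
    w, h = int(width_in * ppi), int(height_in * ppi)
    return w, h, w * h / 1e6  # megapixels

print(pixels_needed(8, 10, 360))  # (2880, 3600, 10.368)
```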
So my question relates more to how we calculate what our minimum acceptable print size and resolution is, and then extrapolate from there?