High F-Stops on DSLRs

Bill Miller

New member
Asher, here you go. This was shot at f/22, 1/80th, ISO 400, and here is a 100% crop:

3A1W0077_spcl.jpg


3A1W0077_1.jpg
 

Asher Kelman

OPF Owner/Editor-in-Chief
Hi Bill,

Thanks for being sporting and trying to address this issue. However, to do it justice, one needs to look at detail in a particular range for different-sized photosites. Your camera is a 1Ds. We'd need an object with resolvable detail, not smooth surfaces, and compare f/5.6, f/8, f/11, and so on with f/22.

Likely, with critical detail, you'd notice degradation. My guess is that the 1Ds is good to f/11 and not likely beyond. I can imagine you want greater DOF, but there's a trade-off.

Where the lens is limiting, you might not see any difference.

Asher
 

Doug Kerr

Well-known member
Just to help get a general idea of what to expect in contemplating the impact of diffraction effects on resolution, we can follow this trail:

The Rayleigh limit is an arbitrary criterion we may use as a premise for quantifying the resolution of a system limited solely by diffraction effects.

We typically find that the resolution limit of a digital camera based on pixel pitch broadly follows Kell's rule, treating the sensel pitch as the pixel pitch. (This is resolution given as spacing, not frequency.)

An interesting situation is where the resolution limit suggested by diffraction effects under the Rayleigh criterion is the same size as the resolution limit suggested by pixel pitch itself under Kell's rule. (We might, somewhat arbitrarily, take that as the point at which the diffraction effect on resolution becomes "important" relative to pixel-pitch-limited resolution.)

If for convenience, we treat the Kell factor, K, as being about 0.75, and assume a wavelength of 550 nm, then that situation of equivalence occurs when the f/number of the lens is approximately numerically equal to twice the pixel pitch in micrometers (microns).
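
To put Doug's rule of thumb in concrete terms, here is a small Python sketch (my own illustration, not Doug's code) that solves for the equivalence point he describes: the Rayleigh (Airy-disk-radius) resolution equal to the Kell-limited pixel resolution, using the 0.75 Kell factor and 550 nm wavelength stated above. The 6.4 micron pitch is the 1Ds Mark III figure quoted later in the thread; the 8.45 micron value is my assumption for the D3.

```python
# Sketch of Doug's equivalence point: diffraction-limited resolution
# (Rayleigh: Airy-disk radius = 1.22 * wavelength * N) equal to the
# pixel-pitch-limited resolution (Kell's rule: pitch / K).

WAVELENGTH_UM = 0.550  # 550 nm expressed in micrometers
KELL_FACTOR = 0.75     # typical Kell factor K

def equivalence_f_number(pixel_pitch_um: float) -> float:
    """f/number at which the Airy-disk radius equals the Kell-limited resolution."""
    # 1.22 * lambda * N = pitch / K  ->  N = pitch / (1.22 * lambda * K)
    return pixel_pitch_um / (1.22 * WAVELENGTH_UM * KELL_FACTOR)

# 1 / (1.22 * 0.550 * 0.75) ~= 2.0, hence "N is roughly twice the pitch in microns"
for pitch in (6.4, 8.45):  # 1Ds Mark III per the thread; ~D3 pitch as an outside figure
    print(f"pitch {pitch} um -> equivalence at about f/{equivalence_f_number(pitch):.1f}")
```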
 
Thank you for easing my mind and clearing things up, Doug. For the past couple of months now I was afraid I had made a terrible mistake buying my D3. At that time, silly me, I never once even considered the Rayleigh criterion or Kell's rule. Can you believe that? I had a slight, hazy remembrance of these phrases but thought perhaps they were only things I had heard on Star Trek back in the '60s. Now that you have explained this so succinctly and jogged my memory a bit, I will rest easy tonight knowing that, as far as this topic is concerned anyway, I could easily have made a much worse purchasing decision than getting that D3. :)
James Newman

PS - I really do enjoy "trying" to read and understand some of your work. I must admit though that I do it purely for the sport. I like seeing if I can still use those portions of my brain required to decipher and understand what it is you are telling me.
 

Bill Miller

New member
Asher, it really does not matter what anyone's opinion or scientific reasoning is. The bottom line is the quality of the photograph. As you know, I look for sharp, not the fuzzy feeling you like from your antique LF lenses. As for f/11 being the best, tell that to the many owners of the 1Ds series and to Canon. What do you feel the new Nikon's max f-stop is?
 

Jack_Flesher

New member
If for convenience, we treat the Kell factor, K, as being about 0.75, and assume a wavelength of 550 nm, then that situation of equivalence occurs when the f/number of the lens is approximately numerically equal to twice the pixel pitch in micrometers (microns).

A tangential question: Assuming this is a reasonable approximation of where diffraction begins to rear its head at the sensor -- and I agree it is, though in many cases one can add another full stop with little deleterious effect on final image quality -- do we still then need to factor in the sum variables of sensor size, output magnification and viewing distance to determine maximum printable sizes? Or can we derive an equally simple formula for calculating a standard pixel to print-DPI density relative to viewing distance?

Thanks,
 

Asher Kelman

OPF Owner/Editor-in-Chief
Thank you for easing my mind and clearing things up, Doug. For the past couple of months now I was afraid I had made a terrible mistake buying my D3. At that time, silly me, I never once even considered the Rayleigh criterion or Kell's rule. Can you believe that? I had a slight, hazy remembrance of these phrases but thought perhaps they were only things I had heard on Star Trek back in the '60s. Now that you have explained this so succinctly and jogged my memory a bit, I will rest easy tonight knowing that, as far as this topic is concerned anyway, I could easily have made a much worse purchasing decision than getting that D3. :)
James Newman

PS - I really do enjoy "trying" to read and understand some of your work. I must admit though that I do it purely for the sport. I like seeing if I can still use those portions of my brain required to decipher and understand what it is you are telling me.

Hi James,

For sure it does sound like mumbo-jumbo, and I'm relieved you haven't tossed your D3 out with the Christmas dinner leftovers!

I'm glad you were too imbibed to dump the camera, 'cause the D3 and D700 were for me the best news in 2008 for most working photographers! The Nikonians have in that pair the best first and backup camera for almost any professional work, short of medium format or film. Canon shooters have seen a flame put up the behind of Canon Japan, so their engineers and marketing guys cannot just sit on their butts and diss us with less than the best focus in the 5D and 5DII.

Obviously, what's important is only where something makes a noticeable difference in the print or screen image. I'm guessing that happens with the D3 beyond f/12 and with the 1DsIII beyond f/8-11. I'd be very surprised if the diffraction at f/22 doesn't show up in such images!

All these modern cameras will smear fine detail at some f stop for a print at some magnification and viewing distance. Instead of the mathematics, we need the limits for practical conditions.

Asher
 

Asher Kelman

OPF Owner/Editor-in-Chief
Doug, Jack and Bart and anyone else that smart!

Can you give an example like this: at a viewing distance of 25 cm (10"), looking at 8x10 and 20x24 prints of a wedding gown and detailed veil, where would pixel size and f-stop degrade the detail on the veil, the fine shading in the silk and brocade of a bride's dress, and her dark or blonde fine hair?

Asher
 

Jack_Flesher

New member
Doug, Jack and Bart and anyone else that smart!

Can you give an example like this: at a viewing distance of 25 cm (10"), looking at 8x10 and 20x24 prints of a wedding gown and detailed veil, where would pixel size and f-stop degrade the detail on the veil, the fine shading in the silk and brocade of a bride's dress, and her dark or blonde fine hair?

Asher

Hi Asher:

We can, by using legacy printing formulas with commonly accepted CoC assumptions, but the hard fact is that 1) CoC assumptions are variables, and as such not pre-agreed standards, and 2) digital has to a large degree turned pixel pitch into a de facto CoC -- and while that's probably not really germane to print output, it is when we view digital files onscreen at 100%...

Thus my question is for those with better-honed mental abilities than mine: help me get my arms around the possibility of an equally simple conversion formula for maximum print size based on a given pixel density, or tell me whether that is even a viable concept.
 

Mike Shimwell

New member
I think Jack's absolutely right about pixel pitch replacing COC, as good prints show softening much earlier now than when shooting 35mm film in my experience. Having said that, a 1Ds3 gives an 18 by 12 at 300ppi, so the degree of softening at normal viewing distances or in small prints is very limited.

If I find time I'll have a play with this to put some meat on my current thoughts, which are based solely on current experience.

Mike
 

Bill Miller

New member
Lace @ f/18

Asher, you wanted fabric and lace - f/18, 1/200, ISO 200, 1Ds

Exif got wiped out in conversion

100% crop, Elinchrom strobes, converted from RAW and cropped, no sharpening in PS. There is a little loss in conversion to JPEG. This is still a 500 KB file at 100 dpi.

3A1W0061crop.jpg
 

Asher Kelman

OPF Owner/Editor-in-Chief
Asher, you wanted fabric and lace - f/18, 1/200, ISO 200, 1Ds

Exif got wiped out in conversion

100% crop, Elinchrom strobes, converted from RAW and cropped, no sharpening in PS. There is a little loss in conversion to JPEG. This is still a 500 KB file at 100 dpi.

3A1W0061crop.jpg
Great practical approach.

This is a 1Ds Mark I at 1/200 sec with the 24-70 Canon L lens, and this is a crop from the lower right of the original head-and-torso photograph. The f/18 aperture does allow pretty good imaging, but there's likely more resolution and fine detail there to be seen. Something is wrong, though: her left side, the right of this cutout, is not in sharp focus, and for this lens and aperture, at 3 meters and 48mm focal length, the depth of field should be roughly 6 ft to 100 ft! Maybe she moved, or the lens optics are not centered.

In any case, this looks good, but compared to what? We'd need to see it at f/5.6 too!

Thanks for trying.

Asher
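
As an aside for readers who want to check depth-of-field figures like the one above, here is a rough Python sketch of the standard hyperfocal-distance approximation. The 0.03 mm circle of confusion is the conventional full-frame assumption, not a value taken from this thread, and the resulting limits move around a great deal with that choice (a pixel-pitch-based CoC, as discussed later, gives a much shallower zone).

```python
# Rough depth-of-field check using the standard hyperfocal approximation.
# The 0.03 mm CoC is the conventional 35mm-format assumption, not a figure
# from this thread; tighter (pixel-pitch-based) CoC values shrink the zone.

def dof_limits(focal_mm: float, f_number: float, subject_m: float, coc_mm: float = 0.03):
    """Return (near, far) limits of acceptable sharpness, in meters."""
    s = subject_m * 1000.0                              # subject distance, mm
    h = focal_mm ** 2 / (f_number * coc_mm) + focal_mm  # hyperfocal distance, mm
    near = s * (h - focal_mm) / (h + s - 2 * focal_mm)
    far = float("inf") if s >= h else s * (h - focal_mm) / (h - s)
    return near / 1000.0, far / 1000.0

near, far = dof_limits(focal_mm=48, f_number=18, subject_m=3)
print(f"48 mm at f/18, subject at 3 m: roughly {near:.1f} m to {far:.1f} m")
```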
 
If for convenience, we treat the Kell factor, K, as being about 0.75, and assume a wavelength of 550 nm, then that situation of equivalence occurs when the f/number of the lens is approximately numerically equal to twice the pixel pitch in micrometers (microns).

Hi Doug,

I like the simplification, and it is even similar to what I've been 'preaching'.
My method of simplification is slightly more difficult, although it boils down to 1.5x the sensel pitch being the onset of visible diffraction. How far one is willing to lower the standard beyond that is up to the individual.

My calculation of that onset of diffraction is based on the principle of diffraction with a circular aperture (ideal case), and empirical evidence that 'happens to be' consistent with the theory.
The reason I mention my specific method of deriving that boundary lies in its usability in DOF calculations. When the COC in such calculations is taken as a diffraction spot diameter of 1.5x the sensel pitch, then one also has the most critical criterion for DOF and hyperfocal setups that's practical. Anything more restrictive won't show up as quality at any magnification one might contemplate.

The calculation for the diffraction spot diameter (first zero) is 2.44 x wavelength x f-number, which for 555 nm (= 0.555 micron) equates to 1.354 x f-number; when that exceeds 1.5x the sensel pitch, it will visibly deteriorate the intrinsic image quality. Your 2x factor would represent a (somewhat) recoverable level of degradation, depending on one's criteria for quality and the recovery algorithm used.

Bart
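
A minimal sketch of the arithmetic Bart describes (my own illustration, not his code): the first-zero Airy-disk diameter, 2.44 x wavelength x f-number, compared against 1.5x the sensel pitch. The 6.4 micron pitch used in the example is the 1Ds Mark III figure that comes up later in the thread.

```python
# Bart's visibility criterion, as described above: diffraction becomes visible
# once the Airy-disk first-zero diameter (2.44 * wavelength * N) exceeds
# 1.5x the sensel pitch.

WAVELENGTH_UM = 0.555  # 555 nm in micrometers

def airy_diameter_um(f_number: float) -> float:
    """Airy-disk diameter to the first zero, in micrometers (~1.354 * N at 555 nm)."""
    return 2.44 * WAVELENGTH_UM * f_number

def widest_limited_f_number(sensel_pitch_um: float, pitch_factor: float = 1.5) -> float:
    """f/number at which the Airy diameter reaches pitch_factor * sensel pitch."""
    return pitch_factor * sensel_pitch_um / (2.44 * WAVELENGTH_UM)

pitch = 6.4  # microns, e.g. the 1Ds Mark III figure used later in the thread
print(f"Airy diameter at f/11: {airy_diameter_um(11):.1f} um")
print(f"1.5x-pitch limit for {pitch} um sensels: about f/{widest_limited_f_number(pitch):.1f}")
```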
 
A tangential question: Assuming this is a reasonable approximation of where diffraction begins to rear its head at the sensor -- and I agree it is, though in many cases one can add another full stop with little deleterious effect on final image quality -- do we still then need to factor in the sum variables of sensor size, output magnification and viewing distance to determine maximum printable sizes? Or can we derive an equally simple formula for calculating a standard pixel to print-DPI density relative to viewing distance?

Hi Jack,

This is an excellent question, not a tangential one! Obviously one could debate about the actual sensel pitch factor to use. I say 1.5x the sensel pitch before deterioration becomes visible, Doug suggests 2x the sensel pitch before it becomes significant enough to be still recoverable.

Of course, viewing at a certain viewing distance adds another variable (and we know that, at least, photographers will inspect images at closer distances than 'normal' human beings ;-) ).

There are other considerations as well. As Norman Koren explains in his excellent web pages, we humans tend to respond more strongly to some spatial frequencies than others. Be that as it may, there is a technical lower limit! It's the limit of resolving power at the viewing distance. Even if the limiting resolving power of human visual acuity is somewhat debatable (especially if one considers Vernier acuity), one could adopt generally accepted limits. In general, the limit of 5-8 line pairs/mm in output is an accepted criterion (it'll be hard to produce an image that 'looks' sharper than that). So when contemplating the optimal resolution of a DSLR image (at 1.5x the sensel pitch), there is a magnification factor at which the image becomes a compromise at close inspection.

If we accept the very critical criterion of 1.5x the sensel pitch for uncompromised diffraction resolution, then the magnification factor of the sensor array dimensions becomes the limiting factor for uncompromised viewing. For example, a sensel pitch such as the Canon 1Ds Mark III's 6.4 microns x 1.5 suggests an uncompromised output size of (0.0064 mm x 1.5 = 0.0096 mm, therefore a magnification factor of) 21 to 13 times the sensor array size before the resolution exceeds visual acuity (at a normal viewing distance of 8-10 inches or 20-30 centimetres, at normal illumination levels). Anything larger will only fail in a direct comparison between similar capture samples (still assuming an uncompromised capture, with an aperture no narrower than one producing a diffraction diameter of 1.5x the sensel pitch).

So, if one wants to simplify even further, a 13-21x magnification of the sensor array size will result in uncompromised resolution (at normal reading distances), assuming an optimal aperture (one producing a diffraction spot diameter of at most 1.5x the sensel pitch). Viewing at larger distances will proportionally relax the magnification criterion.

Bart
 
Doug, Jack and Bart and anyone else that smart!

Can you give an example like this: at a viewing distance of 25 cm (10"), looking at 8x10 and 20x24 prints of a wedding gown and detailed veil, where would pixel size and f-stop degrade the detail on the veil, the fine shading in the silk and brocade of a bride's dress, and her dark or blonde fine hair?

Hi Asher,

Given my earlier, more detailed reactions, it would require a roughly 13-21x magnification of the sensor array size to introduce (diffraction) unsharpness of detail (assuming no capture apertures narrower than the 1.5x sensel pitch diffraction criterion). That's to put it simply, but there are other factors that one could exploit, such as enhancing the specific spatial frequencies that are important for the human visual system (HVS). At larger output sizes, and correspondingly larger assumed observation distances, the magnification factor increases proportionally.

Bart
 
I think Jack's absolutely right about pixel pitch replacing COC, as good prints show softening much earlier now than when shooting 35mm film in my experience.

I agree, on both theoretical and practical grounds. I use the 1.5x sensel pitch for the DOF/Hyperfocal distance COC criterion. It serves me very well, much better than the old criteria which (by the way) were set for 13x18cm (5x7 inch) prints, 8x10 inch at best.

Bart
 

Doug Kerr

Well-known member
Hi, Bart,

The calculation for the diffraction spot diameter (first zero) is 2.44 x wavelength x f-number, which for 555 nm (= 0.555 micron) equates to 1.354 x f-number; when that exceeds 1.5x the sensel pitch, it will visibly deteriorate the intrinsic image quality. Your 2x factor would represent a (somewhat) recoverable level of degradation, depending on one's criteria for quality and the recovery algorithm used.

My approach is based on the rather arbitrary notion that the impact of diffraction becomes important when the resolution determined by diffraction is equal to the resolution determined by the sensel pitch.

Also arbitrarily, we can estimate the resolution determined by diffraction using Rayleigh's criterion, under which the resolution (on a distance, not frequency, basis) is the radius of the Airy disk.

The radius of the Airy disk, r, for 555 nm wavelength is given by:

r=0.677N in micrometers (microns)

where N is the f/number. (This is the relationship you mentioned.)

We can estimate the resolution, R, determined by the sensel pitch, p, this way:

R=p/K

where K is the Kell factor, for which a value of 0.75 is reasonable.

Thus, for K=0.75:

R=1.33 p

As we said above, we are interested in the situation in which r=R, which leads to:

0.677N=1.33p

Solving for N, we get very nearly:

N=2p

for p in micrometers (microns).

I have built no margin into this, but of course the use of the two criteria is arbitrary, and if we think that another pair of criteria is the most applicable, then there is either positive or negative margin in the relationship.

Best regards,

Doug
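
To see how the two thresholds in this exchange compare numerically, here is a short sketch (my own, using only the assumptions stated in the two posts): Bart's visible-onset limit works out to roughly 1.1x the pitch, Doug's equivalence point to roughly 2x the pitch.

```python
# Side-by-side of the two criteria discussed above, for a given sensel pitch:
#   Bart: Airy first-zero diameter (2.44 * 0.555 * N) <= 1.5 * pitch -> N <= ~1.11 * pitch
#   Doug: Airy radius (1.22 * 0.555 * N) <= pitch / 0.75 (Kell)      -> N <= ~2.0  * pitch

def bart_limit(pitch_um: float) -> float:
    return 1.5 * pitch_um / (2.44 * 0.555)

def doug_limit(pitch_um: float) -> float:
    return pitch_um / (1.22 * 0.555 * 0.75)

pitch = 6.4  # microns (the 1Ds Mark III figure quoted in this thread)
print(f"{pitch} um pitch: Bart ~f/{bart_limit(pitch):.1f}, Doug ~f/{doug_limit(pitch):.1f}")
```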
 

Jack_Flesher

New member
Hi Jack,

This is an excellent question, not a tangential one! Obviously one could debate about the actual sensel pitch factor to use. I say 1.5x the sensel pitch before deterioration becomes visible, Doug suggests 2x the sensel pitch before it becomes significant enough to be still recoverable.

Of course, viewing at a certain viewing distance adds another variable (and we know that, at least, photographers will inspect images at closer distances than 'normal' human beings ;-) ).

There are other considerations as well. As Norman Koren explains in his excellent web pages, we humans tend to respond more strongly to some spatial frequencies than others. Be that as it may, there is a technical lower limit! It's the limit of resolving power at the viewing distance. Even if the limiting resolving power of human visual acuity is somewhat debatable (especially if one considers Vernier acuity), one could adopt generally accepted limits. In general, the limit of 5-8 line pairs/mm in output is an accepted criterion (it'll be hard to produce an image that 'looks' sharper than that). So when contemplating the optimal resolution of a DSLR image (at 1.5x the sensel pitch), there is a magnification factor at which the image becomes a compromise at close inspection.

If we accept the very critical criterion of 1.5x the sensel pitch for uncompromised diffraction resolution, then the magnification factor of the sensor array dimensions becomes the limiting factor for uncompromised viewing. For example, a sensel pitch such as the Canon 1Ds Mark III's 6.4 microns x 1.5 suggests an uncompromised output size of (0.0064 mm x 1.5 = 0.0096 mm, therefore a magnification factor of) 21 to 13 times the sensor array size before the resolution exceeds visual acuity (at a normal viewing distance of 8-10 inches or 20-30 centimetres, at normal illumination levels). Anything larger will only fail in a direct comparison between similar capture samples (still assuming an uncompromised capture, with an aperture no narrower than one producing a diffraction diameter of 1.5x the sensel pitch).

So, if one wants to simplify even further, a 13-21x magnification of the sensor array size will result in uncompromised resolution (at normal reading distances), assuming an optimal aperture (one producing a diffraction spot diameter of at most 1.5x the sensel pitch). Viewing at larger distances will proportionally relax the magnification criterion.

Bart

Thanks Bart!

This helps me get my arms around the issue and makes good sense. But I still think pixel pitch needs to be the main variable we key off of instead of sensor array itself, as sensel site diameter is now a more significant variable with a nominal range in current imaging systems of 6 to 9 microns, no?

IOW assuming optimal capture aperture and near perfect lens performance, wouldn't a 6 micron full frame (35mm) sensor deliver a final image 50% larger with comparable IQ to a similar 9 micron sensor?
 
IOW assuming optimal capture aperture and near perfect lens performance, wouldn't a 6 micron full frame (35mm) sensor deliver a final image 50% larger with comparable IQ to a similar 9 micron sensor?

Yes, from a strictly resolution driven approach, that is correct. However, a 6 micron sensel pitch also means that for pixel perfect quality one is more limited in choosing an aperture to suit the subject because diffraction might spoil the party. Smaller sensels will also change the shape of the MTF curve of an optical system, so one may need to adjust the post processing.

Also, smaller sensels result in a lower dynamic range, which is another factor in image quality. Smaller sensels also place higher demands on the lenses one uses, and small errors in camera technique (e.g. vibration) get more noticeable.

Physically larger sensor arrays automatically reduce the need to magnify for a given output size. One needs to strike a balance between sensor array size and sensel pitch. Each choice has its pros and cons.

Bart
 
Oops, I made a mistake

For example, a sensel pitch such as the Canon 1Ds Mark III's 6.4 microns x 1.5 suggests an uncompromised output size of (0.0064 mm x 1.5 = 0.0096 mm, therefore a magnification factor of) 21 to 13 times the sensor array size before the resolution exceeds visual acuity (at a normal viewing distance of 8-10 inches or 20-30 centimetres, at normal illumination levels). Anything larger will only fail in a direct comparison between similar capture samples (still assuming an uncompromised capture, with an aperture no narrower than one producing a diffraction diameter of 1.5x the sensel pitch).

It's those pesky line pairs again (and it was late when I wrote it). Visual acuity will only allow half of the magnification I calculated above. So a 1Ds3 will allow an uncompromised output size on the order of 156 x 234 mm up to 250 x 375 mm, depending on one's visual acuity. That's a magnification of 6.5 to 10.4 times.

Bart
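
Here is a small sketch of the corrected arithmetic (my own illustration; the 24 x 36 mm array size for the 1Ds Mark III is an assumption, while the 6.4 micron pitch and 5-8 lp/mm acuity range are the figures Bart uses above):

```python
# Bart's corrected magnification limit: the on-sensor resolution element is
# 1.5 * sensel pitch; a print stays "uncompromised" while that element, after
# magnification, remains at or below the visual-acuity limit of 5-8 line
# pairs/mm (one line = 1/(2 * lp_per_mm) mm on the print).

SENSOR_MM = (24.0, 36.0)  # assumed 1Ds Mark III array size (height, width)
PITCH_MM = 0.0064         # 6.4 micron sensel pitch, from the thread

element_mm = 1.5 * PITCH_MM                   # 0.0096 mm on the sensor
for lp_per_mm in (8, 5):                      # critical and relaxed acuity
    print_element_mm = 1.0 / (2 * lp_per_mm)  # one line width on the print
    mag = print_element_mm / element_mm
    h, w = (round(mag * d) for d in SENSOR_MM)
    print(f"{lp_per_mm} lp/mm -> ~{mag:.1f}x magnification -> about {h} x {w} mm")
```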
 

Jack_Flesher

New member
Hi Bart:

I forget those pesky pairs pretty regularly myself, so you'll get no grief from me on that :D

You wrote,

Also, smaller sensels result in a lower dynamic range, which is another factor in image quality. Smaller sensels also place higher demands on the lenses one uses, and small errors in camera technique (e.g. vibration) get more noticeable.

FWIW, and we're getting OT here, but I want to mention that while I understand the theory behind this DR comment, in practice I am not seeing that to be the case. In a recent test I did on the P25+ (9u sensel), P45+ (6.8u sensel) and P65+ (6u sensel), each step smaller in sensel pitch netted a 1/3 stop GAIN in DR... Obviously other factors are in play, such as processing technologies, but the fact remains that all those are current products and the smaller sensel site diameters had greater DR.

For sure, smaller sensels do place higher demands on lenses and capture technique.

Physically larger sensor arrays automatically reduce the need to magnify for a given output size. One needs to strike a balance between sensor array size and sensel pitch. Each choice has its pros and cons.

Certainly. But what I'm trying to get to here is a direct, simple formula for determining maximum print size from linear pixel count -- regardless of array size or sensel diameter -- for any given sensor. My main hurdle is that as print sizes increase, viewing distance also increases, and thus we theoretically should be able to get away with larger and larger prints. My initial gut tells me this should be a relatively linear relationship, but as I get into it I am thinking it may be inverse geometric...
 

Doug Kerr

Well-known member
Hi, Jack,

FWIW, and we're getting OT here, but I want to mention that while I understand the theory behind this DR comment, in practice I am not seeing that to be the case. In a recent test I did on the P25+ (9u sensel), P45+ (6.8u sensel) and P65+ (6u sensel), each step smaller in sensel pitch netted a 1/3 stop GAIN in DR.

An issue here is the definition of DR. If one is using a noise-based definition, then it may well be that, "other things being equal", a smaller sensel will produce a smaller DR.

But we aren't always using a noise-based definition of DR - in fact, in many cases, we have no idea what the underlying definition is.

For example, in the three cases you used, how did you get the indication of DR, and do you know what definition the determination was based on?

Best regards,

Doug
 

Doug Kerr

Well-known member
Hi, Bart,

Physically larger sensor arrays automatically reduce the need to magnify for a given output size.

I have to agree that this is true, but I don't know what it means.

Admittedly, if we have a 36 mm x 24 mm sensor array and want to produce a 6" x 4" print, the image-to-print magnification is less than if we have a 22.5 mm x 15 mm sensor array and want to produce a 6" x 4" print, but I don't quite pick up on the significance of that numerical difference.

It is not, for example, as if we have to have a taller column on our enlarger to do that.

Best regards,

Doug
 
But what I'm trying to get to here is a direct, simple formula for determining maximum print size from linear pixel count -- regardless of array size or sensel diameter -- for any given sensor. My main hurdle is that as print sizes increase, viewing distance also increases, and thus we theoretically should be able to get away with larger and larger prints. My initial gut tells me this should be a relatively linear relationship, but as I get into it I am thinking it may be inverse geometric...

Hi Jack,

Well, the simple rule-of-thumb approach would be: if it prints well at 8x10 inches, it'll print well at larger sizes, as long as the viewing distance increases proportionally. Or, if we add a little math, we can state our resolution requirements for the output (e.g. in the form of PPI) and relax the requirements when we view from a greater distance. Viewing from twice the distance would allow us to use half of the required resolution. This, however, doesn't account for the small-aperture losses that are the topic of this thread.

However, there are more things going on, unfortunately. Norman Koren tried to tackle the issue in a more scientific fashion at his website, and it involves the MTF of the capture and the contrast sensitivity of human vision. There the linear relationship is lost.

Bart
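
A tiny sketch of the rule of thumb Bart states above (required print resolution falling in proportion to viewing distance). The 360 PPI at 10 inches baseline is just an illustrative assumption, not a figure from his post:

```python
# Rule of thumb from the post above: viewing from twice the distance lets you
# get away with half the print resolution.

BASE_PPI = 360         # assumed resolution that satisfies us at the base distance
BASE_DISTANCE_IN = 10  # assumed base viewing distance, inches

def required_ppi(viewing_distance_in: float) -> float:
    return BASE_PPI * BASE_DISTANCE_IN / viewing_distance_in

for d in (10, 20, 40):
    print(f"viewed from {d} in: about {required_ppi(d):.0f} PPI needed")
```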
 

Jack_Flesher

New member
Hi, Jack,



An issue here is the definition of DR. If one is using a noise-based definition, then it may well be that, "other things being equal", a smaller sensel will produce a smaller DR.

But we aren't always using a noise-based definition of DR - in fact, in many cases, we have no idea what the underlying definition is.

For example, in the three cases you used, how did you get the indication of DR, and do you know what definition the determination was based on?

Best regards,

Doug

AFAIK, *all* definitions of DR require a stated noise limit as well as a contrast limit, but like you, I have no idea what the standard should be. Regarding my test above, it was very informal and more a seat-of-my-pants evaluation than a true test, so I will retract it as such :)

What we did compare was the same shot at the same base ISO in the same lighting from the same position, using the same camera and lens at the same shutter speed and aperture. We then accounted for a slight difference in base exposure from the P65+ (it over-exposed by .2 stop relative to the other two backs) and compared the images. After the exposure was adjusted, whites and the middle grays were all identical, and in the darkest regions noise was virtually absent (so not even a consideration for us), yet we could clearly see and measure added separation in the darkest grays as we went to the *smaller* 6.8u and 6u sensels. However, what was also notable was that interpixel contrast was clearly (and respectively) lower in the smaller-sensel backs, and to get the 'looks' to match more precisely, we had to add local contrast (clarity) back to the 6u and 6.8u backs to match the 9u back. Once that was done, the respective gains in DR were lost, and for all intents and purposes the 6u and 6.8u backs became identical to the 9u back. IOW, DR was a wash when all global aspects of the images were equalized.

Regardless, I can only draw from my experience above that going to a smaller sensel size did not negatively impact DR.
 

Jack_Flesher

New member
Hi Jack,

Well, the simple rule-of-thumb approach would be: if it prints well at 8x10 inches, it'll print well at larger sizes, as long as the viewing distance increases proportionally. Or, if we add a little math, we can state our resolution requirements for the output (e.g. in the form of PPI) and relax the requirements when we view from a greater distance. Viewing from twice the distance would allow us to use half of the required resolution.

That's exactly where I was going Bart!

So if we assume it prints well at 8x10 and critical (8"?) viewing distance, then it should have *identical* appearance at 16x20 and 16" viewing distance; 32x40 and 32" viewing distance and 64x80 at a 64" viewing distance -- is that a sound assumption? (This is where I worry it is an inverse geometric relationship, not a linear one.)

If so, then we soon hit a defensible argument that once we hit a certain level of linear pixel density at the print level, anything more is simply irrelevant for 'normal' viewing purposes, correct? For example, if we assume our example above is correct (which it likely isn't, but I am sure we can get it there), and we assume 360PPI in an 8x10 print viewed at 8" is greater than the unaided human eye can discriminate (which it probably is), we'd only ever need a native 360PPI in the 8x10 print --- In which case a sensor that delivers 2880x3600 pixel resolution -- or a measly 10.5 MP, -- is all we'd ever need and anything more is virtually useless... Is that correct?

So my question relates more to how we calculate what our minimum acceptable print size and resolution is, then extrapolate from there?
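
For what it's worth, here is Jack's arithmetic spelled out as a sketch, under his own stated assumptions (360 PPI in an 8x10 viewed at 8 inches, with viewing distance scaling linearly with print size):

```python
# Jack's example: if 360 PPI in an 8x10 viewed at 8 inches already exceeds what
# the unaided eye can discriminate, and viewing distance grows in step with
# print size, the required pixel count never rises above the 8x10 requirement.

PPI_AT_BASE = 360
BASE_H_IN, BASE_W_IN, BASE_DIST_IN = 8, 10, 8

base_px = (BASE_H_IN * PPI_AT_BASE) * (BASE_W_IN * PPI_AT_BASE)
print(f"8x10 at {PPI_AT_BASE} PPI: {BASE_H_IN*PPI_AT_BASE} x {BASE_W_IN*PPI_AT_BASE} "
      f"pixels ~= {base_px/1e6:.1f} MP")

for scale in (2, 4, 8):  # 16x20 at 16 in, 32x40 at 32 in, 64x80 at 64 in
    ppi_needed = PPI_AT_BASE / scale  # required PPI falls as distance grows
    px = (BASE_H_IN * scale * ppi_needed) * (BASE_W_IN * scale * ppi_needed)
    print(f"{BASE_H_IN*scale}x{BASE_W_IN*scale} viewed at {BASE_DIST_IN*scale} in: "
          f"{ppi_needed:.0f} PPI -> {px/1e6:.1f} MP (same as the 8x10)")
```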
 

Asher Kelman

OPF Owner/Editor-in-Chief
We need more detail to be revealed as we approach!

That's exactly where I was going Bart!

So if we assume it prints well at 8x10 and critical (8"?) viewing distance, then it should have *identical* appearance at 16x20 and 16" viewing distance; 32x40 and 32" viewing distance and 64x80 at a 64" viewing distance -- is that a sound assumption? (This is where I worry it is an inverse geometric relationship, not a linear one.)
The higher resolution print allows us to walk in and see more detail. That's not possible with prints matching the satisfactory 8x10 resolution, just proportionally enlarged. Even moving one's head or leaning forward, the higher resolution print will show itself. So I'd say that one needs to double or quadruple the resolution for all nearby pictures in a room where a 16x20 might be shown. That way as one gets nearer, one finds more. That follows natural behavior.

For billboards, that's not needed. For massive posters used in a display booth where one is quite close, the pixel density of the 8x10 is needed over the whole area!

Asher

For a Monet or Van Gogh masterpiece, it's the paint-stroke texture that becomes the extra wonderful detail revealed as we get closer. Photographs don't have that unique quality.
 

Jack_Flesher

New member
The higher resolution print allows us to walk in and see more detail. That's not possible with prints matching the satisfactory 8x10 resolution, just proportionally enlarged. Even moving one's head or leaning forward, the higher resolution print will show itself. So I'd say that one needs to double or quadruple the resolution for all nearby pictures in a room where a 16x20 might be shown. That way as one gets nearer, one finds more. That follows natural behavior.

For billboards, that's not needed. For massive posters used in a display booth where one is quite close, the pixel density of the 8x10 is needed over the whole area!

Asher

For a Monet or Van Gogh masterpiece, it's the paint-stroke texture that becomes the extra wonderful detail revealed as we get closer. Photographs don't have that unique quality.

Excellent Asher! You've raised a very salient point -- that we don't have the brush-strokes of the masters to appreciate as we move in, and we do need something, so what we have at our disposal is more detail.

That said, do we thusly 'move in' to critical (8"?) viewing distance for *every* image regardless of size? If so, then maximum print size becomes directly dependent on pixel count. Or are we more inclined to move into larger prints in some non-linear fashion, and if so, at what size do we start keeping our distance so to speak? In which case we have a non-linear need for increased sensor resolution as print size increases...
 
That's exactly where I was going Bart!

So if we assume it prints well at 8x10 and critical (8"?) viewing distance, then it should have *identical* appearance at 16x20 and 16" viewing distance; 32x40 and 32" viewing distance and 64x80 at a 64" viewing distance -- is that a sound assumption? (This is where I worry it is an inverse geometric relationship, not a linear one.)

Yes, it's a simple linear one. The most common viewing distance is often assumed to be 10" or 12".

If so, then we soon hit a defensible argument that once we hit a certain level of linear pixel density at the print level, anything more is simply irrelevant for 'normal' viewing purposes, correct?

Yes, in the rule-of-thumb scenario that would be correct.

For example, if we assume our example above is correct (which it likely isn't, but I am sure we can get it there), and we assume 360PPI in an 8x10 print viewed at 8" is greater than the unaided human eye can discriminate (which it probably is), we'd only ever need a native 360PPI in the 8x10 print --- In which case a sensor that delivers 2880x3600 pixel resolution -- or a measly 10.5 MP, -- is all we'd ever need and anything more is virtually useless... Is that correct?

Yes, that's correct; however, it also assumes no cropping is ever needed and that there will be no possibility of viewing from closer by. As for the 360 PPI, there is enough debate about that alone to fill a thread. When viewed side by side, a 720 PPI image can look sharper, but that also depends on the subject matter (and paper surface).

So my question relates more to how we calculate what our minimum acceptable print size and resolution is, then extrapolate from there?

A common assumption is something like 8 x 12 inches viewed at 10 inches, and I would say @720PPI.

However, this is the simple approach, and simple is not universally correct. Different people have different visual acuity. We do not have uniform contrast sensitivity at all levels of detail, and people do perform close inspection. Image files differ in their MTF signature, and photo editing changes it. That's why I added the link to Norman Koren's SQF approach. It's not as handy as a rule of thumb, but it is more accurate, and it shows the gradual decline in quality as the size goes up. It's the quality level we want to accept that sets the bar.

Bart
 