Pre-planning Photography I: A look at "Output Quality"

Asher Kelman

OPF Owner/Editor-in-Chief
To give a bit more of a practical idea about the effect of diffraction on image quality, here is a small animation which shows the increase of DOF, but also the decrease of micro-detail contrast, and thus pixel resolution, at f/5.6 and narrower apertures:

DIFFvsDOF.gif


The example comes from another thread about the effects of diffraction in the context of photomicrography.

The exact tipping point where pixel quality starts to deteriorate depends on many factors, but sensel pitch (and thus sampling density), aperture, and magnification are the dominant parameters. Of course, when we do not need maximum output dimensions, we can down-sample the image and claim back some effective resolution.

Hope that helps.

Cheers,
Bart

Bart,

This is a beautiful and easy-to-comprehend visualization of the many complex changes that follow from progressively closing down the aperture.

Thanks! :)

Asher
 
I think that you are neglecting the effect of the Bayer array, which dictates that the final information is reconstructed from more than a single pixel.

Hi Jerome,

No, I'm not really neglecting it. The Bayer under-sampling of color has only a minor effect on luminance resolution. In fact, a few days ago I shot a few test images, in search of a super-resolution approach, that demonstrate this. Here is a crop from an image of my test chart, with the Nyquist frequency indicated at a 92-pixel diameter:

887.png

The luminance channel shows a bit of aliasing but otherwise almost fully resolved detail all the way up to the Nyquist frequency. There is some false-color artifacting that I didn't suppress, but that is simple to do. The undersampling of color is the main issue, not luminance resolution.

Cheers,
Bart
 

Jerome Marot

Well-known member
Jerome,

A fair question! Well, there are my own needs... and then there's my general observation of camera use by us all:

1. In the first instance, except for stitching panoramas, I love having more limited DOF and enjoy peripheral vignetting. But maximum detail retention in chosen portions is paramount.

2. In the latter instance, I just like to remind us all that sometimes the choice of tiny apertures can lead to surprising losses in an image's ability to reproduce fine detail with good contrast when one needs it for larger prints.

My references for this discussion are currently as follows:

1. Bart's free Depth of Field output quality planner tool addresses that handily, allowing one to choose the right camera and settings for end-use considerations.

2. A discussion of effective pixels versus physical pixels, based on Imatest calculations, especially Table 2, here

I hope this better clarifies the scope of my intent. :)


I am still confused. I believe that your questions, whatever they are, could be answered much more easily. You are wondering how detrimental diffraction at small apertures is? Just do a test. Take a subject with fine details, the camera with the pixel size you want to use and shoot the subject at various apertures. Try a few lenses. See when sharpness decreases. Use any aperture faster than that. That will answer your questions a lot better than Imatest calculations, which are made with questionable hypotheses.

Just a word of caution: do not shoot a macro subject as Bart did (unless you want to test for macro use, obviously). The aperture displayed in your camera at large reproduction ratio bears little relationship to the real geometric aperture.
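(As a rough sketch of why, using the usual bellows-factor approximation and assuming a more or less symmetric lens with a pupil magnification near 1, the effective f-number at reproduction ratio m is approximately

N_eff ≈ N × (1 + m)

so at 1:1 the working aperture is about twice the displayed f-number, i.e. roughly two stops of extra light loss and correspondingly more diffraction.)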
 

Asher Kelman

OPF Owner/Editor-in-Chief
I am still confused. I believe that your questions, whatever they are, could be answered much more easily. You are wondering how detrimental diffraction at small apertures is? Just do a test. Take a subject with fine details, the camera with the pixel size you want to use and shoot the subject at various apertures. Try a few lenses. See when sharpness decreases. Use any aperture faster than that.

Exactly what I reported I'm doing elsewhere here, Jerome. It follows from my motto, "Don't think - try!" So I've taken my best Canon 6D files and am printing out strips at the maximum magnification I want, using various film and inkjet output, and at 8x10, to see if I get the perceived image quality I want. Then I'll repeat my tests with scanned and projected 8x10 film. I then have to balance the time and cost of each result against the perceived quality of my work.

Just a word of caution: do not shoot a macro subject as Bart did (unless you want to test for macro use, obviously). The aperture displayed in your camera at large reproduction ratio bears little relationship to the real geometric aperture.

That's an excellent point. (I missed that key factor with Nicolas' food pictures above!) Magnification is taken into account by Bart's quality tool.

Asher
 
I don't understand how that image was obtained and what relationship there is between Nyquist and 92 pixels. I am sure you can enlighten me.

Hi Jerome, It is a shot of my resolution chart with my TS-E 90mm lens. Since the chart has 144 cycles around its circumference at any diameter, it produces a blurred center whose size is in direct relationship to the resolving capabilities of the camera system and Raw converter.

At a diameter of 92 pixels we have reached the Nyquist frequency (0.5 cycles/pixel); it is impossible to resolve finer detail in a single capture, and inside the 92-pixel diameter only aliasing is possible. The chart allows one to measure the limiting resolution of a camera system with simple means. It also offers slanted edges for oversampled resolution evaluation as specified by the ISO standard.
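For anyone who wants to check the arithmetic behind that 92-pixel figure, here is a tiny sketch (Python; the 144-cycle count comes from the chart description above, the rest is just geometry):

import math

cycles = 144    # cycles around the circumference, at any diameter
nyquist = 0.5   # cycles per pixel, the sampling limit

# The circumference at the limiting diameter must hold 144 cycles at
# 2 pixels per cycle, so pi * d = cycles / nyquist.
d = cycles / (nyquist * math.pi)
print(round(d, 1))  # ~91.7 pixels, i.e. the ~92 px diameter quoted above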

Cheers,
Bart
 

Doug Kerr

Well-known member
Hi, Bart,
At a diameter of 92 pixels we have reached the Nyquist frequency (0.5 cycles/pixel) . . .
That is such a handy relationship. [Yes, of course I have confirmed it!] With a circle at that diameter (as you have done in various presentations) one can easily see how the attained resolution (however we care to judge that) falls beneath that corresponding to the Nyquist limit.

Thanks.

Best regards,

Doug
 

Jerome Marot

Well-known member
Hi Jerome, It is a shot of my resolution chart with my TS-E 90mm lens. Since the chart has 144 cycles around its circumference at any diameter, it produces a blurred center whose size is in direct relationship to the resolving capabilities of the camera system and Raw converter.

At a diameter of 92 pixels we have reached the Nyquist frequency (0.5 cycles/pixel); it is impossible to resolve finer detail in a single capture, and inside the 92-pixel diameter only aliasing is possible. The chart allows one to measure the limiting resolution of a camera system with simple means. It also offers slanted edges for oversampled resolution evaluation as specified by the ISO standard.

So this is a real world test chart shot with a real camera and lens. The circle is linked to the pixel size of that particular camera (actually, the chart being a variation on the Siemens star and the pixel grid being square, I don't think that the limiting resolution should be a circle, but this is not important here). Now I understand.

How is that particular image a proof that diffraction is linked to pixel pitch and not to a value slightly bigger than pixel pitch due to the effects of the Bayer array? Please consider that an accepted value for the loss of resolution for the Bayer array on real life subjects (as opposed to black and white test pictures) is about 30% and not 0% as you are trying to show us here.

Totally unrelated to the discussion: I wonder why your picture exhibits a round artefact at the center, way below the Nyquist limit.
 

Doug Kerr

Well-known member
Hi, Jerome,
...I don't think that the limiting resolution should be a circle, but this is not important here)

Indeed. If we clear out CFA complications for the moment (in other words, imagine a monochrome sensor), the pixel pitch that determines the Nyquist frequency for detail variations along a diagonal axis is sqrt(2) times the "pixel pitch" we normally mention. (The greater the distance to the line, the greater is the Nyquist frequency it suggests, I believe.)

If we take that into account, it would seem that the distance to the limiting line should always be 46 times the actual pixel pitch, as applies to that direction. That would suggest that the limiting line should be a square.

But I don't seem to see results on Bart's example that seem to comport with that. So I may be barking up a wrong tree.

Best regards,

Doug
 

Asher Kelman

OPF Owner/Editor-in-Chief
Hi, Jerome,


Indeed. If we clear out CFA complications for the moment (in other words, imagine a monochrome sensor), the pixel pitch that determines the Nyquist frequency for detail variations along a diagonal axis is sqrt(2) times the "pixel pitch" we normally mention. (The greater the distance to the line, the greater is the Nyquist frequency it suggests, I believe.)

If we take that into account, it would seem that the distance to the limiting line should always be 46 times the actual pixel pitch, as applies to that direction. That would suggest that the limiting line should be a square.

But I don't seem to see results on Bart's example that seem to comport with that. So I may be barking up a wrong tree.

Doug,

Imagine that's the case, that a square would be needed, not a circle. How would one then make that Nyquist-limited square? Would there still be radial diagonals? What would happen at the diagonal corners? Would the perimeter all be Nyquist-limited, as with the circle?

Asher

BTW, forgive me, Nicolas and other well-grounded photographers. No, I haven't become a "space cadet", I'm just wondering what the tests we can do with Bart's free, easy-to-use MTF chart depend on!
 

Doug Kerr

Well-known member
Hi, Asher,

Imagine that's the case, that a square would be needed, not a circle. How would one then make that Nyquist-limited square?

Well, in fact, upon reflection, I realize that the Nyquist frequency along a 45° diagonal is typically sqrt(2) times the Nyquist frequency along the horizontal and vertical axes (not a lower value, as I had said recently). (Fuji used a "diagonally-oriented" sensor in some cameras to exploit this.)

And I'm not sure just how a graphical presentation of the Nyquist limit along different axes would properly reflect this. It might be a square in "diamond" orientation - I have to look at the math.

Ah, there was some discussion of this matter in Bart's original presentation of his test target (2011.01.15), in a comment to moi:

My test target will unambiguously show any 45° resolution advantage of images in a regularly sampled grid pattern, be it in print or in capture of the print. In fact, I already prepared the target to have a resolution of Sqrt(2) larger resolution in the diagonal dimension at its center (at the expense of some aliasing in the horizontal/vertical dimension). I applied some super-sampling in the calculation of the 144 cycle pattern for the finite resolution PPI grid.​

I'm not sure exactly what that means. Bart, could you elucidate for us?

Thanks.

Best regards,

Doug
 
So this is a real world test chart shot with a real camera and lens. The circle is linked to the pixel size of that particular camera (actually, the chart being a variation on the Siemens star and the pixel grid being square,...

Hi Jerome,

Indeed. And the beauty of it is that the target is insensitive to shooting distance; it gives the same results without requiring accurate magnification-factor calibration or a fixed shooting distance.

..., I don't think that the limiting resolution should be a circle, but this is not important here). Now I understand.

That would be correct for a CGI image with maximum amplitude, but this image has had an optical low-pass filter applied, i.e. lens + OLPF + microlenses and sensel aperture. When you downsample the original target image, exactly what happens depends on the down-sampling filter used, but one does indeed often get higher resolution diagonally, up to 1.4x the Nyquist frequency (or a smaller diameter of the blur, producing somewhat of a smooth diamond instead of a circle).

How is that particular image a proof that diffraction is linked to pixel pitch and not to a value slightly bigger than pixel pitch due to the effects of the Bayer array? Please consider that an accepted value for the loss of resolution for the Bayer array on real life subjects (as opposed to black and white test pictures) is about 30% and not 0% as you are trying to show us here.

I've done an analysis of the demosaicing losses alone, and found a loss of 6.4% luminance resolution with that particular algorithm. That test was based on synthetic data alone, no optics involved. Note that the luminance resolution is also used to reconstruct color resolution in modern Raw converters. That's why the resolution of the R/G/B channels is virtually identical, as e.g. Imatest will show.

When one then does a series of aperture shots at the same focus setting (assuming a lens without focus drift as the aperture changes), it becomes pretty apparent at which aperture the deterioration from diffraction starts to exceed the resolution improvement from cutting off the edge rays. I noticed that it usually sets in when the diffraction pattern exceeds a diameter of 1.5x the sensel pitch, which in retrospect makes sense, because beyond 1.41x the sensel pitch not only the 4 horizontal and vertical neighbors of a sensel are affected by the main diffraction pattern, but the 4 diagonal neighbors are as well.

In this example for my EF 100mm Macro lens it can be shown that a measurable net deterioration already starts at f/5 with a sensel pitch of 6.4 micron (EOS-1Ds3) at a shooting distance of some 3 metres, but it won't visually affect the image until f/7.1:
BlurRadius_EF100mmF2_8LMacro.png
The blur radius mentioned here is expressed as a pixel radius to be corrected with Capture sharpening, as determined with my Slanted Edge tool.


When we calculate with green light (the main component of luminance) and 1.5x the sensel pitch in microns, we can calculate the aperture where diffraction starts to visually deteriorate the micro-detail contrast as: f-number = 1.108 x sensel pitch in microns. Adding 10 or 11 percent to the sensel pitch is easy enough to do in one's head, so it's a simple rule of thumb if we want to estimate the effect for other cameras. A more accurate estimate of the influence requires actual measurements or test shots.
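As a small sketch of where that 1.108 factor comes from, assuming ~555 nm green light and the common Airy-disk diameter approximation of 2.44 x wavelength x f-number (the second pitch value below is just a hypothetical example):

wavelength_um = 0.555   # green light, the main luminance component

def diffraction_onset_fnumber(sensel_pitch_um, factor=1.5):
    # f-number where the Airy disk diameter (2.44 * lambda * N) reaches
    # `factor` times the sensel pitch, i.e. where micro-detail contrast
    # starts to visibly suffer per the rule of thumb above.
    return factor * sensel_pitch_um / (2.44 * wavelength_um)

for pitch_um in (6.4, 4.3):   # 6.4 um EOS-1Ds3, plus a hypothetical 4.3 um sensor
    print(pitch_um, round(diffraction_onset_fnumber(pitch_um), 1))
# 6.4 um -> ~f/7.1, in line with the f/7.1 visual onset mentioned above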

Totally unrelated to the discussion: I wonder why your picture exhibits a round artefact at the center, way below the Nyquist limit.

The artifact is in the print itself. It is caused by the imperfect printing process, for which the target is also a revealing torture test. At the finest detail level, the ink will start to diffuse (lots of fine detail in neighboring print cells doesn't dry fast enough) when so much detail is printed. If the printer allows, one could reduce the amount of ink (which requires reprofiling for that medium). On the other hand, it is also a useful signal: when the diffusion is not approximately circular, it could indicate paper-feed issues.

When the target is shot at 25-50x the focal length of the lens to be tested, the artifact will usually be well within the circle at the Nyquist-frequency boundary, and have no influence on the optical blur.
 
Hi, Jerome,


Indeed. If we clear out CFA complications for the moment (in other words, imagine a monochrome sensor), the pixel pitch that determines the Nyquist frequency for detail variations along a diagonal axis is sqrt(2) times the "pixel pitch" we normally mention. (The greater the distance to the line, the greater is the Nyquist frequency it suggests, I believe.)

If we take that into account, it would seem that the distance to the limiting line should always be 46 times the actual pixel pitch, as applies to that direction. That would suggest that the limiting line should be a square.

But I don't seem to see results on Bart's example that seem to comport with that. So I may be barking up a wrong tree.


Hi Doug,

No, you would be correct if we were looking at a CGI version of the sinusoidal grating. However, the image I showed was low-pass prefiltered by the optical system before it was sampled. Therefore all spatial frequencies (also the diagonal ones) are band-limited to roughly the same radius. A circular blur can be expected with a good lens; an elliptical pattern would indicate lens decentering, uncorrected coma/astigmatism, or camera shake.

When we downsample the target we do often get 1.4x higher diagonal resolution, although the reconstruction filter has an effect on how it is really shaped. Regular Lanczos (sinc-windowed sinc) based resampling filters produce such a somewhat diamond-shaped blur, whereas the EWA resampling methods of ImageMagick produce more circular blur discs.

Here is a downsampled version of the Star pattern, first a more traditional Lanczos windowed version, and next an EWA (cylindrical filter) version, both without gamma linearization, and then the same with gamma linearization during the resampling:
Star_500px_25pct_TensorLanczos.png
Star_500px_25pct_EWALanczosRadius.png

Star_500px_25pct_G=1_EWALanczosRadius.png

These are two downsampling methods that produce very sharp results, which even produce some convincing-looking aliasing that could be mistaken for actual detail. By adding gamma linearization during the downsampling, one can additionally avoid color contamination.
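For those who want to experiment, here is a rough Python/Pillow sketch of gamma-linearized downsampling. It uses a plain Lanczos filter and a simple power-law gamma as an approximation of the sRGB curve, so it is not identical to the ImageMagick EWA workflow mentioned above:

import numpy as np
from PIL import Image

def downsample_linear_light(path, scale=0.25, gamma=2.2):
    # Downsample an sRGB image in (approximately) linear light. Resampling
    # gamma-encoded values directly darkens fine detail and causes color
    # contamination; converting to linear light first avoids most of that.
    img = np.asarray(Image.open(path).convert("RGB"), dtype=np.float64) / 255.0
    linear = img ** gamma                     # decode to (approximate) linear light
    h, w = linear.shape[:2]
    out_size = (max(1, round(w * scale)), max(1, round(h * scale)))
    channels = []
    for c in range(3):                        # resample each channel as 32-bit float
        chan = Image.fromarray(linear[..., c].astype(np.float32), mode="F")
        channels.append(np.asarray(chan.resize(out_size, Image.LANCZOS)))
    resized = np.stack(channels, axis=-1)
    encoded = np.clip(resized, 0.0, 1.0) ** (1.0 / gamma)   # re-encode gamma
    return Image.fromarray((encoded * 255.0 + 0.5).astype(np.uint8))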

Cheers,
Bart
 
Well, in fact, upon reflection, I realize that the Nyquist frequency along a 45° diagonal is typically sqrt(2) times the Nyquist frequency along the horizontal and vertical axes (not a lower value, as I had said recently). (Fuji used a "diagonally-oriented" sensor in some cameras to exploit this.)

Hi Doug,

That's correct, and a clever trick if we realize that (due to gravity) there are more horizontal/vertical structures than diagonal ones. So assigning the higher resolution to those edges makes sense, although it requires some additional pixels to store that resolution in a hor/ver grid again after Raw conversion.

And I'm not sure just how a graphical presentation of the Nyquist limit along different axes would properly reflect this. It might be a square in "diamond" orientation - I have to look at the math.

Correct, a somewhat rounded diamond shape is what we see in practice, unless one uses an Elliptically Weighted Averaging (EWA) reconstruction filter (see samples in previous post).

Ah, there was some discussion of this matter in Bart's original presentation of his test target (2011.01.15), in a comment to moi:

My test target will unambiguously show any 45° resolution advantage of images in a regularly sampled grid pattern, be it in print or in capture of the print. In fact, I already prepared the target to have a resolution of Sqrt(2) larger resolution in the diagonal dimension at its center (at the expense of some aliasing in the horizontal/vertical dimension). I applied some super-sampling in the calculation of the 144 cycle pattern for the finite resolution PPI grid.​

I'm not sure exactly what that means. Bart, could you elucidate for us?

Sure. I calculated the original version of the target pattern's pixel values to have a very high amplitude, but I used a square sensel aperture to average the individual pixel values (over x-0.5 to x+0.5 and y-0.5 to y+0.5 pixels). That maintained the maximum detail level that can be represented in a square grid, including the higher diagonal resolution, but also gave a better-behaved Nyquist cut-off than point sampling would. I wanted the target to have the maximum detail level possible, yet few aliasing artifacts in the pattern itself, to challenge the optical reproduction as much as possible.
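As a very rough sketch of that kind of construction (the real target generation is Bart's own and certainly differs in detail; this just illustrates a 144-cycle sinusoidal star with box-filter super-sampling over each pixel's area):

import numpy as np

def star_target(size=500, cycles=144, oversample=4):
    # Sinusoidal "Siemens star"-like pattern with `cycles` full cycles around
    # the circumference at any radius. Each output pixel averages an
    # oversample x oversample grid of samples, i.e. a square "sensel aperture"
    # covering x-0.5..x+0.5 and y-0.5..y+0.5 pixels.
    n = size * oversample
    coords = (np.arange(n) + 0.5) / oversample - size / 2.0   # sub-pixel sample centres
    x, y = np.meshgrid(coords, coords)
    theta = np.arctan2(y, x)
    pattern = 0.5 + 0.5 * np.sin(cycles * theta)              # values in 0..1
    # box-filter average down to the final pixel grid
    pattern = pattern.reshape(size, oversample, size, oversample).mean(axis=(1, 3))
    return (pattern * 255.0 + 0.5).astype(np.uint8)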

The current version of the target may be slightly more balanced, to reduce aliasing artifacts even further and to prevent the printing process from adding too much artifacting, which the camera would also reproduce.

Cheers,
Bart
 

Doug Kerr

Well-known member
Hi, Bart,

Thank you for all that elucidation.

Your grasp of these matters is a great resource for us. You are a great resource for us.

Best regards,

Doug
 

Asher Kelman

OPF Owner/Editor-in-Chief
Bart,

Thanks for the erudite explanations of your target and how it works so well.

It's great to be able to restore at least some of the quality lost when going past the Nyquist limits through the use of progressively smaller apertures. However, some clarification might be in order in light of ever-improving RAW converters:

1. Do current versions of Capture One, Adobe RAW and the like deliver the required Capture Sharpening with some deconvolution?

2. Or does one still need that step to regain the lost crispness from diffraction and demosaicing?

3. Do specific "post-conversion" programs deduce the aperture of the lens and/or other specific shot details? (Topaz Detail, etc)

Asher
 

Jerome Marot

Well-known member
When one then does a series of aperture shots at the same focus setting (assuming a lens without focus drift as the aperture changes), it becomes pretty apparent at which aperture the deterioration from diffraction starts to exceed the resolution improvement from cutting off the edge rays.

Indeed and this is what I noticed when I tested my lenses. As you may remember and can see on my flickr page, I routinely test my lenses on a far away subject at various apertures (e.g. 2.8, 4, 5.6, 8 and 11). I did notice a small degradation of contrast at small apertures at the time.


In this example for my EF 100mm Macro lens it can be shown that a measurable net deterioration already starts at f/5 with a sensel pitch of 6.4 micron (EOS-1Ds3) at a shooting distance of some 3 metres, but it won't visually affect the image until f/7.1

This is in agreement with my observations. On the Sony A900, which has a 5.9µm pixel size, and excellent lenses, I see that the contrast at f/8 is a very tiny little bit lower on the finest details than at f/5.6. It is clearly a bit lower at f/11. The D800 is not markedly different (even if the degradation should occur a bit earlier, I would need to test more aperture values to find out). On the Nex-7, with a pixel pitch of 3.9 µm, and with a very good lens, I see that f/5.6 is a very tiny bit less contrasty, f/8 noticeably so and f/11 obviously degraded when examined at the pixel level.

It is always nice when the theory is somewhat in agreement with the observations.

Still: if I needed the depth of field, I would not hesitate to use f/16 on the A900 or D800 and f/11 on the Nex-7. The lowering of contrast at these apertures is inconsequential on prints (I can sharpen and manipulate the contrast when printing) and other defects are usually much more limiting:
-focus errors
-camera shake
-lens aberrations
-sensor noise and noise reduction (even at relatively low iso, e.g. 400 or 200).
And of course the main limitation is to find something actually worth taking a photograph of.

So, while the discussion is interesting on a theoretical level, my attitude in the field is simply to try to use the aperture I may need (which is rarely as small as f/11 and never beyond f/16). I consider diffraction not to be a problem worth considering with 24x36 or even APS-C sized sensors.
 

Asher Kelman

OPF Owner/Editor-in-Chief
……………………………..

So, while the discussion is interesting on a theoretical level, my attitude in the field is simply to try to use the aperture I may need (which is rarely as small as f/11 and never beyond f/16). I consider diffraction not to be a problem worth considering with 24x36 or even APS-C sized sensors.


Jerome,

Do you say this because, at normal print sizes, say up to 16x24, you can in practice generally make up in post-processing for any apparent loss of contrast and apparent lack of micro-detail?

From this source, small sensors, especially 4/3 sensors at the same sensel pitch, should be more sensitive to quality loss than MF sensors. Would you agree with that? OTOH, micro 4/3 sensors, the OM-D E-M1 and the new Panasonic GH4 with 3.7 micron sensels, should be more immune to such degradation because of the tiny sensel pitch.

Asher
 
Bart,

Thanks for the erudite explanations of your target and how it works so well.

It's great to be able to restore at least some of the quality lost when going past the Nyquist limits through the use of progressively smaller apertures. However, some clarification might be in order in light of ever-improving RAW converters:

Hi Asher,

Just to make sure: beyond Nyquist only aliasing is possible, no real detail exists except by accident, although detail close to but beyond Nyquist may be rendered looking a bit like real detail. Attempting to reduce that aliasing even more will also reduce the contrast of micro-detail that is close to Nyquist but not yet aliased. As Doug pointed out, the Kell factor points to a potential risk for detail that is, or is not, almost exactly aligned with the sensel grid: such detail may not be reliably resolved everywhere.

1. Do current versions of Capture One, Adobe RAW and the like deliver the required Capture Sharpening with some deconvolution?

Of these, only Adobe Raw is known to use a sort of deconvolution when the Detail slider is at 100% (it gradually mixes in more USM as the slider is reduced to 0), but it will produce ugly mottled results very quickly when used at 100. So one usually chooses a mix (e.g. Detail=50), with the appropriate Radius for the aperture/lens used, and an amount that is not too high (e.g. 35 or 40) at this stage of Capture sharpening. A mask is also used to avoid sharpening smooth areas, such as sky. A complication is that noise reduction is part of how the sharpening functions, so things get pretty complex, pretty fast.

The sharpening method used by Capture One is not known, but it can produce crisp results with only modest halo, so it may be edge-aware / adaptive to a degree.

In practice I do not use any sharpening in these Raw converters at all (it is switched off for the actual Raw conversion), other than for previews to see if a file is sharp enough when I have multiple shots to choose from. I use vastly superior Capture sharpening methods, such as FocusMagic or Topaz InFocus, usually non-destructively on a Photoshop Blend-If layer for added control and masking capability (if needed). This also allows one to first address noise before sharpening it beyond repair.

The Capture sharpening stage should only remove some hardware (lens/aperture/sensor) related blur, to restore the incoming signal, nothing more, no halos, low noise.

2. Or does one still need that step to regain the lost crispness from diffraction and demosaicing?

Absolutely necessary, since all capture hardware in the image forming chain is inherently going to add blur. It is in principle measurable how much blur there is, so restoring much of the original signal sharpness is usually possible with deconvolution.

3. Do specific "post-conversion" programs deduce the aperture of the lens and/or other specific shot details? (Topaz Detail, etc)

Post-conversion and post Capture sharpening, there is another processing step usually called Creative 'sharpening' although it mostly changes contrast at various levels, rather than sharpness. After general tonality adjustments with e.g. Topaz Labs Clarity, which really brings back the illusion of natural light, a plugin like Topaz Labs Detail is amazingly well suited for intricate adjustment of surface structure and of the contrast of larger features. This is essential to cure some of the Diffraction blur losses our image may suffer from.

TL Detail is also very useful for Output sharpening, after resizing to the image size that the printer needs. It requires mostly Deconvolution deblur and Small detail adjustments with Detail to (pre)compensate for resampling blur and media losses (due to ink diffusion and structure).

Cheers,
Bart
 

Jerome Marot

Well-known member
Do you say this because, at normal print sizes, say up to 16x24, you can in practice generally make up in post-processing for any apparent loss of contrast and apparent lack of micro-detail?

Yes. In practice, I have printed images from the Sony A900 at 16"x24" (A2, about 40x60cm) and do not think that the minute degradation at f/11 could be noticed (it is only barely noticeable under the most stringent pixel peeping conditions). I have not made direct print comparisons, however.


From this source, small sensors, especially 4/3 sensors at the same sensel pitch, should be more sensitive to quality loss than MF sensors. Would you agree with that?

The smaller the sensels, the earlier diffraction will be a problem. It is a known problem for video cameras, for example, and the reason why they usually have a built-in ND filter or set of ND filters.
 


Asher Kelman

OPF Owner/Editor-in-Chief
Hi Asher,

Just to make sure: beyond Nyquist only aliasing is possible, no real detail exists except by accident, although detail close to but beyond Nyquist may be rendered looking a bit like real detail. Attempting to reduce that aliasing even more will also reduce the contrast of micro-detail that is close to Nyquist but not yet aliased. As Doug pointed out, the Kell factor points to a potential risk for detail that is, or is not, almost exactly aligned with the sensel grid: such detail may not be reliably resolved everywhere.



Of these, only Adobe Raw is known to use a sort of deconvolution when the Detail slider is at 100% (it gradually mixes in more USM as the slider is reduced to 0), but it will produce ugly mottled results very quickly when used at 100. So one usually chooses a mix (e.g. Detail=50), with the appropriate Radius for the aperture/lens used, and an amount that is not too high (e.g. 35 or 40) at this stage of Capture sharpening. A mask is also used to avoid sharpening smooth areas, such as sky. A complication is that noise reduction is part of how the sharpening functions, so things get pretty complex, pretty fast.

The sharpening method used by Capture One is not known, but it can produce crisp results with only modest halo, so it may be edge-aware / adaptive to a degree.

In practice I do not use any sharpening in these Raw converters at all (it is switched off for the actual Raw conversion), other than for previews to see if a file is sharp enough when I have multiple shots to choose from. I use vastly superior Capture sharpening methods, such as FocusMagic or Topaz InFocus, usually non-destructively on a Photoshop Blend-If layer for added control and masking capability (if needed). This also allows one to first address noise before sharpening it beyond repair.

The Capture sharpening stage should only remove some hardware (lens/aperture/sensor) related blur, to restore the incoming signal, nothing more, no halos, low noise.



Absolutely necessary, since all capture hardware in the image forming chain is inherently going to add blur. It is in principle measurable how much blur there is, so restoring much of the original signal sharpness is usually possible with deconvolution.



Post-conversion and post Capture sharpening, there is another processing step usually called Creative 'sharpening' although it mostly changes contrast at various levels, rather than sharpness. After general tonality adjustments with e.g. Topaz Labs Clarity, which really brings back the illusion of natural light, a plugin like Topaz Labs Detail is amazingly well suited for intricate adjustment of surface structure and of the contrast of larger features. This is essential to cure some of the Diffraction blur losses our image may suffer from.

TL Detail is also very useful for Output sharpening, after resizing to the image size that the printer needs. It requires mostly Deconvolution deblur and Small detail adjustments with Detail to (pre)compensate for resampling blur and media losses (due to ink diffusion and structure).

Cheers,
Bart

Bart,

Thanks so much for helping us:

1. predict quality of our planned images
2. measure where our production line degrades that
3. guide our available remedies.

Your look at recovering lost quality and Jerome's comments are most encouraging. :)

Two more questions:

Can we customize based on test shots? What use might there be in including your test target as a reference shot for resolution in each shoot (as we do a grey card), and then employing the actual measured performance in the adjustments we make to the pictures shot under those conditions, including any glass or gel filters or extenders we might use?

You happened to mention the use of "Blend If" blending of the sharpened layer: I also sharpen in a separate layer, but mask out the 60% edges and 100% of uniform areas like the sky. But to use "Blend If" for a more refined return of "contrast/detail", what's your approach? How does one choose the appropriate setting, and what's the rationale for that? Beyond excluding sharpening of deep shadows, or say just mid-tones, are there any other tricks available with "Blend If" that help us selectively refine what gets sharpened? It would be nice if there were a color picker to be able to sharpen, say, a shirt or a dress if that was desired. But I guess one can do that in Capture One with local adjustments, or perhaps with a U Point tool in Nik.

Asher
 
Two more questions:

Can we customize based on test shots? What use might there be in including your test target as a reference shot for resolution in each shoot (as we do a grey card), and then employing the actual measured performance in the adjustments we make to the pictures shot under those conditions, including any glass or gel filters or extenders we might use?

Hi Asher,

It's not necessary to keep shooting test shots for the purpose of sharpening alone, unless one just uses the chart as a focus reference. The shift in paradigm for Capture sharpening is that the required Capture sharpening depends only on the lens + aperture used, assuming use on a single camera model. Once we have determined the required radius settings, they will not change, unless we drop the lens. We can just reuse the required settings based on a prior test run, and they may even turn out to be similar between different lenses on the same camera model. After all, aperture is a dominant factor in the resulting blur, as I showed in an earlier post about my 100mm Macro lens. My experience with very good lenses on multiple camera platforms is that a Gaussian blur radius of approx. 0.7 pixels is as good as it can possibly get with a normal-contrast Raw conversion of a well-focused image. The old view was that Capture sharpening depends on subject detail, but it doesn't; aperture is more important. Capture sharpening is hardware related, not subject related.
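As an illustration only (this is not FocusMagic's or Topaz's method, and it treats the ~0.7 px figure above as the sigma of an assumed Gaussian PSF), a minimal Richardson-Lucy capture-sharpening sketch with scikit-image might look like this; the sigma and iteration count would of course have to be tuned per lens and aperture:

import numpy as np
from skimage import img_as_float, io
from skimage.restoration import richardson_lucy

def gaussian_psf(sigma=0.7, size=9):
    # Normalized 2-D Gaussian point-spread function.
    ax = np.arange(size) - (size - 1) / 2.0
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return psf / psf.sum()

def capture_sharpen(path, sigma=0.7, iterations=20):
    # Rough capture-sharpening sketch: deconvolve each channel with the
    # assumed hardware blur (a Gaussian with the given sigma, in pixels).
    img = img_as_float(io.imread(path))
    psf = gaussian_psf(sigma)
    if img.ndim == 2:
        return richardson_lucy(img, psf, iterations)
    channels = [richardson_lucy(img[..., c], psf, iterations)
                for c in range(img.shape[-1])]
    return np.clip(np.stack(channels, axis=-1), 0.0, 1.0)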

When we use FocusMagic as a Capture sharpener we cannot dial in fractional accuracy (although it does follow the same principles, a radius of 1 or 2 for well-focused images), so we may need to judge by eye, but then FocusMagic can also attempt to repair serious defocus and other one-of-a-kind faults. Topaz InFocus has an 'Estimate' mode in which it automatically determines the required PSF from actual image content.

It is important to use the correct settings for Capture sharpening, because it is the only moment to really restore sharpness from blur. Using the wrong deconvolution setting will leave quality on the table, and trying to do it after all sorts of tonal adjustments have been applied will make it more likely to (partially) fail.

You happened to mention the use of "Blend If" blending of the sharpened layer: I also sharpen in a separate layer, but mask out the 60% edges and 100% of uniform areas like the sky. But to use "Blend If" for a more refined return of "contrast/detail", what's your approach? How does one choose the appropriate setting, and what's the rationale for that?

My rationale for using the following initial Blend-If setting on a duplicate layer is that when an edge is already sharp (much more rapid transition between two brightness levels in the sharpened layer compared to the unsharpened layer) I want to reduce the amount of sharpening to avoid overshooting the edge change into clipping. It also has the effect of reducing sharpened noise in smooth areas, so there is less need for masking out those areas. The default settings are useful starting points, but one can always tweak the influence on shadows or highlights visually (and non-destructively because it's in a layer and the transitions can be changed again anytime).

Non-clipped-sharpening.png

It's simple to put this into an action that duplicates a layer, renames it, sets the blending mode to Luminosity and the Blend-if settings to the default, and applies sharpening. All repeated by two mouse clicks, selecting the action and playing it.

Beyond excluding sharpening of deep shadows, or say just mid-tones, are there any other tricks available with "Blend If" that help us selectively refine what gets sharpened? It would be nice if there were a color picker to be able to sharpen, say, a shirt or a dress if that was desired. But I guess one can do that in Capture One with local adjustments, or perhaps with a U Point tool in Nik.

Since we apply the Blending on a duplicate layer, in Luminosity blending mode to avoid color shifts, we can also apply a mask to exclude/include the sharpening from selected regions. The mask selections can be simply brushed in/out free-hand on the mask, or after a (color or region) selection was made first. If we want to exaggerate the effect we just duplicate the layer, or make one with different settings (e.g. addressing the larger blur towards image corners and gradually blending it in with a radial mask).

Cheers,
Bart
 

Asher Kelman

OPF Owner/Editor-in-Chief
Bart,

By Gaussian Blur and 400%, I presume you mean setting Unsharp Mask sharpening in CC to 400% and then the pixel width to 0.7.

Also, I presume that the sharpened layer is the "This Layer" one. But then I'd have expected your alterations to have been on that layer, not the lower original layer. You have not, in this pattern, limited sharpening in the shadows or highlights. It seems that your specification here gradually introduces sharpening from the middle both ways towards the highlights and the shadows, but I don't have a feeling for why that would be the way to do it.

Asher
 
Bart,

By Gaussian Blur and 400%, I presume you mean setting Unsharp Mask sharpening in CC to 400% and then the pixel width to 0.7.

Hi Asher,

No, those are just 400% zoomed-in crops of an edge sharpened with small-radius USM (which will create halos at any radius > 0.3), at all levels of pixel-neighbor contrast. As those zoomed-in crops show, the risk of clipping tapers off as the brightness gets close to the minimum/maximum (0/255). There is no need to sharpen an edge that is already sharp.

Also, I presume that the sharpened layer is the "This Layer" one. But then I'd have expected your alterations to have been on that layer, not the lower original layer. You have not, in this pattern, limited sharpening in the shadows or highlights. It seems that your specification here gradually introduces sharpening from the middle both ways towards the highlights and the shadows, but I don't have a feeling for why that would be the way to do it.

"This Layer" is the sharpened copy on top of the original unsharpened layer.
Have to run, I've got an appointment.

Cheers,
Bart
 

nicolas claris

OPF Co-founder/Administrator
Asher
I'm on location shooting some interiors of a large catamaran, waiting for the rain to stop!
When I'm back I'll post a portion of an enlargement.
For now, please remember that what you see above are crops at 100%, the file being about 45x62 cm @ 300 dpi (camera output)...

That is 150 x 112 cm, or 59 x 44 in, @ 200 dpi (more than enough for a top-class inkjet printer; I would send it to print larger at 150 dpi without any fear)

100% crops

_NCL6099-2-crop1.jpg


_NCL6099-2-crop2.jpg


Does it end up just doing "good enough"?
 
Does it end up just doing "good enough"?

Hi Nicolas,

Surely good enough. The question, however, is whether you could have gotten even better results (and perhaps faster) by using some different techniques or processing.

While they are invisible at smaller sizes, I do notice some sharpening halos that probably could have been avoided, while getting similar or better results. It's hard to say without looking at the unprocessed images, but perhaps you could post a small crop of the following image area, with only Raw conversion, to see if there is some detail that is just waiting to be revealed?

_NCL6099-2-crop1a.jpg


_NCL6099-2-crop2a.jpg

If you want to do this experiment by PM, that's understandable and fine with me as well. No rush, take your time; I know it's limited.

Cheers,
Bart
 

Doug Kerr

Well-known member
Hi, Jerome,

Indeed and this is what I noticed when I tested my lenses. As you may remember and can see on my flickr page, I routinely test my lenses on a far away subject at various apertures (e.g. 2.8, 4, 5.6, 8 and 11). I did notice a small degradation of contrast at small apertures at the time.

This is in agreement with my observations. On the Sony A900, which has a 5.9µm pixel size, and excellent lenses, I see that the contrast at f/8 is a very tiny little bit lower on the finest details than at f/5.6. It is clearly a bit lower at f/11. The D800 is not markedly different (even if the degradation should occur a bit earlier, I would need to test more aperture values to find out). On the Nex-7, with a pixel pitch of 3.9 µm, and with a very good lens, I see that f/5.6 is a very tiny bit less contrasty, f/8 noticeably so and f/11 obviously degraded when examined at the pixel level..
Thanks for this report.

We of course cannot expect a "tight cluster" of the observations in this matter because (among other things):

• Our criteria are imprecise

• The visual impact of a certain phenomenon varies dramatically with the nature of the image material

• We have only an "anecdotal" description of the MTF of the rest of the system ("excellent/very good lens").

This is not at all to criticize the report, just to recognize how we are working here.

That having been said, it is interesting that for both of the examples you gave, for your first criterion ("a tiny bit less contrasty"), at that f-number the diameter of the Airy disk would be a bit less than twice the sensel pitch (actually 1.83X and 1.9X for the two examples).

For your second criterion ("clearly/noticeably degraded"), the diameter of the Airy disk would be in the area of 2.5 times the sensel pitch (2.52X and 2.78X for the two examples).

This seems to give us a useful, if hardly precise, rule of thumb: the intrusion from diffraction is probably "noticeable" when the diameter of the Airy disk is on the order of 2 times the sensel pitch and "significant" when the diameter of the Airy disk is on the order of 2.5 times the sensel pitch.
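The arithmetic behind those ratios is easy to reproduce; a small sketch, assuming a mid-visible wavelength of about 550 nm and the usual Airy-disk diameter of 2.44 x wavelength x f-number (the pitches are the A900 and NEX-7 values Jerome gave):

wavelength_um = 0.55   # assumed green/mid-visible wavelength

def airy_to_pitch_ratio(f_number, pitch_um):
    # Ratio of the Airy disk diameter (2.44 * lambda * N) to the sensel pitch.
    return 2.44 * wavelength_um * f_number / pitch_um

# Sony A900 (5.9 um) and NEX-7 (3.9 um) at the apertures discussed above
for f_number, pitch_um in [(8, 5.9), (5.6, 3.9), (11, 5.9), (8, 3.9)]:
    print(f"f/{f_number}, {pitch_um} um -> {airy_to_pitch_ratio(f_number, pitch_um):.2f}x")
# ~1.8x and ~1.9x for "a tiny bit less contrasty",
# ~2.5x and ~2.8x for "clearly/noticeably degraded"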

I know that these very rules of thumb have been widely mentioned, but I just wanted to see how they comported with your report.

Thanks again for the input.

Best regards,

Doug
 

Doug Kerr

Well-known member
I noted just above:

This seems to give us a useful, if hardly precise, rule of thumb: the intrusion from diffraction is probably "noticeable" when the diameter of the Airy disk is on the order of 2 times the sensel pitch and "significant" when the diameter of the Airy disk is on the order of 2.5 times the sensel pitch.

Actually, for the two examples cited by Jerome, that ratio was in the range 2.5-2.9. So for this exercise, I will use 2.7. (I'm at the blackboard, so I get to do that.)

It is interesting to consider the MTF that represents the impact of diffraction. We often estimate the "resolution" of a system as the spatial frequency at which the MTF is 0.5. This is very nearly 1/Da, where Da is the diameter of the Airy disk.

We also often find that the resolution of a digital camera system (exclusive of any impact of diffraction), with a "good" lens, is on the order of 0.75 times the sensor Nyquist frequency (which in turn is 1 over twice the sensel pitch).
Note that for a CFA sensor, the luminance resolution, which is what we are most sensitive to, is related to the G layer resolution Nyquist frequency, which is also 1 over twice the sensel pitch.​

If we do all the needed little algebra, we find that:

• The spatial frequency at which the MTF representing the impact of diffraction reaches 0.5 is on the order of 1/2.7p, where p is the sensel pitch. We can think of this as equivalent to a resolution of 1/2.7p.

• The expected resolution of the system itself is 1/2.7p.

Fancy that.
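Spelled out, with p the sensel pitch and taking the Airy-disk-to-pitch ratio above as 2.7 (so Da = 2.7p), that little algebra is just:

resolution implied by diffraction ≈ 1/Da = 1/(2.7p)
resolution of the system itself ≈ 0.75 × 1/(2p) = 0.375/p ≈ 1/(2.7p)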

Thus, we note that when the "resolution" implied by the effect of diffraction is of the same magnitude as the resolution of the system without diffraction, we "significantly notice" the degradation due to the diffraction.

This is very similar to what we observe in other cases, such as degradation of resolution due to misfocus: when we "pile on" two MTF's, the overall "resolution" is "noticeably degraded" from that with only the first MTF when the "resolution" of the second MTF is comparable to that of the first MTF.

Doug
 

fahim mohammed

Well-known member
To answer the question in the subject... I aim to take the best picture I can, for whatever medium that might be.

I abhor letting theoretical/mathematical calculations cloud my vision of what I want my image to mean. I might not make good images, but neither am I interested in the mathematics of photography.

The content of the image is paramount. Technical perfection is secondary.

To have both is the goal. But a technically perfect image of a foggy idea...well he was right.

Gear, calculations, theories....not for me.

Obviously others here put emphasis on it. Good for them.

The 'Kiss by the Town Hall' (whatever the controversy surrounding it) would surely fail by the standards and numerical values of the USM settings discussed here.

YMMV.
 