Doug Kerr
The original concept of resolution of an optical system
The concept of the resolution of an optical system goes back to astronomical telescopes (a special kind of camera). There, in an important part of their use, the objects are of infinitesimal size: points of light. An important property of a telescope is its ability to let the observer recognize, as separate, two point objects at a very small angular spacing. This is said to be the ability to "resolve" those objects.
The resolution of a general-purpose camera
Although this notion can be applied to a general-purpose photographic system, it does not fit our most common interest. There, we are typically concerned with the ability of the system to portray in the image fine detail in the scene. But just what that means is not a simple matter. And it is not straightforward to craft a numerical metric which, in one number, will express how well the system does that for us.
In any case, by (inexact) parallel to the astronomical telescope property, we speak quantitatively of this ability as the resolution of our photographic system (or of the images it delivers).
One approach that has been (and still is) widely used is to imagine a test scene comprising alternate light and dark bands (same width for light and dark), and reduce the spacing of the pattern until, in the image, one can no longer see a pattern of light and dark, but only a uniform gray. That spacing of the bands is then said to be the resolution of the entire photographic system.
There are several conventions used to do this. One of the least ambiguous is to consider the joint width of a dark band and the adjacent light band to be a "line pair". If we are working with, for example, a camera negative, we can then state the resolution in line pairs per millimeter. To recognize that the image on the negative may be reproduced for the viewer as a print of varying size, we can state the resolution in line pairs per picture height.
Note that in terms of the spatial frequencies contained in a scene pattern, for a pattern of f line pairs per picture height, the fundamental frequency is f cycles per picture height.
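As a small illustration of the two ways of stating the figure, here is a minimal Python sketch; the 24 mm frame height is just an assumed, full-frame-like value, not a figure from the discussion above:

```python
# Minimal sketch: converting a resolution figure from line pairs per
# millimeter (on the negative/sensor) to line pairs per picture height.
# The 24 mm frame height is an assumed value for illustration.

def lp_per_mm_to_lp_per_ph(lp_per_mm, picture_height_mm):
    # One line pair per mm, over a frame picture_height_mm tall,
    # amounts to picture_height_mm line pairs per picture height.
    return lp_per_mm * picture_height_mm

print(lp_per_mm_to_lp_per_ph(50, 24))  # 50 lp/mm on a 24 mm frame -> 1200 lp/PH
```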
Note that there is some uncertainty as to the exact spacing at which the pattern can "no longer be seen".
The modulation transfer function (MTF)
The actual behavior of the photographic system that leads to a certain resolution (as defined above) being observed is the spatial frequency response of the system, which we speak of as its modulation transfer function (MTF). If we have before us a plot of the MTF, can we determine from that, in some "scientific" way, what resolution (as described above) will be observed? Not really.
However, in many cases of interest, the resolution (determined from observation of line patterns), stated in line pairs per picture height, will be about the same as the spatial frequency (in cycles per picture height) at which the MTF drops to 10% of its value for low spatial frequencies.
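For the curious, here is a minimal Python sketch of how one might read that "10% frequency" off a sampled MTF curve by linear interpolation; the curve values below are invented purely for illustration, not measurements of any real system:

```python
import numpy as np

# Sketch: estimating the frequency at which the MTF falls to 10% of its
# low-frequency value, from a sampled MTF curve, by linear interpolation.
# The sample curve here is invented purely for illustration.

freqs = np.array([0, 200, 400, 600, 800, 1000, 1200])    # cycles per picture height
mtf   = np.array([1.00, 0.85, 0.60, 0.38, 0.22, 0.12, 0.06])

def mtf10_frequency(freqs, mtf):
    target = 0.10 * mtf[0]            # 10% of the low-frequency response
    below = np.where(mtf <= target)[0]
    if len(below) == 0:
        return None                   # the curve never reaches 10% in range
    i = below[0]
    # linear interpolation between the two bracketing samples
    f0, f1, m0, m1 = freqs[i - 1], freqs[i], mtf[i - 1], mtf[i]
    return f0 + (target - m0) * (f1 - f0) / (m1 - m0)

print(mtf10_frequency(freqs, mtf))  # ~1067 cycles/PH for this made-up curve
```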
The MTF of a digital photographic system is basically composed of these subordinate MTFs:
• The MTF of the lens proper (which reflects its aberrations)
• The MTF that represents the effects of diffraction
• The MTF of the antialiasing filter (if present)
• The MTF that represents the fact that our photodetectors do not pick up the illuminance of the optical image at a point but rather pick it up more-or-less averaged over a finite region (for example, the "intake port" of the photodetector microlens). This is spoken of as the "sampling aperture" MTF.
So, for example, the spatial frequency at which the system MTF drops to 10% of its value at low spatial frequencies might be where
• The MTF of the lens was 30%,
• The MTF from diffraction was 67%,
• The sampling aperture MTF was 50%, and
• There was no antialiasing filter.
This works out because the subordinate MTFs combine by multiplication at each spatial frequency, and 0.30 × 0.67 × 0.50 ≈ 0.10, as the little Python sketch below confirms.
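Here is that arithmetic as a minimal sketch; the three values are simply the ones assumed in the example above:

```python
# Sketch of the arithmetic above: the subordinate MTFs multiply at each
# spatial frequency, so the system MTF here is the product of the three.

lens_mtf     = 0.30   # MTF of the lens at this frequency
diffraction  = 0.67   # MTF from diffraction
aperture_mtf = 0.50   # sampling aperture MTF

print(f"{lens_mtf * diffraction * aperture_mtf:.3f}")  # 0.100 -- about 10%
```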
It is tempting to say, in a certain case, that "the resolution here is limited by the lens resolution". If in fact, for example, the lens MTF drops to 10% at a much lower spatial frequency than the other MTFs in the chain, that is a meaningful notion. But otherwise it is not that simple.
Or to say, "the resolution here is not limited by the lens resolution". If in fact, for example, the lens MTF drops to 10% at a much higher spatial frequency than the other MTFs, that is a meaningful notion. But otherwise it is not that simple.
The Nyquist limit
An upper bound on the possible resolution of an image in a digital system is its Nyquist frequency, which is the reciprocal of twice the pixel spacing. No digital image can convey a frequency component at or above the Nyquist frequency. Never. No way, no how.
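As a small worked example (the 4 µm pixel pitch and 24 mm sensor height below are assumed values for illustration):

```python
# Sketch: the Nyquist frequency of a sampled image is the reciprocal of
# twice the pixel spacing. Pixel pitch and sensor height are assumed values.

pixel_pitch_mm   = 0.004   # 4 µm pixels (assumed)
sensor_height_mm = 24.0    # full-frame height (assumed)

nyquist_cy_per_mm = 1.0 / (2.0 * pixel_pitch_mm)          # 125 cycles/mm
nyquist_cy_per_ph = nyquist_cy_per_mm * sensor_height_mm  # 3000 cycles/PH

print(nyquist_cy_per_mm, nyquist_cy_per_ph)
```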
If there are, in the optical image being "sampled" by the sensor, any frequency components at or above the Nyquist frequency, they will not be represented in the digital image. But that image will, as a result of the presence of those "out of range" frequencies in the image being sampled, contain spurious frequencies (below the Nyquist frequency) not present in the optical image. This is the phenomenon of aliasing.
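Here is a minimal Python demonstration of that folding, with arbitrary illustration frequencies: a component at 70 cycles, sampled at 100 samples per unit length (Nyquist = 50), yields exactly the same samples as a component at 30 cycles.

```python
import numpy as np

# Sketch: a sinusoid above the Nyquist frequency, once sampled, is
# indistinguishable from (it "aliases to") a sinusoid at fs - f, which
# lies below the Nyquist frequency. Frequencies are illustration values.

fs = 100.0          # sampling frequency (samples per unit length)
f  = 70.0           # input frequency, above Nyquist (fs/2 = 50)
n  = np.arange(32)  # sample indices

above   = np.cos(2 * np.pi * f * n / fs)
aliased = np.cos(2 * np.pi * (fs - f) * n / fs)  # 30 cycles: the spurious frequency

print(np.allclose(above, aliased))  # True: the two sets of samples are identical
```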
Of course, if the amplitude of these "out of range" components is small, then the spurious frequencies they morph into in the digital image will be small as well, and will not materially degrade the image.
In order to keep the amplitudes of any "out of range" components "low", we rely on the system MTF reaching a low value as we approach the Nyquist frequency. In many cases, the combination of lens MTF, diffraction MTF, and aperture MTF will bring this about. If not, we may need to add an antialiasing filter, the decline of whose MTF as frequency increases will complete the needed suppression of "out of range" frequencies.
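To make that concrete, here is a minimal sketch of the reasoning one might follow; every number in it is an assumed illustration value, not a measurement of any real system:

```python
# Sketch: deciding how much attenuation an antialiasing filter must
# contribute near the Nyquist frequency. All values are assumed.

lens_near_nyq        = 0.35  # lens MTF near Nyquist (assumed)
diffraction_near_nyq = 0.55  # diffraction MTF near Nyquist (assumed)
aperture_near_nyq    = 0.64  # sampling-aperture MTF near Nyquist (assumed)
target_residual      = 0.05  # acceptable system MTF near Nyquist (assumed)

without_filter = lens_near_nyq * diffraction_near_nyq * aperture_near_nyq
needed_filter_mtf = target_residual / without_filter

print(f"{without_filter:.3f} -> AA filter MTF needed: {needed_filter_mtf:.2f}")
# 0.123 -> AA filter MTF needed: 0.41
```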
Note that when we use a CFA sensor, the Nyquist frequency for the R and B aspects is half that of the final digital image. (For the G aspect, it is the same as that of the final digital image.) How can we then suppress "out of range" frequency components (to avert aliasing) without reducing the resolution to substantially less than what the Nyquist frequency of the image would allow? Well, we can't really.
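To put numbers on that (again with an assumed 4 µm pitch):

```python
# Sketch: in a Bayer CFA, the R and B photosites sit at twice the image
# pixel pitch, so their Nyquist frequency is half the image's. The denser
# G sites support the full image Nyquist frequency. Pitch is assumed.

pixel_pitch_mm = 0.004
image_nyquist_cy_per_mm = 1.0 / (2.0 * pixel_pitch_mm)        # 125 cycles/mm
rb_nyquist_cy_per_mm    = 1.0 / (2.0 * (2 * pixel_pitch_mm))  # 62.5 cycles/mm

print(image_nyquist_cy_per_mm, rb_nyquist_cy_per_mm)
```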
Time for breakfast.
Best regards,
Doug