Doug Kerr
Well-known member
Asher has recently posed the question as to whether the degradation to an image resulting from diffraction effects when smaller apertures are used can be "nullified".
Diffraction works its ill by replacing the point at the focal plane that should result from a point on the subject with a blur figure, in particular an Airy disk. This is conceptually similar to the blurring caused by imperfect focus when a circle of confusion is generated at the focal plane from a point on the object.
Both of these phenomena can be described as the "convolution of the ideal image component (a point) by a point spread function" for each point on the object.
Conceptually, in either case, we can recover the "perfect" image of the object by deconvolving the recorded (blurred) image by the known (or presumed) point spread function of the troublesome phenomenon.
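For those who like to see it in symbols, the idea can be sketched in a few lines of Python. Here a small Gaussian kernel stands in for the Airy disk, and the deconvolution is done by straight division in the frequency domain; the variable names are mine, not from any particular package:

```python
import numpy as np

n = 64
image = np.zeros(n)
image[n // 2] = 1.0                      # a single point on the "object"

x = np.arange(n) - n // 2
psf = np.exp(-x**2 / (2 * 1.5**2))       # stand-in point spread function
psf /= psf.sum()

# Convolution in the spatial domain is multiplication in the frequency domain.
H = np.fft.fft(np.fft.ifftshift(psf))    # transfer function of the blur
blurred = np.real(np.fft.ifft(np.fft.fft(image) * H))

# Noise-free deconvolution: divide the spectrum by the transfer function.
recovered = np.real(np.fft.ifft(np.fft.fft(blurred) / H))

print(np.allclose(recovered, image))     # → True
```

With no noise at all, the point comes back essentially perfectly; the interesting trouble begins, as discussed below, when the recorded image is noisy.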
In general, actually doing this successfully depends on such matters as how much noise there is in the recorded image. Why does that matter?
Here is an analogy. We have a person's voice picked up by a microphone at an event and transmitted to a radio broadcast studio over a channel with limited bandwidth (a substantial "rolloff" in frequency response at modest audio frequencies). The voice as broadcast sounds dull and "tubby".
We tend to think of this effect of the channel in the "frequency domain" (the frequency response of the channel), but it can equally be looked at in the "time domain" by contemplating the "impulse response" of the channel.
By that we mean that if we were to feed into the channel a single "spike" of voltage with essentially zero width, what would come out would be a broad pulse, whose shape is determined by the frequency (and phase) response of the channel.
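We can see this in a toy model. Assuming a simple one-pole low-pass filter as our band-limited channel (my choice of example, not anything canonical), feeding in a spike gives back a broad, decaying pulse:

```python
import numpy as np

n = 50
spike = np.zeros(n)
spike[0] = 1.0                 # a "spike" of essentially zero width

alpha = 0.2                    # smaller alpha means narrower bandwidth
out = np.zeros(n)
prev = 0.0
for t in range(n):
    # one-pole low-pass: output relaxes toward the input
    prev = prev + alpha * (spike[t] - prev)
    out[t] = prev

# The spike emerges as a broad decaying pulse: 0.2, 0.16, 0.128, ...
# That output is the channel's impulse response.
```

The narrower the channel's bandwidth (the smaller alpha here), the broader that output pulse, which is just the time-domain face of the same limitation.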
If we compare this to our image situation, that test "spike" is exactly equivalent to a point in the object, and the impulse response of the channel is exactly equivalent to the spread function of some blurring phenomenon (perhaps diffraction).
Now, we can in theory negate this degradation of the signal by applying an equalizer, a filter whose frequency response is the inverse of the frequency response of the channel.
But if there is much noise in the signal received over the channel in the upper portion of the range of frequencies of interest, the "high gain" of the equalizer in that region (where the signal was most attenuated by the channel response) will lead to a lot of noise in the "recovered" signal. Thus, we are limited in the degree to which the impairment in speech quality caused by the frequency response of the channel can be mitigated by equalization without incurring another debilitating impairment.
So it is with the mitigation of blurring in photographic imaging by deconvolution.
I do not know to what extent using this approach to mitigate the effects of diffraction in digital photography has been studied or even implemented in image processing software. (Perhaps there are packages that do it.) I will continue to cruise about the battle zone with my eye open for such.
Best regards,
Doug