
About (de)convolution - 3 parts

Doug Kerr

Well-known member
This is a rerun (hopefully refined) of a discussion I gave a while ago in response to a question.

**********
We hear of image-processing programs that use deconvolution to, in effect, "back out of" the image captured on the sensor the effects of such unwanted behavior of the optical system as defocus.

Deconvolution is, not surprisingly, the inverse of the mathematical operation of convolution. First, I'll explain convolution as it applies not to optical "signals" but rather to electrical ones.

The time and frequency domains

We can describe electrical "signals", and what happens to them, in either the time domain or the frequency domain, which are said to be duals of each other. We can get a simple glimpse of this by considering a situation in which a light flashes every 0.5 seconds. Its period is 0.5 seconds (a time domain description), and its frequency is 2.0 times per second (2.0 hertz) (a frequency domain description).

The time domain description of an electrical "signal" is its waveform, a plot of its instantaneous voltage vs. time.

The frequency domain description of the signal is its spectrum. One form of this is its power density spectrum (the "power spectrum"). This tells us how the power in the signal is distributed among components at different frequencies. It is plotted as a function of frequency.

Those familiar with the concept of density functions will recognize that my description is a little simplistic. I will often depart from rigor here in the interest of illuminating the concepts.​

The power density spectrum is accompanied, to give the full description of the signal, by a plot of phase vs. frequency. This tells us the relative time relationship of the individual frequency components. From here on, I will mostly ignore the phase aspects of things, not because they aren't important, but again to keep the basic concepts more concise.

The amplitude spectrum

When we work mathematically with the spectrum of a signal, it turns out that we must usually work with a plot whose ordinate is the square root of the power density spectrum curve, called the amplitude spectrum.

Transforming between domains

If we know the waveform (time domain description) of a signal, we can apply the mathematical operation called the Fourier transform to it, and the result will be the amplitude (and phase) spectra of the signal (the frequency domain description).

If we know the amplitude and phase spectra of a signal (its frequency domain description), we can apply the mathematical operation called the inverse Fourier transform to them and get the waveform (time domain description) of the signal.
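
For those who like to see this concretely, here is a minimal sketch of that round trip in Python (NumPy assumed; the 50 Hz sine and the sampling rate are arbitrary illustrative choices):

```python
import numpy as np

fs = 1000                          # sampling rate, samples per second
t = np.arange(0, 1, 1 / fs)        # one second of time axis
x = np.sin(2 * np.pi * 50 * t)     # waveform: a 50 Hz sine (time domain description)

X = np.fft.rfft(x)                 # complex spectrum (frequency domain description)
amplitude = np.abs(X)              # amplitude spectrum
phase = np.angle(X)                # phase spectrum
freqs = np.fft.rfftfreq(len(x), 1 / fs)

x_back = np.fft.irfft(X, n=len(x)) # inverse Fourier transform recovers the waveform
assert np.allclose(x, x_back)
```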

Filters

Anything the signal passes through that changes its characteristics can be considered to be a filter. We will limit ourselves to linear filters, which we can think of as filters whose relative effect on the signal does not change with the actual amplitude ("largeness") of the signal. If we are interested in what happens to the signal generated by a studio microphone as it passes through the mixing console, we can treat the console as a filter.

The nature of a filter

In the frequency domain

For our purposes here, a filter can be described by two functions of frequency (plots vs frequency):

• The amplitude frequency response. This tells us the ratio of the output of the filter (in amplitude - that is, voltage - terms) to the input, plotted vs. frequency (that is, it tells us what that ratio would be for a signal that contains only a single frequency).

• The phase frequency response. This tells us, for any frequency, how a signal component at that frequency will have its relative timing shifted by passing through the filter. Yes, we will ignore this from now on.

In the time domain

We can also completely describe the response of a filter by imagining that we introduce into it a "unit impulse", a single electrical pulse of zero length in time but containing, over its (zero-length) life, one unit of energy (obviously a mathematical fiction). We then note the waveform of the output signal (that is, a plot of its instantaneous voltage vs. time).

This is called the impulse response of the filter. If we know that, we know everything we need to predict what will happen to any signal passing through the filter.
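
A minimal sketch of that idea for a discrete filter (Python with NumPy and SciPy; the particular second-order low-pass filter is an arbitrary stand-in): we feed in a unit impulse and record what comes out.

```python
import numpy as np
from scipy.signal import butter, lfilter

b, a = butter(2, 0.2)              # an arbitrary second-order low-pass filter
impulse = np.zeros(64)
impulse[0] = 1.0                   # discrete stand-in for the "unit impulse"

h = lfilter(b, a, impulse)         # the filter's impulse response
# Knowing h, the response to any input is the convolution of that input with h.
```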

Predicting the output of the filter

Working in the frequency domain

We have a signal for which we know the amplitude spectrum. We pass it through a filter (that could be an amplifier which we treat as a filter for this purpose) whose amplitude response is known. What will the output signal be like?

We multiply the amplitude spectrum of the original signal by the amplitude response of the filter. By this we mean that at every frequency in the range of interest, we multiply the value of the amplitude spectrum by the value of the amplitude response. The result will be a value for every frequency in the range. The plot of that will be the amplitude spectrum of the output signal (its frequency domain description). (For a signal with a "continuous" spectrum, won't there be an infinite number of multiplications? Yes.)
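
Here is a minimal sketch of that multiplication in Python (NumPy assumed; the random input signal and the 8-tap moving-average "filter" are arbitrary illustrations):

```python
import numpy as np

x = np.random.randn(1024)                # some input signal (time domain)
h = np.ones(8) / 8                       # an 8-tap moving-average "filter"

n = len(x) + len(h) - 1                  # length that avoids circular wrap-around
X = np.fft.rfft(x, n)                    # spectrum of the input signal
H = np.fft.rfft(h, n)                    # frequency response of the filter
Y = X * H                                # point-by-point multiplication at every frequency

y = np.fft.irfft(Y, n)                   # back to the time domain: the output waveform
```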

[Continued]
 

Doug Kerr

Well-known member
About (de)convolution - part 2 of 3

[Part 2]

Working in the time domain

We have a signal for which we know the waveform (its time domain description). We pass it through a filter whose impulse response is known. What will the output signal be like?

We take the waveform of the original signal and convolve it with the impulse response of the filter (that is, we use the mathematical operation called convolution). What does that mean?

In effect, for every instant over the life of the signal, we take its instantaneous voltage and use it to scale a copy of the filter's impulse response, placed so that it starts at that instant. The result is a waveform. We add together all those waveforms (one for each instant of the life of the signal), and the result is the waveform of the output signal.

Note that, if the signal is "continuous" and the impulse response is "continuous", this operation has an infinite number of steps, each one of which involves an infinite number of multiplications. Oh, great.

Fortunately, in practice, we mostly use convolution when working with "discrete" signal and filter descriptions; that is, the signal waveform and the impulse response of the filter are described not by actual continuous curves but rather by sets of discrete values at some repetitive time interval (the sampling interval).

To summarize
As a practical matter, using the mathematical operation of convolution allows us to do, in the time domain, the same process that we would do in the frequency domain by multiplication of two curves.

To actually apply this, we may often need to transform the known characteristics of the input signal, or of the filter, or of the calculated description of the output signal, between time and frequency domain representations by applying the Fourier transform or its inverse.
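
A minimal sketch of that equivalence in Python (NumPy assumed; the discrete signal and the three-tap impulse response are arbitrary): convolving in the time domain and multiplying spectra in the frequency domain give the same output.

```python
import numpy as np

x = np.random.randn(256)            # discrete samples of the input waveform
h = np.array([0.25, 0.5, 0.25])     # discrete impulse response of the filter

y_time = np.convolve(x, h)          # convolution in the time domain

n = len(x) + len(h) - 1
y_freq = np.fft.irfft(np.fft.rfft(x, n) * np.fft.rfft(h, n), n)

assert np.allclose(y_time, y_freq)  # the two routes agree
```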

Backing out the effect of a filter

Suppose we have a signal that has been affected by an unwanted filter (perhaps the frequency response of an audio transmission channel from an event to the studio). We know its frequency response (perhaps through testing).

In many cases, we can restore the original nature of the signal by "backing out" the effect of the filter. We do that (mathematically) by dividing the amplitude spectrum of the downstream signal by the frequency response of the "filter" (in the same sense we earlier spoke of multiplying the amplitude spectrum of a signal by the amplitude frequency response of a filter).

We may do this physically (not mathematically) by passing the arriving signal through a filter whose amplitude frequency response is the inverse of the response of the "unwanted filter" (the transmission channel). Such a filter is often spoken of as an equalizer, since its purpose is to make the end-to-end frequency response of the whole chain equal over the frequency range of interest.

If we wish to work in the time domain (as is often most convenient for signals represented in digital form), we can take the waveform of the downstream signal and deconvolve it with the impulse response of the unwanted filter.

It is harder to explain how that is actually done, but by now you should be able to grasp how this fits into the entire story.
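
Still, here is a minimal sketch of the frequency-domain version of this "backing out" (Python with NumPy; the small eps term is my own regularization, needed because a plain division blows up wherever the filter's response is near zero, which hints at the practical limits of the technique):

```python
import numpy as np

x = np.random.randn(512)                  # the original signal
h = np.array([0.25, 0.5, 0.25])           # impulse response of the "unwanted filter"
y = np.convolve(x, h)                     # the downstream (filtered) signal

n = len(y)
H = np.fft.rfft(h, n)                     # frequency response of the unwanted filter
Y = np.fft.rfft(y, n)                     # spectrum of the downstream signal

eps = 1e-3                                # guard against division by (near) zero
X_est = Y * np.conj(H) / (np.abs(H)**2 + eps)
x_est = np.fft.irfft(X_est, n)[:len(x)]   # an estimate of the original waveform
```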

[Continued]
 

Doug Kerr

Well-known member
About (de)convolution - part 3 of 3

[Part 3]

NOW TO PHOTOGRAPHY

Temporal and spatial frequency

In the previous discussion, the frequency spoken of was temporal frequency, meaning that its basis was events per unit time ("temporal" means, in this context, "pertaining to time").

In treating similar issues in photography, the "signal" we speak of becomes a pattern of changing color* across a scene, or across its image. Here, instead of time, we have distance, and instead of temporal frequency, we have frequency that is defined in terms of events per unit distance - spatial frequency (pertaining to space).

* I'll simplify the further discussion by assuming a "monochrome" camera, so we need only speak of the luminance across a scene, or the illuminance across its image.

Again, we can look at things here both in the spatial domain (equivalent to the time domain for our electrical examples) and the spatial frequency domain.

Our optical system as a filter

Our optical system (the lens), in the context of a particular photographic setup (including the state of focus), is a filter. The actual "signal" of the scene will be changed to a different "signal" on the sensor. Let's in particular think of the affliction of imperfect focus. One way to look at its mechanism is that a point in the scene is changed into a finite-size "blur figure" on the image.

We can consider this (imperfectly-focused) optical system as a filter (and, in that respect, an unwanted one).

(Note that now things become two-dimensional (vertical/horizontal axes), unlike the electrical situation, whose realm - time - is one-dimensional. But I'll mostly ignore that, for all the familiar reasons.)

If we, in a laboratory, provide an isolated point source of light and plot the distribution of illuminance along a line on a test focal plane, we get what is called the (one dimensional) point spread function of the optical system (in context - that is, taking into account the focus situation).

This exactly corresponds to the impulse response of an electrical filter, and, like that, completely describes the behavior of the optical system (in context), viewed as a filter. In particular, it is the spatial domain description.

Now consider an actual scene (let's say we track along a line across it). If we had a mathematical description of the variation of luminance along that line, we could predict the variation of illuminance along the corresponding line in the image by convolving the variation in the scene with the point spread function of the optical system in context.

And we can in fact think of the optical system as doing just that when it "generates" the image.
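
A minimal sketch of that image formation in Python (NumPy and SciPy assumed; the disc-shaped PSF is a crude stand-in for a defocus blur figure):

```python
import numpy as np
from scipy.signal import convolve2d

scene = np.zeros((101, 101))
scene[50, 50] = 1.0                           # an isolated point source in the scene

yy, xx = np.mgrid[-5:6, -5:6]
psf = (xx**2 + yy**2 <= 25).astype(float)     # disc-shaped blur figure, radius 5 pixels
psf /= psf.sum()                              # normalize so total light is preserved

image = convolve2d(scene, psf, mode='same')   # the blurred image delivered to the sensor
```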

Correcting for misfocus

We would of course like the variation of illuminance across the image to be identical to the variation of luminance across the scene, but lens aberrations (and the affliction of misfocus) disrupt that.

Suppose that indeed we know the point spread function of our optical system (taking into account the focus situation for the current focus setting of the camera) as it applies to an object at a certain distance.

If we take the variation of illuminance revealed by the digital image from the sensor and deconvolve it with the point spread function of the optical system (in context), we will get the variation of luminance of the original scene.

Thus, the deconvolution process, conducted digitally during post-processing, works as an equalizer to "back out" the affliction of misfocus (and perhaps of some other lens aberrations as well).
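
As a minimal sketch of that deconvolution (Python with NumPy, mirroring the point-source example above; the eps regularization is again my own addition, and practical tools use more careful methods, Wiener or Richardson-Lucy style, to cope with noise):

```python
import numpy as np

scene = np.zeros((101, 101))
scene[50, 50] = 1.0                           # the same point-source scene as above

yy, xx = np.mgrid[-5:6, -5:6]
psf = (xx**2 + yy**2 <= 25).astype(float)     # the same disc-shaped blur figure
psf /= psf.sum()

PSF = np.fft.fft2(psf, s=scene.shape)         # spatial-frequency description of the PSF
blurred = np.real(np.fft.ifft2(np.fft.fft2(scene) * PSF))   # the image the lens delivers

eps = 1e-3                                    # guard against division by (near) zero
EST = np.fft.fft2(blurred) * np.conj(PSF) / (np.abs(PSF)**2 + eps)
scene_est = np.real(np.fft.ifft2(EST))        # an estimate of the original scene
```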

**************

Best regards,

Doug
 

Ben Rubinstein

pro member
So technically if a company was to program in the exact effects of the AA filter with every lens at each focal length their software could exactly reverse the effects of the AA filter rather than the arbitrary capture sharpening which we do by feel...
 

Jerome Marot

Well-known member
So technically if a company was to program in the exact effects of the AA filter with every lens at each focal length their software could exactly reverse the effects of the AA filter rather than the arbitrary capture sharpening which we do by feel...

Up to a point, yes. I say "up to a point" because convolution stands on very strict mathematical assumptions which everybody seems to forget, namely that the system is perfectly linear. Any system with limited dynamic range or noise is not perfectly linear. That should give you an idea of the practical limits in the real world.

But... supposing that it would work, deconvolving the AA filter is not what one wants to do. In a nutshell, you would be back to a sensor without an AA filter and would bring back the problems those have (e.g. moiré).
 
So technically if a company was to program in the exact effects of the AA filter with every lens at each focal length their software could exactly reverse the effects of the AA filter rather than the arbitrary capture sharpening which we do by feel...

Hi Ben,

Indeed, if the combined effect of residual lens aberrations, diffraction at each aperture, AA filter, and microlenses, added to the Bayer CFA demosaicing, is characterized, one can with reasonable success back out the loss of sharpness caused by the imaging chain. Even when limited to the effect of the lens, one can improve image quality. Deconvolution worked on the Hubble Space Telescope's flawed mirror optics, and it also works with simpler setups.

No surprise that this is the route being taken by Hasselblad, where their newer MF cameras form a closed system with the lenses. The lens effects are an integral part of their software to demosaic the Raw files.

However, we can also achieve part of the restoration by using appropriate software solutions that are slowly becoming available, e.g. FocusMagic or Topaz Labs' Infocus. Even without perfect prior knowledge of the resulting Point Spread Function (PSF), we can come close to perfectly sharp images.

One of the remaining difficulties is that the PSF is not spatially uniform across the image (e.g. corners softer than center); also, outside the depth-of-field zone the image is a more blurred version of reality (which we may want to keep blurred, or not).

Cheers,
Bart
 

Ben Rubinstein

pro member
I had wondered how hasselblad and now phase one were doing the corrections on their respective 28mm lenses, to be honest I'd assumed it was cruder than that. I wonder how individual lens QC variation plays into their carefully worked out calculations.
 
I had wondered how hasselblad and now phase one were doing the corrections on their respective 28mm lenses, to be honest I'd assumed it was cruder than that. I wonder how individual lens QC variation plays into their carefully worked out calculations.

Hi Ben,

These manufacturers probably won't tell us in detail what they do. Part of what they do is simple geometric correction of pincushion/barrel distortion if needed. Part is correction for vignetting, basic stuff. What we do not know is what they do in addition to that, or the order in which they do it. There is e.g. a benefit when the Chromatic Aberrations, should there be any, can be addressed in the undemosaiced data, but that will complicate the Raw conversion. Likewise for deconvolution, maybe they use a simple assumption rather than a full spatially variant approach. We can only guess.

From what I've seen, Phase One only does the basic corrections, nothing fancy. That's why they can unlock that functionality for other cameras as well (as they did with the LCC and vignetting correction). I'm using that with my Canon Raws, and it works fine.

Adobe ACR also does only basic stuff, geometrical and vignetting/light-falloff corrections.

Cheers,
Bart
 

Doug Kerr

Well-known member
Hi, Ben,
So technically if a company was to program in the exact effects of the AA filter with every lens at each focal length their software could exactly reverse the effects of the AA filter rather than the arbitrary capture sharpening which we do by feel...

One would not, even if one could, want to wholly reverse the effect of the AA filter.

What I presume is that one could reverse out aspects of its response that were "counterproductive" (that is, that reduce the frequency response of the system in the region below the Nyquist frequency, where no reduction is needed to avert aliasing).

That's probably what you meant.

I discuss this a bit in the 3-part note on the AA filter I just posted here.

Best regards,

Doug
 