What is (de)convolution?

Doug Kerr

Well-known member
We hear more and more these days about convolution (actually, most often, about deconvolution, which is of course its inverse). But just what is convolution, anyway?

I will describe it here in the context of electrical engineering (you know, surgeons cut).

An electrical "signal" can be described in both the time domain and the frequency domain. The time domain representation is just the familiar waveform "plot", as we would see on an oscilloscope. It tells us the instantaneous value (voltage) of the waveform at each instant of time.

The frequency domain representation is perhaps the amplitude spectral density (ASD) plot, as we would see on a spectrum analyzer. It essentially tells us how the power in the signal is distributed over different frequencies (the amplitude at each frequency being the square root of the power density there).

We can convert the time domain representation of a signal to its frequency domain representation using a mathematical operation called the Fourier transform, and convert from the frequency domain representation back to the time domain representation using the inverse Fourier transform.
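For readers who want to experiment, here is a minimal NumPy sketch of that round trip (the signal, its frequency, and the sample rate are all made up for illustration):

    import numpy as np

    # A made-up example signal: 1 second of a 5 Hz sine sampled at 100 Hz.
    fs = 100
    t = np.arange(fs) / fs
    waveform = np.sin(2 * np.pi * 5 * t)

    # Time domain -> frequency domain (Fourier transform).
    spectrum = np.fft.rfft(waveform)
    freqs = np.fft.rfftfreq(len(waveform), d=1/fs)
    print(freqs[np.argmax(np.abs(spectrum))])  # the amplitude peaks at 5.0 Hz

    # Frequency domain -> time domain (inverse Fourier transform).
    recovered = np.fft.irfft(spectrum, n=len(waveform))
    print(np.allclose(recovered, waveform))    # True: the round trip is lossless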

Now consider a filter, through which we might pass our signal.

We may know the frequency domain representation of the filter, its frequency response, which is a plot of the ratio of the output voltage to the input voltage as a function of frequency. (There is also a phase angle aspect of this, but I'll ignore it to make the story clearer.)

We may also have the time domain representation of the filter. This is a plot of the waveform that will come out of the filter if we feed into it a curious thing (not actually physically realizable), a unit impulse (a voltage spike that has zero width but a finite amount of energy). We should be glad we never actually encounter one, as it would have infinite peak voltage! The waveform that comes out is called the impulse response of the filter, and it completely describes the filter's behavior, just as the frequency response does.
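A toy discrete-time illustration of this (the filter coefficients here are arbitrary, chosen only to give a simple low-pass):

    import numpy as np
    from scipy.signal import lfilter

    # An arbitrary first-order low-pass filter: y[n] = 0.1*x[n] + 0.9*y[n-1].
    b, a = [0.1], [1.0, -0.9]

    # A discrete-time "unit impulse": 1 at n = 0, zero elsewhere. (Unlike its
    # continuous cousin, this one is perfectly realizable.)
    impulse = np.zeros(50)
    impulse[0] = 1.0

    # Feeding the impulse through the filter yields its impulse response,
    # which completely characterizes this linear, time-invariant filter.
    h = lfilter(b, a, impulse)
    print(h[:4])  # a decaying exponential: 0.1, 0.09, 0.081, 0.0729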

Not surprisingly, we can convert between the time domain and frequency domain representations of the filter's behavior by using the Fourier transform (in both directions, as before).

Now suppose we have some signal (described in the time domain, by its waveform) and a filter, whose response we have in the frequency domain. How can we determine what will come out of the filter when we feed in our signal (we want the result as a waveform)?

Well, here's one way:

• Convert the waveform (time domain) to the amplitude spectral density function (frequency domain) using the Fourier transform.
• Multiply the amplitude spectral density of the signal by the frequency response of the filter (meaning that, for each frequency in the range of interest, we multiply the value of the ASD by the value of the frequency response, and plot that as a point on the ASD of the output signal). We now have the ASD of the output signal.
• Convert the ASD of the output signal to its waveform using the inverse Fourier transform. (A code sketch of this route follows the list.)
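Here is that route as a minimal NumPy sketch (the two-tone signal and the brick-wall low-pass response are made up for illustration; note that the FFT treats the signal as periodic, so this is circular filtering):

    import numpy as np

    # Made-up example: a 5 Hz + 40 Hz signal sampled at 200 Hz for 1 second.
    fs = 200
    t = np.arange(fs) / fs
    signal = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)

    # Step 1: waveform (time domain) -> spectrum (frequency domain).
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1/fs)

    # Step 2: multiply, frequency by frequency, by the filter's frequency
    # response; here an ideal low-pass keeping everything below 20 Hz.
    response = np.where(freqs < 20.0, 1.0, 0.0)
    out_spectrum = spectrum * response

    # Step 3: spectrum of the output -> output waveform.
    output = np.fft.irfft(out_spectrum, n=len(signal))
    print(np.allclose(output, np.sin(2 * np.pi * 5 * t), atol=1e-9))  # True: the 40 Hz tone is gone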

Or, we could do this:

• Convert the frequency response of the filter to its impulse response (using the inverse Fourier transform).
• Convolve the waveform of the signal with the impulse response of the filter. (I won't try to give an explanation here of what this involves.) The result will be the waveform of the output signal. (Again, a sketch follows the list.)
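And that route in the same style (the 5-tap moving average stands in for the filter's impulse response; the cross-check at the end confirms the two routes agree):

    import numpy as np

    # The same made-up signal as in the previous sketch.
    fs = 200
    t = np.arange(fs) / fs
    signal = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)

    # A simple FIR filter, given directly by its impulse response.
    h = np.ones(5) / 5

    # Route 2: convolve the waveform with the impulse response.
    out_time = np.convolve(signal, h)

    # Cross-check against route 1: multiply the spectra (zero-padded so the
    # circular FFT product matches the linear convolution), then invert.
    n = len(signal) + len(h) - 1
    out_freq = np.fft.irfft(np.fft.rfft(signal, n) * np.fft.rfft(h, n), n)
    print(np.allclose(out_time, out_freq))  # True: same output either way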

So, even though I haven't described what convolution involves, we can see that convolution is the way to combine the time-domain representations of an input signal and a filter to get the time domain representation of the output signal, just as multiplication is the way to combine the frequency domain representations of an input signal and a filter to get the frequency domain representation of the output signal.

Now suppose that we don't know the input waveform, but know the output waveform and the impulse response of the filter, and want to know the waveform of what must have been the input signal. Well, we take the output waveform and deconvolve it with the impulse response of the filter (essentially undoing the operation we discussed earlier).
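A minimal sketch of that undoing, under the simplest possible assumptions (no noise, and a known impulse response; the small eps is a regularization I've added, since dividing by near-zero spectral values is exactly where real deconvolution gets hard):

    import numpy as np

    fs = 200
    t = np.arange(fs) / fs
    x = np.sin(2 * np.pi * 5 * t)   # the input waveform (pretend it's unknown)
    h = np.ones(5) / 5              # the filter's known impulse response
    y = np.convolve(x, h)           # the observed output waveform

    # Deconvolution: divide the output spectrum by the filter's spectrum.
    n = len(x) + len(h) - 1
    H = np.fft.rfft(h, n)
    eps = 1e-12                     # guards against division by near-zero values
    x_rec = np.fft.irfft(np.fft.rfft(y, n) * np.conj(H) / (np.abs(H)**2 + eps), n)
    print(np.allclose(x_rec[:len(x)], x, atol=1e-6))  # True in this noise-free toy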

Now, how does this relate to photographic imaging?

Well, most of our work here deals with space-domain representations (which correspond to the time domain representations for electrical signals). (We do deal with the frequency domain when discussing MTF.)

If we consider a defocused image, the defocusing takes what should be a point image and makes it into a circle of confusion. The mathematical description of that process is called the "spread function" - it defines how what should have been a point is "spread" into a circular figure.

This spread function is wholly analogous to the impulse response of a filter (time domain) in the electrical case. (It works two-dimensionally, in the geometric sense, something that has no equivalent in the electrical time domain.)

So, if we wanted to take the "result" of the defocus and determine what the original "waveform" of the "should be image" was, we could deconvolve the recorded ("blurred") image with the known (or presumed) spread function that describes the blurring process. The result should be the "unblurred" image - what we would have recorded but for the intervention of the misfocus.
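As a two-dimensional toy of exactly that (everything here is made up: the synthetic "should be" image, the Gaussian stand-in for the spread function, and the regularization eps; real images add noise, which makes the division far less forgiving):

    import numpy as np

    def gaussian_psf(shape, sigma):
        # A normalized, centered 2-D Gaussian spread function.
        y, x = np.indices(shape)
        cy, cx = (shape[0] - 1) / 2, (shape[1] - 1) / 2
        psf = np.exp(-((y - cy)**2 + (x - cx)**2) / (2 * sigma**2))
        return psf / psf.sum()

    # A made-up "should be" image: a white square on a black background.
    sharp = np.zeros((64, 64))
    sharp[24:40, 24:40] = 1.0

    # Forward model: blurring = convolution with the spread function,
    # done here as multiplication in the 2-D frequency domain.
    psf = gaussian_psf(sharp.shape, sigma=1.0)
    H = np.fft.fft2(np.fft.ifftshift(psf))  # shift the PSF peak to (0, 0)
    blurred = np.real(np.fft.ifft2(np.fft.fft2(sharp) * H))

    # Deconvolution: divide by the spread function's spectrum.
    eps = 1e-12
    restored = np.real(np.fft.ifft2(np.fft.fft2(blurred) * np.conj(H)
                                    / (np.abs(H)**2 + eps)))
    print(np.abs(restored - sharp).max())  # effectively zero in this noise-free toy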

Best regards,

Doug
 

Joachim Bolte

New member
....determine what the original "waveform" of the "should be image" was...

Isn't that problematic? The whole reason for sharpening is that you do not have a sharp representation of the image... The best you can do is compare the blurred original with an even more blurred version.
 

Doug Kerr

Well-known member
Hi, Joey,

Isn't that problematic? The whole reason for sharpening is that you do not have a sharp representation of the image.
No, in theory one can, from the blurred image, and knowledge [perhaps presumption] of the mechanism of blurring, actually reconstruct what the not-blurred image would have been.
... The best you can do is compare the blurred original with an even more blurred version.

What I describe does not involve comparing the blurred image with an even more blurred image.

You may be thinking of "unsharp mask sharpening", which works something like that. It does not seek to reconstruct "what the unblurred image would have been"; it seeks to make the delivered image "look sharper".
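For contrast, here is a minimal sketch of that unsharp-mask idea (the helper name and parameters are mine; note it only boosts local contrast at edges, it does not model or invert the blurring process):

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def unsharp_mask(img, radius=2.0, amount=1.0):
        # Add back the difference between the image and a blurred copy.
        # This exaggerates edge contrast to make the image "look sharper".
        blurred = gaussian_filter(img, sigma=radius)
        return img + amount * (img - blurred)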

Best regards,

Doug
 

Joachim Bolte

New member
Hi Doug,

But how then does a sharpening function know the difference between parts of an image that are unsharp (and so should be sharpened), and parts that are blurry by themselves, like soft transitions?

Hi Joachim,

It doesn't have to know that. Assuming a spatially uniform blur function, all pixels have some of their signal spread to their neighboring pixels in the same proportions. The deconvolution takes those spread-out portions of each original pixel and puts them back where they originated; that's why the process is referred to as restoration.
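A tiny demonstration of that uniformity (the 3x3 kernel is made up; note how a single bright pixel comes out of the blur as a copy of the kernel, which is just what "point spread function" means):

    import numpy as np
    from scipy.ndimage import convolve

    # A made-up 3x3 blur kernel: each pixel keeps 40% of its signal and
    # spreads the remaining 60% equally over its 8 neighbors.
    psf = np.full((3, 3), 0.075)
    psf[1, 1] = 0.4

    # An image containing a single bright pixel...
    img = np.zeros((7, 7))
    img[3, 3] = 1.0

    # ...is blurred into a copy of the kernel: every pixel's signal is
    # spread to its neighbors in the same proportions.
    blurred = convolve(img, psf)
    print(blurred[2:5, 2:5])  # equals the psf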

Even if we don't know the exact point spread function (PSF), we can approximate it by trial and error (using a preview of a small section), or use some mathematical procedures and try to extract clues from the image (or images taken under similar circumstances).

In practice, many blur sources (lens, diffraction, AA filter, Bayer CFA) mixed together produce something that looks like a Gaussian PSF, so the approach used by e.g. RawTherapee (based on a simple Gaussian) already delivers quite an improvement.
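A sketch of that trial-and-error loop, under the Gaussian assumption (the helper names and the candidate sigmas are mine; the deconvolution step is the same regularized division sketched earlier in the thread):

    import numpy as np

    def gaussian_psf(shape, sigma):
        # Normalized, centered 2-D Gaussian PSF (as in the earlier sketch).
        y, x = np.indices(shape)
        cy, cx = (shape[0] - 1) / 2, (shape[1] - 1) / 2
        psf = np.exp(-((y - cy)**2 + (x - cx)**2) / (2 * sigma**2))
        return psf / psf.sum()

    def wiener_deconvolve(img, psf, eps=1e-3):
        # Regularized frequency-domain deconvolution; a larger eps is used
        # here than in the noise-free toys, since real crops contain noise.
        H = np.fft.fft2(np.fft.ifftshift(psf))
        return np.real(np.fft.ifft2(np.fft.fft2(img) * np.conj(H)
                                    / (np.abs(H)**2 + eps)))

    def psf_previews(crop, sigmas=(0.5, 1.0, 1.5, 2.0, 2.5)):
        # Deconvolve a small preview crop with several candidate Gaussian
        # widths; one then picks, by eye, the sigma that restores the most
        # detail without ringing artifacts.
        return {s: wiener_deconvolve(crop, gaussian_psf(crop.shape, s))
                for s in sigmas}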

BTW, RawTherapee also reads existing TIFFs, which can then be sharpened with its RL (Richardson-Lucy) restoration method.

Cheers,
Bart
 

Doug Kerr

Well-known member
Hi, Joey,

Hi Doug,

But how then does a sharpening function know the difference between parts of an image that are unsharp (and so should be sharpened), and parts that are blurry by themselves, like soft transitions?
Firstly, it misdirects us to speak of "soft transitions" as "blurry" in a discussion where "blur" is used in a different sense, so I discourage it.

And note that I am not speaking here of "sharpening". I am speaking of "reversing" a "blurring" caused by the overall optical situation.

If the blurring is caused by misfocus, then it indeed will vary over the image, depending on the distance to the object in each case.

Then to mitigate it by deconvolution is in fact a very challenging task, as you recognize.

The theory by which deconvolution-based "deblurring" processes work, given that situation, is way beyond me!

If however we are speaking of "blurring" caused by diffraction, which does not vary with the distance to the object (and doesn't know, or care, about "soft transitions"), then we have the potential of "reversing" it with a systematic process (again, as I said, analogous to equalizing an audio transmission channel).

Best regards,

Doug
 
@Bart&Doug

Clear, thx!

Hi Joachim,

You're welcome. There's more info to be shared; I still learn something new almost every day, although deconvolution drew my attention a long time ago. It is slowly becoming available as built-in functionality in all sorts of imaging applications, like recently in TopazLabs Denoise 5, where it adds some resolution back to an image after noise reduction.

Cheers,
Bart
 

Joachim Bolte

New member
@Bart,

Isn't it so that you can do the work in PS also? It's not a 'function' in that sense of the word, but I think the following steps should get you a deconvolution layer:

1. Start off with an unsharp image on a layer.
2. Copy this layer 2 times.
3. Turn the upper layer invisible, and blur the middle layer by an appropriate amount (I mostly use Gaussian blur, radius between 1 and 3).
4. Turn the upper layer visible, and in the Edit -> Apply Image menu apply the blurred layer to the top layer in 'Subtract' mode, with the value '2' in the upper (Scale) box and '128' in the lower (Offset) box. This will result in an image that resembles the output of a high-pass filter.
5. Set the upper layer to 'Linear Light' blend mode. The combined sharpening and blur layers should resemble your original image.
6. Now toss the blurred middle layer, so that the (now extra) sharpening applies to the original.

One can tweak this sharpening by lowering the opacity of the layer, or by playing with the blend modes.
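For those who prefer to see the arithmetic, here is a NumPy sketch of what those layer operations compute (using an idealized Linear Light formula, base + 2*blend - 2*128 for 8-bit values; Photoshop's actual integer rounding makes the real reconstruction only near-exact; the random array is just a stand-in for pixel data):

    import numpy as np
    from scipy.ndimage import gaussian_filter

    orig = np.random.default_rng(0).uniform(0, 255, (64, 64))  # stand-in image
    blurred = gaussian_filter(orig, sigma=2.0)                 # step 3

    # Step 4, Apply Image "Subtract" with Scale 2, Offset 128:
    high_pass = (orig - blurred) / 2 + 128

    # Step 5, Linear Light over the blurred layer reconstructs the original:
    reconstructed = blurred + 2 * high_pass - 256
    print(np.allclose(reconstructed, orig))  # True

    # Step 6, blurred layer tossed, so the blend lands on the original:
    sharpened = np.clip(orig + 2 * high_pass - 256, 0, 255)
    # i.e. orig + (orig - blurred): a high-pass boost, as in unsharp masking.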
 