
Topaz Labs InFocus, a new deconvolution based sharpening plugin

Doug Kerr

Well-known member
Hi, Alain,

How would you define Deconvolution? Anyone? I can't find a good definition on the web with regard to photography. The definitions I found are about algorithms.
Many afflictions of an image (such as imperfect focus) affect the "ideal" image just as a certain kind of filter would, leading to the "injured" image.

If we run the injured image through a filter with the inverse response of the first "filter", it will nullify the effect of the original (and undesired) "filter" - restoring the original image.

The mathematical operation by which that is done is called deconvolution.

That doesn't really define deconvolution, and in fact I won't try to do that here - it's rather tricky.

But, for the masochists, I will try to give some insight into its significance through context.

Deconvolution

I will speak for a while about electrical signals, not images.

If we work in the "frequency domain", the effect of a filter (including an unintended one) on a signal is modeled mathematically this way:

We take the spectrum of the signal (the "plot" of the distribution of its power by frequency) and multiply it by the frequency response of the filter. That means that for each frequency, we multiply the value of the spectrum at that frequency by the value of the filter response at that frequency. The result is the spectrum of the signal as affected by the filter - the "injured" signal if the filter is unintended.

To "back out" the effect of such a filter, we run the injured signal through a filter with the inverse response of the first filter. To model this mathematically, we take the spectrum of the (injured) signal and multiply it by the frequency response of the "inverse filter". But mathematically, this is the same as dividing the injured signal's spectrum by the response of the original filter.
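That round trip - multiply by the filter's response, then divide it back out - can be sketched in a few lines of NumPy. Everything here is illustrative (the signal, the 3-tap filter, and the variable names are my own, not from any particular tool):

```python
import numpy as np

# Toy sketch: "injure" a signal with a low-pass filter in the frequency
# domain, then back it out by dividing by the same frequency response.
rng = np.random.default_rng(0)
signal = rng.standard_normal(64)     # the "original" signal
h = np.array([1/3, 1/3, 1/3])        # 3-tap moving average: the unintended "filter"

H = np.fft.fft(h, n=64)              # frequency response of the filter
S = np.fft.fft(signal)               # spectrum of the original signal
injured = np.fft.ifft(S * H).real    # spectrum multiplied -> the "injured" signal

# Inverse filter: divide the injured spectrum by H. (This particular H
# has no zeros on a 64-point grid; real tools must regularize wherever
# H is near zero, or the division blows up.)
restored = np.fft.ifft(np.fft.fft(injured) / H).real
print(np.allclose(restored, signal))  # True
```

Note the caveat in the comment: division only "nullifies" the filter where the filter's response is safely nonzero.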

Now, if we work in the "time domain", we start with the waveform of the original signal (not its spectrum) and convolve it with the time response (not frequency response) of the filter.

The process of convolution is quite tricky to define, and I won't really do that here. The important thing is that we use it for the same purpose when working in the time domain as we use multiplication for in the frequency domain.

To remove the effect of the original filter, we can deconvolve the injured signal by the time response of the original filter. That process is called deconvolution.

We use it for the same thing when working in the time domain as we use division for in the frequency domain.
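That time-domain/frequency-domain correspondence (the convolution theorem) can be checked directly. A toy example of my own, using circular convolution so the two domains match exactly:

```python
import numpy as np

# Time domain: circular convolution of x with h, by direct summation.
x = np.array([1.0, 2.0, 3.0, 4.0])
h = np.array([0.5, 0.5, 0.0, 0.0])   # averages each sample with its predecessor
n = len(x)
conv = np.array([sum(x[(i - k) % n] * h[k] for k in range(n)) for i in range(n)])

# Frequency domain: multiply the two spectra, then transform back.
via_fft = np.fft.ifft(np.fft.fft(x) * np.fft.fft(h)).real

print(conv)                          # [2.5 1.5 2.5 3.5]
print(np.allclose(conv, via_fft))    # True
```

The same equality, read with division on the right-hand side, is what deconvolution exploits.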

Now back to photography

In photography, we are working in the "spatial domain", which is much like the time domain for electrical signals. When an ideal image is impacted by, for example, defocus blur (which is equivalent to the result of the application of a certain type of filter), the spatial variation of luminance is convolved by the spread function of that "filter". The process is called convolution.

To "back out" the impact of that undesirable "filter", we can take the injured image and deconvolve it by the spread function of the "filter" (assuming we know that). That process is called deconvolution.
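A minimal two-dimensional sketch of "deconvolve by the spread function", assuming the spread function is known exactly. The PSF here is a 3×3 box, purely for illustration; real spread functions (and real tools, which must regularize the division) are more complicated:

```python
import numpy as np

rng = np.random.default_rng(1)
image = rng.random((32, 32))          # the "ideal" image

# A 3x3 box-blur spread function, centered at the origin (wrapped corners,
# as the FFT convention requires).
psf = np.zeros((32, 32))
for dy in (-1, 0, 1):
    for dx in (-1, 0, 1):
        psf[dy % 32, dx % 32] = 1 / 9

H = np.fft.fft2(psf)                  # transfer function of the "filter"
blurred = np.fft.ifft2(np.fft.fft2(image) * H).real   # the "injured" image

# Deconvolution: divide the injured spectrum by H. This particular H has
# no zeros on a 32x32 grid; practical deconvolution (e.g. Wiener) adds a
# regularizing term to keep noise from exploding where H is tiny.
restored = np.fft.ifft2(np.fft.fft2(blurred) / H).real
print(np.allclose(restored, image))   # True
```

In practice the image also carries noise, which the inverse filter amplifies at frequencies where H is small - hence the regularized variants commercial tools use.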

Thus the name of that approach to removing (for example) misfocus blur.

Best regards,

Doug
 

Alain Briot

pro member
Thanks Doug. I am glad you posted an answer to my question. When I typed it I thought it was right up your alley. Between you and Bart I now have a full understanding of what deconvolution is.
 

Doug Kerr

Well-known member
Hi, Asher,

How do we find those "frequency response" numbers?
Well, in our case, what we need is the spread function (which is a spatial response, not a frequency response) of the afflicting phenomenon; that is, how it takes what should be a point in the image and changes it into a blur figure with a certain distribution of luminance.

If for example we wish to cure misfocus, then that spread function can be calculated mathematically for an ideal lens, based on aperture, focal length, and the distance at which the lens is focused. For an actual lens, ideally it would be determined by laboratory testing. Still, a deconvolution function based on the spread function of an ideal lens will do a "pretty good" job of correcting defocus.
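For the ideal-lens case, a common first-order model (ignoring diffraction) takes the defocus spread function to be a uniform disk - a "pillbox" - whose diameter is the thin-lens circle of confusion. A sketch with invented parameter values (the function names and numbers below are mine, not from any lens or tool):

```python
import numpy as np

def coc_diameter(f, N, s_focus, s_subject):
    """Blur-circle diameter on the sensor, thin-lens approximation.
    f: focal length, N: f-number; all distances in the same units as f."""
    return (f**2 / N) * abs(s_subject - s_focus) / (s_subject * (s_focus - f))

def pillbox_psf(diameter_px, size):
    """Uniform-disk ('pillbox') defocus PSF, given the blur diameter in pixels."""
    y, x = np.mgrid[:size, :size] - (size - 1) / 2
    psf = (x**2 + y**2 <= (diameter_px / 2)**2).astype(float)
    return psf / psf.sum()           # normalize: a PSF conserves total light

# 50 mm lens at f/2.8, focused at 2 m, subject at 2.5 m (all in mm):
c = coc_diameter(f=50.0, N=2.8, s_focus=2000.0, s_subject=2500.0)
print(round(c, 3))                   # 0.092  (mm blur-circle diameter)

psf = pillbox_psf(diameter_px=9, size=15)
```

A PSF like this (converted to pixels via the sensor's pixel pitch) is what the deconvolution step would divide by.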

Best regards,

Doug
 

Doug Kerr

Well-known member
Steven W. Smith, Ph.D, in his book, The Scientist and Engineer's Guide to Digital Signal Processing, in chapter 17, says:

"Deconvolution is nearly impossible to understand in the time domain, but quite straightforward in the frequency domain."

I'll certainly buy the first clause of that.

But deconvolution (as the term is used today) is a creature only of the time domain (that is, assuming a context where the "dual" domains are time and frequency), so his contrast doesn't really work out.

I'm sure what Smith is referring to is that we can easily understand the matter of "backing out the effect of an unwanted filter" when we work in the frequency domain. There, we divide the spectrum of the "afflicted" signal by the frequency response of the "afflicting" filter (frequency-by-frequency). But this is not (under today's practice) deconvolution.

Now, at an earlier time, we sometimes heard the process of multiplying the spectrum of a signal with the frequency response of a filter described as "convolution" (rather than "multiplication"). The reason was that "multiplication" did not seem quite right, since we were working with entire functions of frequency, not just single numbers. It didn't seem like the familiar operation of multiplication. (That's why I always, at first mention, include the parenthetical, "In the sense that we multiply . . .".)

Some authors then adopted the term "convolution" to mean, "multiplying the values of two functions (of frequency) for every frequency in the range." (I in fact used to use the term that way, until a young student in one of my seminars - less than 15 years ago - straightened me out!)

But since the emergence of the concept of digital signal processing, the term convolution has been limited to the process we perform in the time domain to get the same result that, working in the frequency domain, we would get by multiplication (in the sense of frequency-by-frequency).

In my original note here, I didn't even describe how we perform convolution (since that is a bit tedious, although easily grasped once we go through the drill). But, as Smith aptly observed, it is ever so much harder to describe, and grasp, how we do deconvolution.

But we can't "understand deconvolution in the frequency domain". We can understand a different tool we use there for the same purpose.

Best regards,

Doug
 

Asher Kelman

OPF Owner/Editor-in-Chief
Hi folks, FYI,

http://www.topazlabs.com/infocus/

Finally a dedicated sharpening tool that also works on 64-bit platforms.
It also gives some control to reduce potential sharpening artifacts. When used on a Luminosity blending Layer, it gives an immense amount of control over what gets sharpened, and how much.

While there is no option to use user-defined point spread functions (PSFs), there is a mode that attempts to construct such a PSF from image content. It does need some (blurred) edges in the image to estimate the PSF. It's slightly less effective than a custom PSF, but still gives very acceptable results, e.g. restoring sharpness from diffraction blur or other complex PSFs.
Example of how to use blur estimation.

Cheers,
Bart

So Bart,

How have we progressed in deconvolution tools in the past year? What should we know about approaching 2013?

Asher
 