Open Photography Forums  

#61
May 15th, 2011, 04:58 PM
Bart_van_der_Wolf
pro member
Join Date: Jun 2006
Location: Netherlands
Posts: 4,046

Quote:
Originally Posted by Alain Briot
Why is that? Is it because it wasn't paid attention to at the time?
As far as I can see, it 'slipped their attention'. Deconvolution was used to counteract the effects of the flawed mirror of the Hubble Space Telescope in the early '90s.

The Richardson-Lucy algorithm they apparently used had already been known for more than a decade, if I'm not mistaken.
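
For anyone curious about the machinery, the heart of Richardson-Lucy fits in a dozen lines. Here is a minimal Python sketch (the function and parameter names are my own, for illustration; real implementations add regularization and edge handling):

Code:
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(observed, psf, iterations=30):
    """Estimate the unblurred image from an observed image and a
    known point spread function (PSF)."""
    # Start from a flat estimate with the same mean as the observation.
    estimate = np.full_like(observed, observed.mean(), dtype=float)
    psf_mirror = psf[::-1, ::-1]  # the PSF rotated 180 degrees
    for _ in range(iterations):
        # Predict what the camera would record from the current estimate.
        blurred = fftconvolve(estimate, psf, mode='same')
        # Compare the prediction with the observation (guard the division).
        ratio = observed / np.maximum(blurred, 1e-12)
        # Nudge the estimate toward agreement with the observation.
        estimate *= fftconvolve(ratio, psf_mirror, mode='same')
    return estimate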

Cheers,
Bart
__________________
If you do what you did, you'll get what you got.

#62
May 15th, 2011, 08:03 PM
Alain Briot
pro member
Join Date: May 2006
Location: Phoenix, Arizona, USA
Posts: 1,421

Thank you very much, Bart; this goes a long way toward answering my questions.

Alain
__________________
Alain Briot
Fine Art, workshops, books: Get 40 Free eBooks when you subscribe to my newsletter:
http://www.beautiful-landscape.com

#63
May 15th, 2011, 09:04 PM
Doug Kerr
Senior Member
Join Date: May 2006
Location: Alamogordo, New Mexico, USA
Posts: 8,197

Hi, Alain,

Quote:
Originally Posted by Alain Briot
How would you define Deconvolution? Anyone? I can't find a good definition on the web in regards to photography. The definitions I found are about algorithms.
Many affronts to an image (such as imperfect focus) affect the "ideal" image just as a certain kind of filter would, producing the "injured" image.

If we run the injured image through a filter with the inverse response of the first "filter", it will nullify the effect of the original (and undesired) "filter" - restoring the original image.

The mathematical operation by which that is done is called deconvolution.

That doesn't really define it, and in fact I won't really try to do that here - it's rather tricky.

But, for the masochists, I will try to give some insight into its significance through context.

Deconvolution

I will speak for a while about electrical signals, not images.

If we work in the "frequency domain", the effect of a filter (including an unintended one) on a signal is modeled mathematically this way:

We take the spectrum of the signal (the "plot" of the distribution of its power by frequency) and multiply it by the frequency response of the filter. That means that for each frequency, we multiply the value of the spectrum at that frequency by the value of the filter response at that frequency. The result is the spectrum of the signal as affected by the filter - the "injured" signal if the filter is unintended.

To "back out" the effect of such a filter, we run the injured signal through a filter with the inverse response of the first filter. To model this mathematically, we take the spectrum of the (injured) signal and multiply it by the frequency response of the "inverse filter". But mathematically, this is the same as dividing the injured signal's spectrum by the response of the original filter.

Now, if we work in the "time domain", we start with the waveform of the original signal (not its spectrum) and convolve it with the time response (not frequency response) of the filter.

The process of convolution is quite tricky to define, and I won't really do that here. The important thing is that we use it for the same thing when working in the time domain as we use multiplication for in the frequency domain.

To remove the effect of the original filter, we can deconvolve the injured signal by the time response of the original filter. That process is called deconvolution.

We use it for the same thing when working in the time domain as we use division for in the frequency domain.

Now back to photography

In photography, we are working in the "spatial domain", which is much like the time domain for electrical signals. When an ideal image is impacted by, for example, defocus blur (which is equivalent to the result of the application of a certain type of filter), the spatial variation of luminance is convolved by the spread function of that "filter". The process is called convolution.

To "back out" the impact of that undesirable "filter", we can take the injured image and deconvolve it by the spread function of the "filter" (assuming we know that). That process is called deconvolution.

Thus the name of that approach to removing (for example) the result of misfocus blur.

Best regards,

Doug


#64
May 15th, 2011, 11:14 PM
Alain Briot
pro member
Join Date: May 2006
Location: Phoenix, Arizona, USA
Posts: 1,421

Thanks, Doug. I am glad you posted an answer to my question. When I typed it, I thought it was right up your alley. Between you and Bart, I now have a full understanding of what deconvolution is.
__________________
Alain Briot
Fine Art, workshops, books: Get 40 Free eBooks when you subscribe to my newsletter:
http://www.beautiful-landscape.com

#65
May 15th, 2011, 11:55 PM
Asher Kelman
OPF Owner/Editor-in-Chief
Join Date: Apr 2006
Posts: 32,777

Doug,

How do we find those "frequency response" numbers?

Asher
__________________
Follow us on Twitter at @opfweb

Our purpose is getting to an impressive photograph. So we encourage browsing and then feedback. Consider a link to your galleries annotated, C&C welcomed. Images posted within OPF are assumed to be for Comment & Critique, unless otherwise designated.

#66
May 16th, 2011, 12:07 AM
Doug Kerr
Senior Member
Join Date: May 2006
Location: Alamogordo, New Mexico, USA
Posts: 8,197

Hi, Asher,

Quote:
Originally Posted by Asher Kelman
How do we find those "frequency response" numbers?
Well, in our case, what we need is the spread function (which is a spatial response, not a frequency response) of the afflicting phenomenon; that is, how it takes what should be a point in the image and changes it into a blur figure with a certain distribution of luminance.

If, for example, we wish to cure misfocus, then that spread function can be calculated mathematically for an ideal lens, based on aperture, focal length, and the distance at which the lens is focused. For an actual lens, it would ideally be determined by laboratory testing. Even so, a deconvolution function based on the spread function for an ideal lens will do a "pretty good" job of correcting defocus.
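
For the curious, the geometric (thin-lens) version of that calculation is short. A sketch, assuming an ideal lens and ignoring diffraction and aberrations - which is exactly why a lab-measured spread function is better:

Code:
def blur_circle_diameter(f, N, s_focus, s_subject):
    """Diameter of the defocus blur circle on the sensor (thin-lens model).

    f          focal length (mm)
    N          f-number
    s_focus    distance at which the lens is focused (mm)
    s_subject  actual subject distance (mm)
    """
    return (f**2 * abs(s_subject - s_focus)) / (N * s_subject * (s_focus - f))

# A 50 mm lens at f/2.8, focused at 2 m, subject at 3 m:
print(blur_circle_diameter(50.0, 2.8, 2000.0, 3000.0))  # ~0.15 mm

A uniform disc of that diameter (converted to pixels) is then the spread function one would deconvolve by.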

Best regards,

Doug

#67
May 16th, 2011, 04:02 PM
Doug Kerr
Senior Member
Join Date: May 2006
Location: Alamogordo, New Mexico, USA
Posts: 8,197

Steven W. Smith, Ph.D., in chapter 17 of his book The Scientist and Engineer's Guide to Digital Signal Processing, says:
"Deconvolution is nearly impossible to understand in the time domain, but quite straightforward in the frequency domain."
I'll certainly buy the first clause of that.

But deconvolution (as the term is used today) is a creature only of the time domain (that is, assuming a context where the "dual" domains are time and frequency), so his contrast doesn't really work out.

I'm sure what Smith is referring to is that we can easily understand the matter of "backing out the effect of an unwanted filter" when we work in the frequency domain. There, we divide the spectrum of the "afflicted" signal by the frequency response of the "afflicting" filter (frequency-by-frequency). But this is not (under today's practice) deconvolution.

Now, at an earlier time, we sometimes heard the process of multiplying the spectrum of a signal with the frequency response of a filter described as "convolution" (rather than "multiplication"). The reason was that "multiplication" did not seem quite right, since we were working with entire functions of frequency, not just single numbers. It didn't seem like the familiar operation of multiplication. (That's why I always, at first mention, include the parenthetical, "In the sense that we multiply . . .".)

Some authors then adopted the term "convolution" to mean, "multiplying the values of two functions (of frequency) for every frequency in the range." (I in fact used to use the term that way, until a young student in one of my seminars - less than 15 years ago - straightened me out!)

But since the emergence of the concept of digital signal processing, the term convolution has been limited to the process we perform in the time domain to get the same result that, working in the frequency domain, we would get by multiplication (in the sense of frequency-by-frequency).
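
That modern division of labor is easy to verify numerically. A quick check of the convolution theorem (my own sketch):

Code:
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(64)    # a signal (time domain)
h = rng.standard_normal(16)    # a filter's time response

# Time domain: convolution.
time_domain = np.convolve(x, h)               # length 64 + 16 - 1 = 79

# Frequency domain: multiplication, frequency by frequency.
n = x.size + h.size - 1
freq_domain = np.fft.irfft(np.fft.rfft(x, n) * np.fft.rfft(h, n), n)

print(np.allclose(time_domain, freq_domain))  # True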

In my original note here, I didn't even describe how we perform convolution (since that is a bit tedious, although easily grasped once we go through the drill). But, as Smith aptly observed, it is ever so much harder to describe, and grasp, how we actually do deconvolution.

But we can't "understand deconvolution in the frequency domain". We can understand a different tool we use there for the same purpose.

Best regards,

Doug

#68
May 17th, 2011, 12:19 AM
Jerome Marot
Senior Member
Join Date: Jan 2011
Location: Munich, Germany
Posts: 3,534

Quote:
Originally Posted by Asher Kelman
How do we find those "frequency response" numbers?
By measuring how a known signal or a set of known signals are transformed by the filter.
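
In code, that measurement amounts to a spectral division. A sketch, with a made-up four-tap filter standing in for the unknown one:

Code:
import numpy as np

rng = np.random.default_rng(2)
probe = rng.standard_normal(4096)         # known broadband test signal
h_true = np.array([0.2, 0.5, 0.2, 0.1])   # the "unknown" filter

output = np.convolve(probe, h_true)       # what comes out of the filter

# Frequency response = output spectrum / input spectrum.
n = probe.size + h_true.size - 1
H = np.fft.rfft(output, n) / np.fft.rfft(probe, n)

# Sanity check: back in the time domain we recover the filter's taps.
print(np.round(np.fft.irfft(H, n)[:h_true.size], 3))  # [0.2 0.5 0.2 0.1]

In practice one averages over many probe signals (or uses a swept sine) so that measurement noise divides out.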

#69
September 17th, 2012, 10:46 AM
Asher Kelman
OPF Owner/Editor-in-Chief
Join Date: Apr 2006
Posts: 32,777

Quote:
Originally Posted by Bart_van_der_Wolf
Hi folks, FYI,

http://www.topazlabs.com/infocus/

Finally a dedicated sharpening tool that also works on 64-bit platforms.
It also gives some control to reduce potential sharpening artifacts. When used on a Luminosity blending Layer, it gives an immense amount of control over what gets sharpened, and how much.

While there is no option to use user-defined point spread functions (PSFs), there is a mode that attempts to construct such a PSF from image content. It does need some (blurred) edges in the image to estimate the PSF. It's slightly less effective than a custom PSF, but still gives very acceptable results, e.g. restoring sharpness from diffraction blur or other complex PSFs.
Example of how to use blur estimation.

Cheers,
Bart
So Bart,

How have we progressed with deconvolution tools in the past year? What should we know as we approach 2013?

Asher
__________________
Follow us on Twitter at @opfweb

Our purpose is getting to an impressive photograph. So we encourage browsing and then feedback. Consider a link to your galleries annotated, C&C welcomed. Images posted within OPF are assumed to be for Comment & Critique, unless otherwise designated.