
Overcoming aliasing "in software"

Doug Kerr

Well-known member
We often see today that such and such a new camera does not use an overt optical low-pass filter to avert aliasing (leading to an improvement in resolution). Often we then hear that instead, the mischief caused by aliasing is overcome by some post-shot digital processing.

I have trouble understanding how this can work, and how it is done.

But I realize that I am hampered by an incomplete grasp of all the implications of aliasing - and of low-pass filters - with a CFA sensor.

And of course, as you all know, I see all sampling issues through the prism of the context in which I first learned about sampling theory: the reconstruction of an audio or video waveform from samples.

In that context, there is no such thing as overcoming aliasing by digital processing after the initial sampling and digitization.

Oversampling

There is a widely used technique (used to alleviate the cost and bulk of the conventional antialiasing low-pass filter) which is sometimes misunderstood to be "elimination of aliasing by digital processing." The technique is often described as "oversampling". I'll describe it here. It has nothing to do with the subject of this message, but I just felt like writing about it while I still remember how it works.

In this technique we first sample the signal at a rate several times greater than would be required to successfully capture all the frequencies in which we are interested. The object is to indeed capture all the frequencies in the signal as it arrives - the frequencies we want to convey via our final digital result, as well as those higher ones that might be present and which, without any antialiasing filter, would cause mischief.

But in the general case, at least conceptually, we don't know how high the frequencies in that signal might be. And if any are equal to or greater than half this "exaggerated" sampling frequency, aliasing will be committed at this stage, which can't be good.
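As a minimal numeric sketch of that last point (the rates here are invented for illustration): a tone at or above half the sampling frequency produces exactly the same samples as some lower-frequency tone, and once that has happened, no later processing can tell the two apart.

```python
import numpy as np

fs = 1000          # sampling rate, Hz (illustrative value)
f0 = 100           # in-band tone, Hz
f_alias = fs - f0  # 900 Hz tone, well above the Nyquist frequency (fs/2 = 500 Hz)

n = np.arange(32)                            # sample indices
low = np.cos(2 * np.pi * f0 * n / fs)        # samples of the 100 Hz tone
high = np.cos(2 * np.pi * f_alias * n / fs)  # samples of the 900 Hz tone

# The two sample trains are identical: once sampled, the 900 Hz tone is
# indistinguishable from (it "aliases to") the 100 Hz tone.
print(np.allclose(low, high))  # True
```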

So in fact we must prevent there being any such frequencies in the signal as it is sampled. How do we do this? With a low-pass antialiasing filter.

But I thought the object of this scheme (not yet fully described, of course) was to eliminate the need for an antialiasing filter. No, the object is to eliminate the kind we ordinarily need - one that cuts off nicely a little below the (usual) Nyquist frequency but has good transmission at the highest frequency we want to handle.

The antialiasing filter we need in this scheme is much simpler to design and execute. For one thing, it works in a much higher frequency regime, which means that the values of its components - perhaps inductors and capacitors - are much smaller (and the components can be much smaller). And other considerations allow us to use a frequency response that is relatively easy to achieve.

So the real punch line of the story is that the scheme (the rest of which I will describe presently) allows us to not use a "difficult, expensive" antialiasing filter but rather an "easy, cheap" one.

A similar technique used at the "reconstruction" end allows us to use a reconstruction filter (whose response needs to be similar to that of the antialiasing filter) which works in a higher frequency regime and is less costly to build than the reconstruction filter used in conventional digital audio or video technique.
Now back to the story.

The train of samples (at this exaggerated rate), digitized, represents all the frequencies in the signal (below the quite high cutoff of the "cheap" antialiasing filter) - the frequencies we want to convey in our final digital result, as well as those higher ones which, without any antialiasing filter, would cause mischief in our final digital result.

Now, by means of digital signal processing, we can nicely remove from the waveform that the train of samples represents all frequencies above those in which we are interested (those we would, in conventional technique, remove by a conventional antialiasing filter). We can easily craft the response of this digital filter to do a very nice job of this.
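As a sketch of this filtering step (all numbers invented for illustration; the windowed-sinc FIR filter is just one common, easy-to-craft way to get such a response):

```python
import numpy as np

fs = 8000        # exaggerated ("oversampled") rate, Hz - illustrative
fc = 500         # cutoff of the digital low-pass, Hz
N = 2048
t = np.arange(N) / fs

# The oversampled train: a wanted 250 Hz tone plus an unwanted 3000 Hz
# tone that a conventional analog antialiasing filter would have removed.
x = np.cos(2 * np.pi * 250 * t) + np.cos(2 * np.pi * 3000 * t)

# A windowed-sinc FIR low-pass (Hamming window), crafted in software.
taps = 101
m = np.arange(taps) - (taps - 1) / 2
h = 2 * fc / fs * np.sinc(2 * fc / fs * m) * np.hamming(taps)
h /= h.sum()                      # unity gain at DC
y = np.convolve(x, h, mode="same")

# Check the spectrum: the 250 Hz tone survives nearly unchanged, while
# the 3000 Hz tone is strongly attenuated.
spectrum = 2 * np.abs(np.fft.rfft(y)) / N
freqs = np.fft.rfftfreq(N, 1 / fs)
amp_pass = spectrum[np.argmin(np.abs(freqs - 250))]
amp_stop = spectrum[np.argmin(np.abs(freqs - 3000))]
print(amp_pass, amp_stop)         # amp_pass near 1, amp_stop near 0
```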

We now have a digital representation of a bandlimited signal with a sample rate far greater than is needed to represent the frequencies in it. But we can reduce the sample rate to what is actually needed in a trivial way - essentially, we just leave out all the excess samples (we "decimate" the sample train).
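A sketch of the decimation step, again with made-up numbers; for simplicity the train here contains only an in-band tone, so the digital filtering above has (conceptually) already been done:

```python
import numpy as np

fs_high = 8000   # exaggerated sampling rate, Hz - illustrative
M = 8            # keep every 8th sample -> final rate of 1000 Hz
f0 = 100         # tone well below the final Nyquist frequency (500 Hz)

n = np.arange(4096)
x = np.cos(2 * np.pi * f0 * n / fs_high)   # already band-limited train

y = x[::M]       # "decimation": simply leave out the excess samples

# The decimated train is exactly what we would have gotten by sampling
# the same tone directly at the lower rate.
k = np.arange(len(y))
direct = np.cos(2 * np.pi * f0 * k / (fs_high / M))
print(np.allclose(y, direct))  # True
```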

We are left with a digital representation of the desired frequencies of the original signal, with only the sample rate that requires, achieved without using the kind of antialiasing filter that would be needed with "classical" technique.

Back to photography

What I just described is clearly not what is being done in digital cameras without an overt optical low-pass filter. If this were done, we would see sensors with sensel densities many times the final pixel density.

Back to my real question

In any case, I wondered whether anyone here knows what is really going on when we hear of "aliasing being overcome by digital processing" in a camera without an optical antialiasing filter.

Best regards,

Doug
78 years old today, and trying to remember what a logarithm is, anyway.
 

Jerome Marot

Well-known member
Happy birthday, Doug. I'll explain the logarithm another day, but for aliasing I can give you a clue. What the software removes is only colour aliasing, and that happens at frequencies lower than the luminance Nyquist frequency. No software removes luminance aliasing, but that is usually not a real problem: lenses pass little signal at these frequencies anyway.
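
As a concrete sketch of why the chroma Nyquist frequency is lower (the array below is just the standard Bayer RGGB tile; the numbers follow from its layout):

```python
import numpy as np

# A 4x4 tile of the standard Bayer CFA pattern (RGGB).
bayer = np.array([["R", "G", "R", "G"],
                  ["G", "B", "G", "B"],
                  ["R", "G", "R", "G"],
                  ["G", "B", "G", "B"]])

# Red (and likewise blue) photosites occur only every 2 sensels in each
# direction, so those colours are sampled at half the pitch of the full
# sensel grid - their Nyquist frequency is half the luminance one.
red_cols = np.where(bayer[0] == "R")[0]
print(red_cols)                    # red appears in every other column
print(red_cols[1] - red_cols[0])   # pitch of 2 sensels
```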

To remove colour aliasing, the software may need human input to tell it where artefacts are recognised. In my mind, that would mean that the software is some kind of kludge which works on specific pattern recognition.
 

Doug Kerr

Well-known member
Hi, Jerome,

Happy birthday, Doug. I'll explain the logarithm another day, but for aliasing I can give you a clue. What the software removes is only colour aliasing, and that happens at frequencies lower than the luminance Nyquist frequency. No software removes luminance aliasing, but that is usually not a real problem: lenses pass little signal at these frequencies anyway.
Thank you. That makes much sense.

To remove colour aliasing, the software may need human input to tell it where artefacts are recognised. In my mind, that would mean that the software is some kind of kludge which works on specific pattern recognition.
Yes, I would think so.

Thanks again.

Best regards,

Doug
 