
On anti-aliasing filters

Doug Kerr

Well-known member
I came to understand the concepts of "representation by sampling" in connection with the introduction of digital transmission into the telephone network, which occurred early in my "formal" career in that industry. And so it was in that context that I came to grasp the concept of "foldover distortion" (or "aliasing"), and the role of a low-pass filter before the sampling process to mitigate the phenomenon.

And of course my grasp of the somewhat (but not exactly) parallel situation in digital imaging is colored (for better or worse) by that background.

In the design of a basic "PCM" digital encoder for audio, in the early days, the exact response of the anti-aliasing filter was of considerable interest - not because of any notion that it was "harmful", and we needed to minimize that harm, but just because a sharp cutoff was expensive to do (these were of course analog filters at the time), and had some bad side effects. There was never any discussion about "what a shame that this filter limits the bandwidth of the channel" - we understood the Nyquist limit (Harry was after all one of us) and the basic system, sampling at 8000 Hz, was intended to support a "legacy" channel bandwidth of 3450 Hz.
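The fold-over arithmetic behind that design can be sketched in a few lines of Python (the 10 ms duration and the 5000/3000 Hz pair are purely illustrative choices, not figures from any actual system):

```python
import numpy as np

fs = 8000                       # telephony PCM sampling rate, Hz
nyquist = fs / 2                # 4000 Hz

n = np.arange(80)               # 10 ms of sample instants
t = n / fs

# A 5000 Hz tone lies above the Nyquist frequency, so after sampling
# it is indistinguishable from a tone at the folded frequency
# 8000 - 5000 = 3000 Hz (up to sign, for a sine).
above = np.sin(2 * np.pi * 5000 * t)
alias = np.sin(2 * np.pi * 3000 * t)

print(np.allclose(above, -alias))   # True: the sample values coincide
```

Once the samples coincide, no downstream processing can tell the two tones apart - hence the need to remove the out-of-band energy before sampling.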

Now in digital imaging, many things are different. For one thing, there really isn't any reconstruction low-pass filter - that just happens because of the finite resolution of the display or print process.

And of course the use of CFA sensors complicates the aliasing issue, where some of the artifacts are of a chromatic nature. (In fact, this makes some people incorrectly think that the aliasing problem is solely a creature of CFA operation, hardly so.)

In any case, overall I'm not able to develop as clear an "image" of the issues as I have had in digital audio.

We hear a lot about the advantages, in some situations, of eliminating the overt low-pass antialiasing filter in the interest of improved something (sharpness?) at the expense of increased exposure to aliasing artifacts.

I would be very interested in seeing some comparisons between the same scene captured with two cameras with identical sensors except that in one case there was no overt antialiasing filter.

Does anybody know where such a comparison can be found?

Incidentally, we may soon be very easily able to have such a comparison, what with the new Ricoh Pentax K-3, where the antialiasing is done by dithering the sensor and can be shut off or set to two "bandwidths".

Best regards,

Doug
 

Doug Kerr

Well-known member
We read almost every day of the emergence of new digital cameras in which there is said to be no antialiasing filter.

As I said in my original post, this is presumably to gain an improvement in something at the possible expense of increased susceptibility to the artifact of aliasing.

The something is sometimes described as "resolution", but a problem is that resolution (expressible as a number) has many possible definitions.

Perhaps we speak of something that does not pretend to be quantifiable, but which has an easily-appreciated subjective implication. Maybe it is "degree of detail". Maybe it is "perceived sharpness". These all have the property, "No, I don't know how to define this, and so I wouldn't know how to measure it, but I sure know that image 'A' has more of it than image 'B' ".

So, I am still eager to:

• Hear the outlooks of the various members here on what image property it is that we hope to "improve" by eliminating an overt antialiasing filter.

• See some actual examples of images of the same scene taken with otherwise-identical camera setups with and without an overt antialiasing filter, so I can see the "something".

Best regards,

Doug
 

Doug Kerr

Well-known member
As the design of digital camera systems advances, we become aware of the interplay between various factors that influence image performance.

In one area, which we can speak of as "resolution" (realizing that this does not have a unique quantifiable definition), we have the contribution of:

• The MTF of the lens, including the effect of diffraction at the aperture of interest.

• The MTF of the sensor chain.

As manufacturers develop new sensor systems, with different attributes (including sensel pitch), and new lenses, we have seen a see-saw between two situations that can be simplistically described as:

• "Resolution" is limited by the sensor (mostly in terms of sensel pitch).

• "Resolution" is limited by the lens (its MTF, including diffraction).

Antialiasing in digital audio systems

As I discussed in an earlier essay, when digital representation of audio waveforms entered the telephone network, one design issue was that of the frequency and phase response of the antialiasing filter.

Simplistically, it would be desirable for it to have a fairly flat response to near the Nyquist frequency of the sampling rate in use and then a rapid decrease to almost zero response at the Nyquist frequency.

But making such a filter (challenging in a context of analog filters) turned out not to be the best use of the money.

Instead, a basic plan was adopted in which:

• The Nyquist frequency was made a value (4000 Hz) rather greater than the highest "payload" frequency we committed to "transport well" (3450 Hz, a value that came from several generations of legacy analog transmission systems).

• Then elaborate subjective tests were made to ascertain the tradeoffs in user observation of delivered signal "quality", to decide on the antialiasing filter response that gave the "best bang for the buck."

The actual filter curve that was established as the norm was "pretty sloppy".

Back to digital photography

A parallel challenge (although much different in technical fact) faces the designer of digital cameras. An important consideration is that the response of the lens (as reflected in its MTF) is the first stage of the antialiasing filter. And in fact, the shape of a typical lens MTF turns out to be not a bad one for an antialiasing filter. The question, of course, is whether it has an appropriate cutoff frequency.

As the MTF of the sensor system zig-zags up and down, that answer changes.

It is possible that the recent trend to eliminate the overt antialiasing filter in high-performance digital cameras comes in part from the fact that the MTF's of the lenses they are equipped with (or that are expected to be used, in the case of interchangeable-lens cameras) provide ("almost free") a quite-suitable antialiasing filter given the sensel pitch (and thus Nyquist frequency) involved.
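For a rough sense of when the lens alone might serve, one can compare the diffraction-limited cutoff of the lens MTF, 1/(λN), with the sensor's Nyquist frequency, 1/(2 × pitch). A small sketch (the 4 µm pitch and the f-numbers are arbitrary illustrative values, not figures for any particular camera):

```python
WAVELENGTH_MM = 550e-6            # green light, 550 nm, expressed in mm

def diffraction_cutoff(f_number):
    """Spatial frequency (cycles/mm) at which a diffraction-limited
    lens MTF falls to zero: 1 / (lambda * N)."""
    return 1.0 / (WAVELENGTH_MM * f_number)

def sensor_nyquist(pitch_um):
    """Nyquist frequency (cycles/mm) for a given sensel pitch."""
    return 1.0 / (2.0 * pitch_um * 1e-3)

ny = sensor_nyquist(4.0)          # 4 um pitch -> 125 cycles/mm
for n in (2.8, 8, 22):
    fc = diffraction_cutoff(n)
    side = "above" if fc > ny else "below"
    print(f"f/{n}: diffraction cutoff {fc:.0f} cyc/mm, {side} Nyquist {ny:.0f}")
```

At wide apertures this hypothetical lens passes detail well beyond the sensor's Nyquist frequency, while by around f/22 diffraction alone has pulled the cutoff below it - so any "free" antialiasing from the lens would seem to depend strongly on the aperture in use.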

The pitchman on a street corner is selling amulets intended to ward off snakes. A passer-by says, "This is a hoax, man - there are no snakes in this part of the city"

"So you see how well they work", the pitchman says.​

Best regards,

Doug
 

Jerome Marot

Well-known member
I would be very interested in seeing some comparisons between the same scene captured with two cameras with identical sensors except that in one case there was no overt antialiasing filter.

Does anybody know where such a comparison can be found?


There have been quite a few comparisons done between the Nikon D800 and D800e. For example, on dpreview or imaging resource.

In effect, the problem of "aliasing" in digital photography is not the same as in digital audio. As you correctly noted, because of the reconstruction process, lens effects and lower human sensitivity to the artifacts, baseband aliasing is not a real problem. It can even be perceived as beneficial, giving an impression of increased sharpness. Black-and-white cameras, tri-CCD (video) cameras and Foveon sensors do not use any anti-aliasing filter and the artifacts are rarely perceived as a problem. The only problems which are routinely noted in photography correspond to aliasing with the Bayer matrix, and they are only perceived when the aliasing produces spurious colors. This is the effect that the "anti-aliasing" filters are designed to suppress.

(As a side note: there is another aliasing problem, but it only happens in video. It comes from the fact that sensors primarily designed for photography are read, for video use, in a mode where only one in 5 or one in 6 lines is active.)
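That line-skipping effect is easy to reproduce numerically. A hypothetical stripe pattern with a 4-row period, read out one row in six with no filtering, comes back with a false 12-row period (all numbers arbitrary, chosen just to make the fold-over visible):

```python
import numpy as np

rows = np.arange(120)
stripes = (rows % 4 < 2).astype(int)   # stripe pattern, 4-row period

readout = stripes[::6]                 # keep 1 row in 6, as in video skip modes

def period(seq):
    """Smallest shift p at which the sequence repeats."""
    return next(p for p in range(1, len(seq)) if np.all(seq[p:] == seq[:-p]))

print(period(stripes))           # 4  -> true stripe period, in rows
print(period(readout) * 6)       # 12 -> apparent period, back in row units
```

Because the skipped rows are simply discarded rather than averaged in, nothing limits the vertical frequencies reaching the coarser sampling grid, and fine patterns fold down to coarse, false ones.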
 

Doug Kerr

Well-known member
Hi, Jerome,

There have been quite a few comparisons done between the Nikon D800 and D800e. For example, on dpreview or imaging resource.
Thanks for the references. I'll look at those.

In effect, the problem of "aliasing" in digital photography is not the same as in digital audio. As you correctly noted, because of the reconstruction process, lens effects and lower human sensitivity to the artifacts, baseband aliasing is not a real problem. It can even be perceived as beneficial, giving an impression of increased sharpness. Black-and-white cameras, tri-CCD (video) cameras and Foveon sensors do not use any anti-aliasing filter and the artifacts are rarely perceived as a problem. The only problems which are routinely noted in photography correspond to aliasing with the Bayer matrix, and they are only perceived when the aliasing produces spurious colors. This is the effect that the "anti-aliasing" filters are designed to suppress.

An important point. Thanks for making it so clear.

Best regards,

Doug
 

Doug Kerr

Well-known member
Hi, Jerome,

Those two references were very valuable (I'm not yet done digesting them!).

The Imaging Resource article is excellent on the issue of color artifacts, both in terms of examples and its nice discussion.

The DPR report is very interesting on the matter of "resolution".

It is interesting that in the range of "best resolution" apertures, both cameras show considerable "false color" in the resolution wedges (it being more apparent in the D800E).

I'm not sure that these show significant differences in "resolution" as judged from the wedges between the two cameras. This of course fits in with my view that "something is better but I'm not sure how to describe it".

And the actual crops from the test shot seem to exhibit that.

This all helps me a lot in understanding the practical implications of this matter.

It will be so nice when we are not generally dependent on CFA sensors!

Thanks again.

Best regards,

Doug
 

Jerome Marot

Well-known member
I'm not sure that these show significant differences in "resolution" as judged from the wedges between the two cameras.

The wedges (which should actually be trumpets...) are a poor way to judge resolution of the sensor (they are more useful to estimate the resolution of the lens). Because the system lacks a low pass filter at the B&W sampling frequency, they tend to show spurious resolution above the Nyquist frequency.
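That spurious resolution is straightforward to demonstrate numerically: a test pattern above the Nyquist frequency still produces contrast after sampling, just at a false, folded frequency (the sampling rate and pattern frequency below are arbitrary illustrative values):

```python
import numpy as np

fs = 64.0                        # samples per mm (arbitrary sensel pitch)
nyquist = fs / 2                 # 32 cycles/mm

x = np.arange(64) / fs           # sensel positions
f_true = 40.0                    # pattern frequency, above Nyquist

sampled = np.cos(2 * np.pi * f_true * x)

# The strongest frequency recovered from the samples is not 40 but the
# fold-over fs - f_true = 24 cycles/mm: contrast survives, so the wedge
# still looks "resolved", but the detail is false.
spectrum = np.abs(np.fft.rfft(sampled))
f_apparent = np.fft.rfftfreq(len(x), d=1 / fs)[np.argmax(spectrum)]
print(f_apparent)                # 24.0
```

This is why a wedge that keeps showing line structure past the Nyquist frequency tells you little about true sensor resolution.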

There is also something else that should be noted. For displaying pictures on the web, we are in practice restricted to JPEG pictures. JPEG compression not only subsamples the color components of the signal; its compression algorithm is also not at all designed for this kind of picture. All in all, I would not try to derive anything really scientific from the cited articles. They are only there to show the differences between two cameras as used in practice.


It will be so nice when we are not generally dependent on CFA sensors!

I don't think that this will happen in our lifetime.
 

Doug Kerr

Well-known member
Hi, Jerome.

The wedges (which should actually be trumpets...)

Indeed, and I think they are - my term was not a good one!

are a poor way to judge resolution of the sensor (they are more useful to estimate the resolution of the lens). Because the system lacks a low-pass filter at the B&W sampling frequency, they tend to show spurious resolution above the Nyquist frequency.

There is also something else that should be noted. For displaying pictures on the web, we are in practice restricted to JPEG pictures. JPEG compression not only subsamples the color components of the signal; its compression algorithm is also not at all designed for this kind of picture. All in all, I would not try to derive anything really scientific from the cited articles. They are only there to show the differences between two cameras as used in practice.

All well said.

I don't think that this will happen in our lifetime.

Probably not.

By the way, I have since looked at the actual "scenes" in the DPR article. There is no doubt that, whatever "it" is, the images from the D800E have more of it!

One powerful demonstration was in a scene with rippled water, where very modest false chromaticity was very obvious. (It was a hue sensitivity issue - the scene had a pretty uniform hue.)

It's a lot like noise. I am always amazed in what situations it is visually obtrusive and in what situations less so.

Our perceptual system is so complex!

Best regards,

Doug
 

Doug Kerr

Well-known member
Hi, Jerome,

No, this one:

DSC_0868-001.jpg


from this page:

http://www.dpreview.com/reviews/nikon-d800-d800e/28

Best regards,

Doug
 