
Aliasing and demosaicing

Doug Kerr

Well-known member
It has been suggested here that the phenomenon of aliasing (which can be the cause of visual artifacts such as moiré patterns) only occurs in cameras that use demosaicing.

Demosaicing is used to recover a complete color image, with one pixel per "sensel", from a color filter array (CFA), or "Bayer", sensor, such as is found in the preponderance of modern digital still cameras.

But in fact the theoretical basis of aliasing has nothing to do with the use of demosaicing.

Still, the assertion seems reasonable in the face of the fact that many cameras not using demosaicing (including monochrome cameras, "three-chip sensor" video cameras, and cameras with Foveon-type sensors) "do not have anti-aliasing filters" yet "do not exhibit aliasing artifacts".

How can that be reconciled with my assertion that the concept of aliasing is not applicable only to cameras with demosaicing?

Here is my own take on that matter.

The Nyquist frequency

In a system in which a continuously-varying value (the variable) is "captured" by noting its value at regular intervals ("sampling"), the corrupting phenomenon of aliasing occurs if there are any components of the variation of the phenomenon whose frequencies are at or above a frequency of half the rate at which we take the samples. This critical frequency is called the Nyquist frequency for the sampling plan in use.
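To make the effect concrete, here is a minimal numeric sketch in Python (my own illustration, not from the original discussion; the sample rate and frequencies are arbitrary values). It shows that a component above the Nyquist frequency yields exactly the same samples as a lower-frequency "alias":

```python
import numpy as np

# Sampling at 1 sample per unit distance: the Nyquist frequency is 0.5.
sample_rate = 1.0
nyquist = sample_rate / 2.0

x = np.arange(16)                   # regularly spaced sample positions
f_high = 0.7                        # a component above the Nyquist frequency
f_alias = sample_rate - f_high      # its alias at 0.3, below Nyquist

high_samples = np.cos(2 * np.pi * f_high * x)
alias_samples = np.cos(2 * np.pi * f_alias * x)

# The two sets of samples are identical: once sampled, the 0.7-cycle
# component cannot be distinguished from a 0.3-cycle component.
print(np.allclose(high_samples, alias_samples))   # True
```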

In a monochrome camera, every sensel takes a sample of the variable (the variation of illuminance across the image). Then (in either the vertical or horizontal direction), the Nyquist frequency is half the rate of occurrence of the sensels (in sensels per millimeter). Since there is a pixel of the delivered image for each sensel, this is also half the rate of occurrence of pixels (which we might call the pixel density).
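For example, assuming a hypothetical monochrome sensor with a 5 µm sensel pitch: the sensels occur at 200 per millimeter, so the Nyquist frequency is 100 cycles per millimeter, and since there is one delivered pixel per sensel, the same figure holds in terms of pixels.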

In a color image, the variable is the color of the illumination on the sensor (color in its full sense, illuminance plus chromaticity). The camera captures this color approximately in a tristimulus way, that is, with three kinds of photodetectors with different spectral responses (we generally call these the "R", "G", and "B" photodetectors).

It is useful to think of the image as having three layers, each being the aspect of the image to which one of the three kinds of photodetectors responds.

In a camera with a CFA sensor using demosaicing, we do not have, at every location corresponding to a pixel of the delivered image, a cluster of three photodetectors, one of each type. Rather, we have the three types of photodetector distributed across the sensor array in a repetitive pattern (usually a "Bayer" pattern). There is one detector (of one of the three types) at each location corresponding to a pixel of the delivered image.

If we consider the "R" layer, it is sampled by the "R" photodetectors. Their density (rate of occurrence) along a vertical or horizontal path is half the pixel density (one exists at every other pixel location). Thus, with respect to the "R" aspect of the image, the Nyquist frequency is 1/4 the pixel density, not 1/2 the pixel density as for the monochrome camera.

The same is true for the "B" aspect of the image.

Something similar is true of the "G" aspect, but grasping it is a little more complicated, so I will limit my specific discussion to the "R" and "B" aspects.

Thus, a component of the variation of the color across the image whose frequency would be no problem in a monochrome camera with a certain pixel density (being below that camera's Nyquist frequency) can lead to aliasing in a color camera with a CFA sensor of the same pixel density (being above that camera's Nyquist frequency for its "layer").
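A small Python sketch of that comparison (again my own illustration; the frequency values are arbitrary). A component at 0.3 cycles/pixel is adequately sampled on the full grid (Nyquist 0.5) but aliases on the "R" grid, which has samples only at every other pixel (Nyquist 0.25):

```python
import numpy as np

f = 0.3                        # cycles/pixel: below 0.5, above 0.25
x_full = np.arange(32)         # every pixel location (monochrome case)
x_r = x_full[::2]              # every other location ("R" sensels)

# On the R grid the sample rate is 0.5 samples/pixel, so the 0.3-cycle
# component is indistinguishable from its alias at 0.5 - 0.3 = 0.2.
r_samples = np.cos(2 * np.pi * f * x_r)
alias = np.cos(2 * np.pi * (0.5 - f) * x_r)
print(np.allclose(r_samples, alias))   # True: aliasing on the R layer
```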

Demosaicing itself

Here I am much less able to be specific, and may in fact be off base. But I present my thinking, subject to that caveat.

In a CFA sensor camera, we seek to get an image with a certain pixel density by capturing each of the "aspect layers" of the image at a sampling rate too low for that pixel density. Again, limiting ourselves to the "R" and "B" aspects, and thinking in terms of a one-dimensional path across the image, we have half the number of samples we need.

The demosaicing process attempts to reconstruct the "missing" samples, for each of the "aspect layers", by "intelligent interpolation". Based on "knowledge" of typical patterns of variation in an image, it makes a best estimate of the value of each missing sample (of a particular "layer") from the samples taken nearby (many of them from other layers).
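As a rough illustration of the kind of interpolation involved (my own sketch, not any actual camera's algorithm; real demosaicing is far more sophisticated, typically edge-directed and drawing on the other layers), here is a naive fill-in of the missing "R" samples by neighborhood averaging:

```python
import numpy as np

def interpolate_r_layer(mosaic, r_mask):
    """Fill missing "R" samples by averaging the known "R" neighbors:
    a crude stand-in for real demosaicing."""
    h, w = mosaic.shape
    out = np.where(r_mask, mosaic, 0.0)
    for y in range(h):
        for x in range(w):
            if not r_mask[y, x]:
                ys = slice(max(y - 1, 0), min(y + 2, h))
                xs = slice(max(x - 1, 0), min(x + 2, w))
                neighbors = mosaic[ys, xs][r_mask[ys, xs]]
                out[y, x] = neighbors.mean() if neighbors.size else 0.0
    return out

# A hypothetical 4x4 "RGGB"-style layout: "R" occupies one pixel in each
# 2x2 block, i.e. every other location along a row or a column.
r_mask = np.zeros((4, 4), dtype=bool)
r_mask[0::2, 0::2] = True

mosaic = np.arange(16, dtype=float).reshape(4, 4)  # stand-in sensor data
print(interpolate_r_layer(mosaic, r_mask))
```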

Now it seems to me that if the values of the samples (in one or more layers) are corrupted (by the presence of components of the variation across the image at or beyond the Nyquist frequency for the layer), the impact of those "errors" would be compounded as several sample values are combined to develop the estimate of each "missing" sample.

If this outlook is valid, it would be another reason why the visual impact of a certain degree of aliasing would be greater in a camera using demosaicing.

The "rest of the antialising filter"

This issue is discussed at length, with illustrations, here:

http://www.openphotographyforums.com/forums/showthread.php?p=122865#post122865

We normally mitigate aliasing by using a low-pass filter (an anti-aliasing filter), whose task is to substantially attenuate (ideally, completely suppress) components in the phenomenon being sampled that have frequencies equal to or greater than the Nyquist frequency.

Indeed, in many of our cameras, we have a component identified as the anti-aliasing filter. If we analyze a common implementation of this, we find that its frequency response is not really that of a low-pass filter at all. Although its response nicely falls to zero at the Nyquist frequency, it immediately rebounds at higher frequencies, opening the door to the retention of components above the Nyquist frequency, sure contributors to aliasing.
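If I have the common birefringent implementation right (a hedged sketch, assuming a beam displacement equal to the pixel pitch), its one-dimensional response is roughly of the form |cos(πf/2fN)|: a null exactly at the Nyquist frequency, then a rebound above it:

```python
import numpy as np

def olpf_response(f, f_nyquist):
    """Approximate 1-D response of a birefringent "anti-aliasing" filter
    whose beam displacement equals the pixel pitch: zero exactly at the
    Nyquist frequency, but rebounding above it."""
    return np.abs(np.cos(np.pi * f / (2.0 * f_nyquist)))

f_n = 0.5                              # Nyquist, in cycles/pixel
for f in (0.25, 0.5, 0.75, 1.0):
    print(f, round(olpf_response(f, f_n), 3))
# 0.25 -> 0.707;  0.5 -> 0.0 (the null at Nyquist);
# 0.75 -> 0.707 (the rebound);  1.0 -> 1.0
```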

So why do we use such a filter? Well, it is very difficult to make a true optical low-pass filter (with a useful response for this purpose), and we know how to make this thing instead.

But how can we avert aliasing with an antialiasing filter that doesn't do what we need? Well, it gets some help behind the scenes (something that isn't spoken of except in textbooks, and here of course).

One helper

The theoretical discussions of sampling assume that in fact the "sampling organ" captures the value of the phenomenon at a true point in time or space. But that is not true of our sensors. In fact, by intent, they capture the light over as broad an area as the manufacturers are able to arrange (in the interest of best noise performance).

When we increase the "intake area" of the photodetector, the effect on system behavior is as if we put in front of the sensor array a certain kind of optical low-pass filter. Its response, compounded with the response of the component we call the anti-aliasing filter, makes up a filter with a much better response.
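Averaging the light over an aperture of width a before sampling is equivalent to a box filter, whose frequency response is |sinc(fa)|. A small sketch (assuming, hypothetically, a 100% fill factor, i.e. an aperture equal to the pixel pitch):

```python
import numpy as np

def aperture_response(f, a):
    """Frequency response of averaging over an aperture of width a:
    |sin(pi*f*a) / (pi*f*a)|. numpy's sinc already includes the pi."""
    return np.abs(np.sinc(f * a))

pitch = 1.0                        # pixel pitch (arbitrary units)
f_n = 0.5 / pitch                  # Nyquist frequency
print(round(aperture_response(f_n, pitch), 3))      # 0.637 at Nyquist
print(round(aperture_response(2 * f_n, pitch), 3))  # 0.0: null at 1/pitch
# Helpful attenuation at and above Nyquist, but no sharp cutoff.
```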

Another helper

The spatial frequency response of any real lens declines with increasing spatial frequency, a matter we consider when we discuss the MTF of a lens.

Unless the lens in use has a resolution potential far beyond that of the rest of the camera, this decline in frequency response is substantial by the time we get to the Nyquist frequency. Thus, the lens contributes its own "low-pass filter" effect to the entire system.

So the true anti-aliasing filter comprises three ingredients (a rough sketch of their combined response follows the list):

• The "anti-aliasing filter" proper
• The virtual filter arising from the "large throat" photodetectors
• The lens
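Treating the three contributions as independent (a simplification I am assuming, and with a crude linear stand-in for the lens MTF rather than any real lens's curve), the system response is roughly the product of the individual responses:

```python
import numpy as np

def olpf(f, f_n):    return np.abs(np.cos(np.pi * f / (2 * f_n)))
def aperture(f, p):  return np.abs(np.sinc(f * p))
def lens(f, f_c):    return np.clip(1.0 - f / f_c, 0.0, 1.0)  # crude stand-in

pitch, f_n = 1.0, 0.5
f_cutoff = 2.0                   # hypothetical lens cutoff, well past Nyquist
for f in (0.25, 0.5, 0.75, 1.0):
    total = olpf(f, f_n) * aperture(f, pitch) * lens(f, f_cutoff)
    print(f, round(total, 3))
# Above the Nyquist frequency the OLPF alone rebounds, but the aperture
# and the lens keep pulling the combined response back down.
```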

Now, in a camera without an overt "anti-aliasing" filter, the second and third ingredients are still in play, so we do have an "anti-aliasing filter". It is less ideal than the "full boat", but evidently in a camera without a CFA sensor, for the reasons I described earlier, the challenge of aliasing is not as great, so this "accidental" anti-aliasing filter will do well enough for most purposes.

The dirty little secret?

In fact, with respect to cameras with Foveon-type sensors, there are reports that artifacts of aliasing (moiré patterns) do appear, although certainly not nearly as visually severe as in a CFA camera with no (overt) anti-aliasing filter. I have neither read nor critiqued these reports.

In conclusion

The phenomenon of aliasing is not a creature of the process of demosaicing. However, a number of considerations may make it practical, in a camera not using a CFA sensor with demosaicing, to operate in an acceptable way without an overt anti-aliasing filter. That may make it seem as if aliasing is only a creature of demosaicing.

Best regards,

Doug
 

Jerome Marot

Well-known member
It has been suggested here that the phenomenon of aliasing (which can be the cause of visual artifacts such as moiré patterns) only occurs in cameras that use demosaicing.

If you are referring to my post on the subject, I would like to point out that I did not intend to suggest that the phenomenon of aliasing (which is inherently linked to sampling) only occurs in cameras that use demosaicing. It certainly happens in Foveon or B&W sensors.

What I wrote is that aliasing artifacts, when they happen on real-world photographs, are usually far more obvious on Bayer arrays than on Foveon or B&W sensors. It certainly is possible to design a subject so that aliasing artifacts are obvious on a Foveon sensor, but on real-world subjects, the artifacts are usually unobtrusive when they happen. OTOH, when using a Bayer sensor without a low-pass filter, artifacts happen on many subjects (e.g. fabric, plants at a distance, any subject with repetitive minute details actually) and are very, very obvious to the eye because they make weird colors appear in places where they are not expected.
 

Doug Kerr

Well-known member
Hi, Jerome,

If you are referring to my post on the subject . . .

Well, basically I am!

. . . I would like to point out that I did not intend to suggest that the phenomenon of aliasing (which is inherently linked to sampling) only occurs in cameras that use demosaicing. It certainly happens in Foveon or B&W sensors.

What I wrote is that aliasing artifacts, when they happen on real-world photographs, are usually far more obvious on Bayer arrays than on Foveon or B&W sensors. It certainly is possible to design a subject so that aliasing artifacts are obvious on a Foveon sensor, but on real-world subjects, the artifacts are usually unobtrusive when they happen. OTOH, when using a Bayer sensor without a low-pass filter, artifacts happen on many subjects (e.g. fabric, plants at a distance, any subject with repetitive minute details actually) and are very, very obvious to the eye because they make weird colors appear in places where they are not expected.
Thank you so much for the very apt clarification.

I apparently misconstrued your initial comment. My apologies.

I think we are on the same page here.

I always enjoy your contributions to these matters.

Best regards,

Doug
 