
Antialiasing filters and the G "channel"

Doug Kerr

Well-known member
In a recent essay I described one classical form of antialiasing filter for digital cameras. Its point spread function takes a point in the image and turns it into a constellation of four points. Its spatial frequency response (i.e., MTF) has the form of the absolute value of a cosine function. We can speak of the spatial frequency at which this function drops to zero as the "cutoff frequency" of the filter.
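That |cosine| response can be sketched numerically. Along one axis, splitting a point into two points separated by a distance d gives an MTF of |cos(π f d)|, with its first zero (the "cutoff") at f = 1/(2d); the four-spot filter is just this applied along both axes. A minimal sketch (the separation value is illustrative, not from any particular camera):

```python
import numpy as np

def aa_mtf(f, d):
    """MTF of a two-spot beam-splitting AA filter along one axis.

    Two points separated by d give |cos(pi * f * d)|;
    the first zero (the 'cutoff frequency') falls at f = 1/(2*d).
    """
    return np.abs(np.cos(np.pi * f * d))

d = 1.0                    # spot separation (arbitrary units)
cutoff = 1.0 / (2.0 * d)   # first zero of the response

print(aa_mtf(0.0, d))      # response is 1.0 at DC
print(aa_mtf(cutoff, d))   # essentially 0 at the cutoff
print(aa_mtf(2 * cutoff, d))  # note: the response rises again above the cutoff
```

The last line illustrates the "inconvenient fact" mentioned below: the |cos| response rises back toward 1 above the cutoff rather than staying at zero.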

As I understand it, it is common to arrange for this cutoff frequency to fall at the Nyquist frequency of the R and B detector arrays. If we ignore the inconvenient fact that the response rises again as we go further up in frequency, this response curve would seem well-suited to averting aliasing for the R and B "aspects" of the image.

This cutoff is, however, at half the Nyquist frequency associated with our desired output image (since its pixel pitch is half that of the photodetector pitch for the R and B photodetector arrays).

Now (if we limit ourselves to the R and B aspects), it would seem that the spatial frequency limitation imposed by this filter would degrade the overall "resolution" of the camera to perhaps about half that suggested by the image pixel pitch. This then leads to the interest, even in high-performance cameras, in eliminating the antialiasing filter, in order to more nearly approach the "geometric potential" of overall image resolution, albeit in the face of increased susceptibility to aliasing in the R and B "aspects". (Yes, I know I keep refusing to speak of the "G" aspect, and for good reason. Be patient.)
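The factor-of-two relationship is easy to check with numbers. Assuming a hypothetical photosite pitch p (the value below is purely illustrative), the R and B arrays are sampled at pitch 2p, so their Nyquist frequency is half that of the output image:

```python
# Hypothetical photosite pitch (mm); purely illustrative.
p = 4.0e-3

# Output image pixels sit at pitch p; the R and B arrays at pitch 2p.
nyquist_image = 1.0 / (2.0 * p)        # cycles/mm for the output image
nyquist_rb    = 1.0 / (2.0 * (2.0 * p))  # cycles/mm for the R and B arrays

print(nyquist_image, nyquist_rb)
# The AA filter cutoff placed at nyquist_rb is half the image's Nyquist frequency.
```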

Differing luminance and chrominance "bandwidths"

Now I must digress to an important matter of human perception. During the development of the various original (analog) color television systems (such as NTSC), attention was given to earlier research that showed human perception exhibited higher resolution for the luminance aspect of a scene than for the chrominance aspect.

This was exploited in the (analog) TV "coding" systems by separating the image into luminance and chrominance aspects, then allocating greater bandwidth in the composite transmission channel to the luminance aspect than to the chrominance aspect. This allowed the overall bandwidth to be less, for the same perceived image quality, than if both luminance and chrominance aspects were allocated the same bandwidth.

Today, in the JPEG encoding system for digital still images, the same perceptual situation is again exploited. We allocate a higher bit rate to the luminance aspect of the image than to the chrominance aspect. We might wonder how that can be done. Must we not allocate the same bit rate to the R, G, and B "channels"? If so, how can the resolution of the luminance aspect be greater than that of the chrominance aspect?

Well, in fact, the pixel colors in JPEG are not encoded as their R, G, and B values. Rather, what is transmitted is the value Y (which is much like luminance) and the values Cb and Cr, which together are much like chrominance. And, in the most common forms, we do not send a Cb and Cr pair for every pixel - perhaps only one for every two pixels, or every four pixels. (This is what the mysterious designations like "4:2:2" and "4:2:0" describe, albeit via a very curious convention.)
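The scheme can be sketched in a few lines. This uses the BT.601 conversion matrix that JPEG/JFIF conventionally uses, and averages each 2x2 block of Cb and Cr, which is the "one chroma pair per four pixels" case (4:2:0); the array sizes are just for illustration:

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """Convert an (H, W, 3) RGB array to Y, Cb, Cr planes (BT.601, as in JFIF)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  =  0.299    * r + 0.587    * g + 0.114    * b
    cb = -0.168736 * r - 0.331264 * g + 0.5      * b + 128.0
    cr =  0.5      * r - 0.418688 * g - 0.081312 * b + 128.0
    return y, cb, cr

def subsample_420(c):
    """Average each 2x2 block: one chroma sample per four pixels."""
    h, w = c.shape
    return c.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

rgb = np.random.default_rng(0).uniform(0, 255, (8, 8, 3))
y, cb, cr = rgb_to_ycbcr(rgb)

# Y keeps full resolution; Cb and Cr are sent at quarter density.
print(y.shape, subsample_420(cb).shape)  # (8, 8) (4, 4)
```

Note that a neutral gray pixel (R = G = B) lands at Cb = Cr = 128, the "no chrominance" point of the encoding.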

Back to our digital still camera

Thus, if we could somehow limit the "spatial bandwidth" of the "chrominance aspect" of the image, to avert the aliasing that would affect it, but not so much limit the spatial bandwidth of the luminance aspect, we could enjoy an image that, to the human viewer, would not be compromised in apparent resolution.

But how could we do that? Well, we can't really do that, but we can get close.

The reason is that, of the three aspects of the image, R, G, and B, the G aspect is the most prominent in the perception of luminance. In fact, some clumsy descriptions of the CFA demosaicing process say that the luminance aspect of the image is derived from the G photodetector array. And while that is not strictly true, it gives a hint as to what is really going on.

In fact, in the first scheme taught by Bayer's patent for a CFA sensor, the sensor has three kinds of detectors: luminance, "R," and "B," but in a later form, the luminance photodetectors were replaced by "G" ones.

Now suppose we could:

a. Make the Nyquist frequency of the G detector array greater than for the R and B arrays, and

b. Have the antialiasing filter, as it affects the G detector array, have a correspondingly higher cutoff frequency. (This might mean that the filter would create a wider spacing of the multiple output points for wavelengths in the range of importance to the G photodetectors.)

Then we could approach (if not strictly achieve) the desirable goal of a higher resolution for the luminance aspect of the image than is possible given the needed cutoff frequency of the antialiasing filter based on the Nyquist frequency of the R and B photodetector arrays.

The result would be an image that exhibited, to the human viewer, a resolution nearer to the "geometric potential" of the sensor, while the system would remain free of color aliasing.

Well, guess what: we in fact do exactly (a) (by having twice as many G detectors as R and B ones).
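The "twice as many G detectors" fact falls straight out of the repeating 2x2 Bayer tile (conventionally written RGGB), which can be counted in a trivial sketch:

```python
# The repeating 2x2 Bayer tile: G appears twice, R and B once each,
# so the G array is sampled at twice the areal density of R or B.
tile = [["R", "G"],
        ["G", "B"]]

counts = {c: sum(row.count(c) for row in tile) for c in "RGB"}
print(counts)  # {'R': 1, 'G': 2, 'B': 1}
```

Half the photosites on the sensor are therefore G, which is what gives the G array its higher Nyquist frequency relative to R and B.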

But doing (b) is very tricky, and I do not know if this has actually been achieved. (Maybe some of the readers here will know. Maybe that is done in all our modern cameras and nobody told me.)

Without (b), the prospect of a luminance resolution nearing that suggested by the image pixel pitch, offered by the higher density of the G photodetectors, is thwarted by the antialiasing filter.

Best regards,

Doug
 

Doug Kerr

Well-known member
By the way, the matter of an antialiasing filter with a different spatial frequency response for the different CFA photodetector arrays is discussed at length by Rastislav Lukac in Single-Sensor Imaging: Methods and Applications for Digital Cameras.

[Photo: Rastislav Lukac]

Best regards,

Doug
 

Doug Kerr

Well-known member
In my original note here, I mentioned that in Bayer's patent on a CFA sensor, he initially described a layout in which one subset of detectors was intended to respond to luminance, and the other two collectively to discern chrominance. I had incorrectly assumed that the detectors he describes as intended to "respond to luminance" would indeed respond to luminance - that is, would be "panchromatic".

But in fact Bayer quickly says that the "luminance" photodetectors would most desirably be responsive to the "green" portion of the spectrum (pointing out that this portion is more important in determining luminance than, for example, the "red" and "blue" portions of the spectrum).

There is obviously more behind the choice of "green-responsive" photodetectors, rather than "luminance-responsive" ones, to do the heavy lifting in determining image luminance.

I suspect that part of the reason might be that the context of Bayer's work was largely that of NTSC video. There, the "delivered image" encoding was based on an RGB camera model. If a CFA array followed that very same scheme, it might make image processing much simpler (even if that arrangement did not best exploit the higher resolution of the "third set" of photodetectors for luminance).

Why do we not see this in the patent? Because of course the objective is not to reveal the whole story in the patent, just enough to support the claims.

So we see in the patent description the not-very-slippery shifting of gears from "we want to have the more closely-spaced photodetectors be luminance-sensitive so we discern luminance at their higher resolution" to "we want to have the more closely-spaced photodetectors be green-sensitive; they will do a pretty good job of discerning luminance at their higher resolution".

Best regards,

Doug