The color space of a sensor

Doug Kerr

The three kinds of output values from a CFA sensor - generally labeled "R", "G", and "B" - are the coordinates of the sensor's color space. We do not often speak of it as such (although it is extensively spoken of in the DNG specification).

And some folks have trouble accepting that. After all, we do not have all three values for any given spot in the focal plane image, so what is it whose color can be described in that color space? But don't let that throw you off.

We can get around that, if need be, by considering a substantial region of uniform color in the focal plane image. The outputs of all the "R" sensels, all the "G" sensels, and all the "B" sensels in that region together tell us the color of that region, as we would expect of a color space.

The sensor color space is a tristimulus color space, just like any of the familiar RGB-model color spaces, or the CIE XYZ color space (the one we often use to describe color in absolute terms in scientific work). Thus, there are three primaries for the color space, defined uniquely for the color space.

However, unlike any of the familiar RGB-model color spaces (but as in the XYZ color space), these primaries are not physically realizable. That is, they are not any kind of radiation (not even invisible radiation). So they are nothing like the primaries R, G, and B of RGB-model color spaces - the coordinates corresponding to them are just usually called R, G, and B as a way to help people understand the concept of different spectral sensitivities of the three kinds of sensels. Would the primary called "R" at least look "sort of red"? No - it is not any physical kind of radiation, so it could not be seen at all.

Thus, we cannot view the way this color space describes a color as a recipe for physically creating light of that color by mixing together the indicated amounts of the three primaries.

But in fact the mathematics of determining the luminance and chromaticity of any mixture of these three primaries - in cases where that leads to a visible color (and of course there is no other kind) - works just as well as if the primaries were themselves kinds of (visible) radiation.

We get a fuller appreciation of these values as being the coordinates of a bona fide color space when we contemplate demosaicing of the sensel outputs. We can visualize this as being done by simple interpolation among the "R", "G", and "B" values to produce a set of all three for each pixel location (although more sophisticated techniques are actually used).
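
Just to make that concrete, here is a minimal sketch of such a simple-interpolation demosaic (Python with NumPy and SciPy). It assumes an RGGB layout and a single-channel floating-point mosaic, and it is only an illustration; real converters do considerably better, especially along edges:

```python
import numpy as np
from scipy.ndimage import convolve

def bilinear_demosaic(mosaic):
    """Minimal bilinear demosaic of an RGGB Bayer mosaic (H x W floats).

    Each output pixel gets all three sensor-space coordinates: the
    measured value where a sensel of that kind exists, otherwise the
    average of the nearest sensels of that kind.
    """
    h, w = mosaic.shape
    # Boolean masks marking where each kind of sensel sits (RGGB layout).
    r_mask = np.zeros((h, w), bool); r_mask[0::2, 0::2] = True
    b_mask = np.zeros((h, w), bool); b_mask[1::2, 1::2] = True
    g_mask = ~(r_mask | b_mask)

    # Standard bilinear kernels: for R and B (a quarter of the sites),
    # for G (a checkerboard half of the sites).
    kernel_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0
    kernel_g  = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0

    out = np.zeros((h, w, 3))
    for i, (mask, k) in enumerate([(r_mask, kernel_rb),
                                   (g_mask, kernel_g),
                                   (b_mask, kernel_rb)]):
        plane = np.where(mask, mosaic, 0.0)   # keep only that kind of sensel
        out[..., i] = convolve(plane, k, mode="mirror")
    return out
```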

Those three values, at some pixel location, describe the (estimated) color of the focal plane image at that point - in the color space of the sensor! So now we have that color space at work in a more familiar setting, which hopefully relieves any misgivings we might have as to whether or not there is such a thing.

We can transform the representation of colors in terms of the coordinates of one tristimulus color space into terms of the coordinates of another tristimulus color space (assuming both are linear) by means of a 3x3 matrix of constants.

Thus, once we have the (estimated) color at each pixel of our image in terms of the coordinates of the sensor color space, we can multiply each set of three sensor values by the appropriate transformation matrix to get, for example, the coordinates of the sRGB color space, the Adobe RGB color space, the ProPhoto RGB color space, or the CIE XYZ color space, as we might need for further processing of the image.
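
Here is a sketch of that matrix step (again Python with NumPy). The sensor-to-XYZ matrix here is entirely made up for the example (a real one comes from characterizing a particular sensor under a particular illuminant); the XYZ-to-linear-sRGB matrix is the standard one for a D65 white point, followed by the standard sRGB transfer curve:

```python
import numpy as np

# Illustrative sensor->XYZ matrix. These numbers are made up for the
# example; a real matrix comes from characterizing the specific sensor.
SENSOR_TO_XYZ = np.array([[0.61, 0.29, 0.10],
                          [0.25, 0.70, 0.05],
                          [0.02, 0.12, 0.86]])

# Standard XYZ -> linear-sRGB matrix (D65 white point, IEC 61966-2-1).
XYZ_TO_SRGB = np.array([[ 3.2406, -1.5372, -0.4986],
                        [-0.9689,  1.8758,  0.0415],
                        [ 0.0557, -0.2040,  1.0570]])

def sensor_to_srgb(rgb_sensor):
    """Map demosaiced sensor-space triples (H x W x 3) to sRGB values."""
    xyz = rgb_sensor @ SENSOR_TO_XYZ.T            # sensor space -> CIE XYZ
    lin = np.clip(xyz @ XYZ_TO_SRGB.T, 0.0, 1.0)  # XYZ -> linear sRGB, clipped to gamut
    # Standard sRGB transfer ("gamma") curve for display encoding.
    return np.where(lin <= 0.0031308,
                    12.92 * lin,
                    1.055 * lin ** (1.0 / 2.4) - 0.055)
```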

Now, because of matters related to metameric failure (which I will not belabor here), two different kinds of light (with different spectra) that have the same description under one color space might not have the same description under another color space.

So of course a single transformation matrix, intended to convert color representation from the sensor color space into, for example, sRGB, cannot be exactly right for every kind of light. Thus, such matrices are always predicated on a particular illumination - not just one with a certain chromaticity, but one with a particular spectrum.

In a DNG camera profile, there is such a matrix - or more likely, two - one for each of two "important" illuminations.
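
By way of illustration, here is a sketch of how a raw converter might blend those two matrices for a scene illuminant somewhere in between. The DNG specification has this interpolation done linearly in inverse correlated color temperature between the two calibration illuminants (assumed here to be Standard Illuminant A, at about 2856 K, and D65, at about 6504 K):

```python
import numpy as np

def blend_profile_matrices(m1, m2, cct, cct1=2856.0, cct2=6504.0):
    """Blend a DNG-style profile's two per-illuminant matrices.

    m1 is calibrated for the low-CCT illuminant (e.g. Illuminant A),
    m2 for the high-CCT one (e.g. D65). Per the DNG convention, the
    blend is linear in 1/CCT; outside the calibrated range we simply
    clamp to the nearer matrix.
    """
    cct = min(max(cct, cct1), cct2)   # clamp into [cct1, cct2]
    w = (1.0 / cct - 1.0 / cct2) / (1.0 / cct1 - 1.0 / cct2)
    return w * np.asarray(m1) + (1.0 - w) * np.asarray(m2)
```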

Best regards,

Doug
 