Doug Kerr
I have spent much of this past week looking into many layers of the colorimetric theory behind the behavior of a digital camera sensor and how we process its outputs. Many fascinating facts have come to light. Here's one that I find especially fascinating.
********
We generally label the three "channels" of a digital camera sensor R, G, and B. What does that mean?
We often speak of it in terms of the three channels "being sensitive to red, green, and blue light, respectively", understanding that those terms don't precisely describe kinds of light; after all, the notation is just for "labeling", and is not intended to imply a "specification".
But, even subject to that understanding, the description falls short. The "response" of a sensor channel is described by a curve extending over a substantial range of wavelengths. We can't really say that it is to some particular kind of light.
Perhaps we mean that, generally speaking, the "peaks" of the three curves fall at wavelengths we can reasonably associate (in the usual broad way) with the colors red, green and blue. And that is "sort of so" in most cases.
Since the sensor delivers three outputs that describe the color of the light on the sensor, it must operate in some sort of "color space", albeit perhaps not some standard one but rather a "private" one for the sensor. It would be what we call an "additive" color space. These describe a color in terms of the amounts of three "primaries" we would need to add together to create light of that color. The "RGB" family of color spaces is of course the most familiar example of such, and in fact we might think that the use of the designations R, G, and B for our sensor channels implies that they operate in some type of RGB color space.
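To make the "amounts of three primaries" idea concrete, here is a small sketch in Python. The matrix is the standard published linear-sRGB-to-XYZ matrix (its columns are the XYZ tristimulus values of the sRGB primaries); the same arithmetic applies to any additive color space, including a sensor's "private" one.

```python
# An additive color space describes a color by the amounts of three
# primaries that must be mixed to match it.  If we know the XYZ
# tristimulus values of each primary, a color's XYZ is a weighted sum.

# Columns: XYZ of the sRGB red, green, and blue primaries (D65 white).
SRGB_TO_XYZ = [
    [0.4124, 0.3576, 0.1805],
    [0.2126, 0.7152, 0.0722],
    [0.0193, 0.1192, 0.9505],
]

def mix(r, g, b, m=SRGB_TO_XYZ):
    """XYZ of the additive mix r*red + g*green + b*blue."""
    return [m[i][0] * r + m[i][1] * g + m[i][2] * b for i in range(3)]

# Equal full amounts of all three primaries reproduce the white point (D65).
white = mix(1.0, 1.0, 1.0)
```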
If that were actually true, then there would be a set of three primaries associated with it (whose chromaticities could reasonably be called, with the customary lack of precision, red, green, and blue).
Suppose we wanted to know, "Well, just what are the three primaries of a certain camera sensor?" If we know the response curves of the three channels of the sensor (DxO Labs is kind enough to publish those for many cameras, based on their laboratory tests), can we determine the three primaries of the sensor?
We can. But the results may be shocking. It turns out that in every case, the three primaries are imaginary. That means that they cannot be physically generated, and if they could be, could not be seen. They are mathematical fictions (although that does not prevent them from being the premises of a color space). (The same is true of the primaries of the CIE XYZ color space, widely used as the description of color for much scientific work.)
If we plot the chromaticities of these primaries on a CIE x-y chromaticity chart (the one on which we most often see plotted such things as the chromaticity gamut of a color space), we would find their dots all outside the "region of visible chromaticities".
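"Plotting on the x-y chart" is just a normalization of the tristimulus values. A small sketch, using the well-known sRGB red primary as a "visible" example and the XYZ primaries themselves as imaginary ones:

```python
# A chromaticity (x, y) comes from tristimulus values by normalizing:
# x = X/(X+Y+Z), y = Y/(X+Y+Z).

def chromaticity(X, Y, Z):
    s = X + Y + Z
    return (X / s, Y / s)

# The sRGB red primary (XYZ = 0.4124, 0.2126, 0.0193) lands at about
# (0.64, 0.33) -- inside the horseshoe of visible chromaticities.
srgb_red = chromaticity(0.4124, 0.2126, 0.0193)

# The XYZ "primaries" are the unit vectors (1,0,0), (0,1,0), (0,0,1);
# they project to (1,0), (0,1), and (0,0), all of which lie outside
# the region of visible chromaticities -- they are imaginary.
x_primary = chromaticity(1.0, 0.0, 0.0)
```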
How can I say with assurance, "in all cases" (meaning, for any sensor design)?
In an actual sensor design, the spectral sensitivity curves cannot have any negative values, not because of any mathematical rule, but because such would be impossible as the behavior of an actual sensor channel.
It then turns out that the primaries implied by a set of sensor spectral response curves that were "everywhere non-negative" will always be all imaginary. (I will spare you the proof.)
Now, how can we deal with this bizarre situation in working with the sensor outputs? Easily. It turns out that (assuming certain conditions are met by the sensor response curves), a description of a color in terms of any valid set of three primaries (even imaginary ones) can be mathematically converted ("transformed") into a description of the color in terms of any other valid set of three primaries, including (for example) the primaries of the sRGB color space or the primaries of the Adobe RGB color space. And doing so is part of the "development" of the raw sensor data.
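The "transformation" mentioned above is nothing more than a 3×3 matrix multiplication, and successive transformations compose into a single matrix. A sketch: the camera matrix below is purely hypothetical (a stand-in for what raw converters derive for a real sensor), while the XYZ-to-linear-sRGB matrix is the standard published one.

```python
CAM_TO_XYZ = [        # HYPOTHETICAL sensor-space -> XYZ matrix
    [0.60, 0.25, 0.10],
    [0.25, 0.65, 0.10],
    [0.05, 0.10, 0.94],
]
XYZ_TO_SRGB = [       # standard XYZ -> linear sRGB matrix
    [ 3.2406, -1.5372, -0.4986],
    [-0.9689,  1.8758,  0.0415],
    [ 0.0557, -0.2040,  1.0570],
]

def matvec(m, v):
    return [sum(m[i][k] * v[k] for k in range(3)) for i in range(3)]

def matmat(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

# The two steps compose into one camera -> linear-sRGB matrix,
# which is essentially what a raw converter applies per pixel.
CAM_TO_SRGB = matmat(XYZ_TO_SRGB, CAM_TO_XYZ)

raw = [0.2, 0.5, 0.1]                       # hypothetical sensor output
one_step = matvec(CAM_TO_SRGB, raw)
two_step = matvec(XYZ_TO_SRGB, matvec(CAM_TO_XYZ, raw))
```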
Now here's the opposite face of this surprising situation.
Assume that we are designing a digital camera, and we only contemplate its output being in the sRGB color space. Why don't we just equip the three kinds of sensels with filters that will result in the three channel outputs being the description of the color in terms of the primaries of the sRGB color space (that is, the outputs would be ready to be turned into sRGB coordinates by merely applying gamma precompensation)?
We can't, because the sensor response curves needed to bring this about would have negative values for some ranges of wavelength, which as we mentioned before, could not actually happen.
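We can actually watch this happen numerically. The response curves an "sRGB-native" sensor would need are the CIE 1931 color-matching functions pushed through the standard XYZ-to-linear-sRGB matrix; using the (approximate) published values of those functions at 500 nm:

```python
# CIE 1931 color-matching function values at 500 nm (approximate):
xbar, ybar, zbar = 0.0049, 0.3230, 0.2720

XYZ_TO_SRGB = [       # standard XYZ -> linear sRGB matrix
    [ 3.2406, -1.5372, -0.4986],
    [-0.9689,  1.8758,  0.0415],
    [ 0.0557, -0.2040,  1.0570],
]

def dot(row, v):
    return sum(a * b for a, b in zip(row, v))

cmf = (xbar, ybar, zbar)
r, g, b = (dot(row, cmf) for row in XYZ_TO_SRGB)

# r comes out well below zero: the red channel of such a sensor would
# have to respond *negatively* to 500 nm light, which no physical
# filter-plus-photodiode combination can do.
```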
Thus we could have, in a sensor design, one of these two but not both:
• The three response curves would be everywhere non-negative (a physically-realizable sensor).
or
• The implied primaries would be physically realizable.
Accordingly, we have no choice but to use a set of response curves that would lead to an implied set of "imaginary" primaries, and then transform the representation in those terms (given by the sensor outputs) into a representation in terms of a set of physically-realizable primaries, as in one of the color spaces in which we want our image output to be.
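Once that matrix transformation has been applied, the only remaining step mentioned above (for sRGB output) is gamma precompensation. A sketch of the standard sRGB encoding function, with out-of-range values clipped first:

```python
def srgb_gamma(c):
    """Standard sRGB gamma precompensation for one linear channel value,
    with simple clipping of out-of-range (e.g. out-of-gamut) values."""
    c = min(1.0, max(0.0, c))
    # Linear segment near black, power-law segment elsewhere.
    return 12.92 * c if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

encoded = [srgb_gamma(c) for c in (0.0, 0.5, 1.0)]
```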
Fun stuff, wot?