The color space of raw data

Doug Kerr

Well-known member
Although we rarely speak of it, a set of raw data from a particular camera type represents image color using a specific color space, the native color space of the sensor itself. Here, I will discuss that a little. This discussion is in a sense excerpted from my forthcoming article, "The Color Space of a Digital Camera Sensor".

Color spaces

In our context a color space means a fully-defined system of representing color by the values of (usually) three coordinates.

Tristimulus color spaces

The familiar "RGB" family of color spaces can be described as nonlinear tristimulus color spaces. The three values that define a color, R, G, and B, are nonlinear forms of three underlying values, r, g, and b. Those values tell us the "amounts" of three kinds of physical light, called the primaries, which if presented together to a viewer in those amounts will exhibit the color of interest.

There are tristimulus color spaces, such as the CIE XYZ color space, which use the same principle, but with an important wrinkle: their three primaries are not physically realizable as any kind of light, or even as non-visible radiation. We cannot, even in a laboratory, construct a physical emission of the "X" primary.

Nevertheless, these imaginary "primaries" follow the laws governing the color resulting from the mixture of primaries, and so, on paper, numerous combinations of amounts of these three primaries will indeed correspond to colors of light.

(One is tempted to say "visible light", but of course, rigorously, only visible radiation is "light", and in any case only visible radiation has "color", which is a perceptual property.)

The color space of a digital camera sensor

The outputs of the sensels of a conventional digital camera describe color in a system of three coordinates - the values of the outputs of the three classes of sensel detectors.

This situation constitutes a color space - the "native color space" of the sensor. We can also think of it as the color space of the raw data, which (more-or-less verbatim) records the sensel outputs.

We may be thrown off the track here by the fact that, in a CFA sensor, we do not get three values for any given pixel. That, however, does not disrupt the existence of a three-coordinate color space.

Recall that, even with a true "tricolor" sensor, we do not observe the color of every point in the image (there are an infinite number of those). Rather, we "sample" it at discrete points, at the pixel pitch.

The entire image (as, say, presented in a paper print) is derived from interpolation of those discrete color determinations (in part by the perhaps-Gaussian distribution of the ink patterns).

In a CFA sensor, we sample the three coordinates of the image color on staggered grids of discrete points, but the concept is no different because of that.
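To make "staggered grids" concrete, here is a minimal sketch of the common RGGB Bayer layout (assumed here for illustration; actual CFA layouts vary by camera):

    def bayer_channel(row, col):
        # Which color coordinate an RGGB Bayer CFA samples at pixel (row, col)
        if row % 2 == 0:
            return 'R' if col % 2 == 0 else 'G'
        return 'G' if col % 2 == 0 else 'B'

    for r in range(4):
        print(' '.join(bayer_channel(r, c) for c in range(4)))
    # R G R G
    # G B G B
    # R G R G
    # G B G B

Each coordinate is thus sampled on its own regular (but offset) grid of discrete points.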

A metaphor might be a survey project in which we take data for all the trees in a certain forest, but, for reasons of data economy, for each group of three trees, we record for one only its latitude, for the next only its longitude, and for the third only its altitude. This does not in any way invalidate the notion that this data is taken under the standard latitude/longitude/altitude coordinate system.

If we wanted to determine the color space of a CFA sensor in a laboratory, we would presumably do so with targets of uniform color, so concern with this matter would disappear.

How does demosaicing get into this?

The color space of which I speak is that of the sensor itself, meaning observed at the point where the sensel outputs are developed - in digital form, if you wish. This is before demosaicing. So demosaicing is not involved. (It may get tangled up in some of the implications, of course.)

There are many demosaicing algorithms, and it may well be that the color space of the image as first demosaiced is a different color space - a transform of the sensor native color space.

The primaries of the sensor native color space

We can readily guess (correctly) that the digital sensor native color space is of the tristimulus type. And in fact (for comfort), we usually label the three coordinates "R", "G", and "B".

Does that mean that this tristimulus color space works with primaries that have chromaticities something like the chromaticities of the R, G, and B primaries in common RGB-family color spaces?

No. In fact, the primaries of the native sensor color space, for most cameras that I know of, are imaginary (just like our infamous friends X, Y, and Z). This is of necessity so; a demonstration of this will be found in my article (which I need to finish up and publish).

On a related point, from here on, to avoid any misunderstanding, I will refer to the outputs of the three classes of sensels (and the sensel classes themselves) as R, G, and B.

Suppose that (to get rid of the demosaicing worry) we consider the sensor as regarding an image of uniform color. We look at the outputs of the R, G, and B sensel detector groups; we will imagine that all R outputs are the same value, and so forth, ignoring spatial and temporal noise.

Could we now in a laboratory, where we can construct light emitters having any spectrum we wish, construct three emitters (our primary sources) such that if we "throttled" them in proportion to the R, G, and B outputs of the sensor they would, for any color regarded by the sensor, create for the viewer that color? No.

Nevertheless, a valid set of three coordinate values describes a color.

Transformation from the native color space

We often see (in, for example, the DxO Labs reports on specific cameras) tables of the "sensitivities" of the three sensel groups, and often a transformation matrix. (These are related, in an obscure way.)

The purpose of that matrix is to define a transformation of the R, G, and B values from the sensor into r, g, and b values for some particular RGB-family color space (perhaps sRGB). From those, by applying the defined nonlinear transform, the R, G, and B values of the color in that color space would be developed.
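A sketch of that two-step pipeline (Python; the 3x3 matrix below is a made-up placeholder, not any real camera's, while the nonlinear step is the standard sRGB encoding):

    import numpy as np

    # Placeholder sensor-RGB -> linear-sRGB matrix. A real one comes from
    # characterizing the particular sensor; rows summing to 1 preserve white.
    M = np.array([[ 1.8, -0.6, -0.2],
                  [-0.3,  1.5, -0.2],
                  [ 0.0, -0.4,  1.4]])

    def srgb_encode(c):
        # The defined nonlinear transform of the sRGB color space
        c = np.clip(c, 0.0, 1.0)
        return np.where(c <= 0.0031308, 12.92 * c,
                        1.055 * np.power(c, 1.0 / 2.4) - 0.055)

    camera_rgb = np.array([0.20, 0.35, 0.15])  # sensor R, G, B values
    linear_rgb = M @ camera_rgb                # r, g, b in the target space
    print(srgb_encode(linear_rgb))             # final nonlinear R, G, B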

This transformation may, in whole or in part, be embedded into the demosaicing algorithm of the camera or of external raw development software.

There are many complications here, relating to the spectrum of light under which the sensel detector sensitivities are determined. These are beyond the scope of this note.

Best regards,

Doug
 

Andrew Rodney

New member
The familiar "RGB" family of color spaces can be described as nonlinear tristimulus color spaces.
The three values that define a color, R, G, and B, are nonlinear forms of three underlying values, r, g, and b.

Always nonlinear?

There are tristimulus color spaces, such as the CIE XYZ color space, which use the same principle, but with an important wrinkle: their three primaries are not physically realizable as any kind of light, or even as non-visible radiation. We cannot, even in a laboratory, construct a physical emission of the "X" primary.

I like to call them Theoretical color spaces. Some other well-known theoretical RGB color spaces are sRGB, Adobe RGB (1998), and ProPhoto RGB.

(One is tempted to say "visible light", but of course, rigorously, only visible radiation is "light", and in any case only visible radiation has "color", which is a perceptual property.)

As my color geek friend Karl Lang has stated:
Color is not a particular wavelength of light. It is a cognitive perception that is the end result of the excitation of photoreceptors followed by retinal processing and ending in the visual cortex. We define colors based on perceptual experiments.

This situation constitutes a color space - the "native color space" of the sensor.

Based on whatever RGB filters are used to constitute that space, right? And perhaps some extrapolation as we continue below?

If we wanted to determine the color space of a CFA sensor in a laboratory, we would presumably do so with targets of uniform color, so concern with this matter would disappear.

Here’s my copy and paste with respect to cameras, based on a number of conversations with Eric Walowit and Jack Holm, both on the ICC digital photo group.

Digital cameras don't have a gamut, but rather a color mixing function. Basically, a color mixing function is a mathematical representation of a measured color as a function of the three standard monochromatic RGB primaries needed to duplicate a monochromatic observed color at its measured wavelength. Cameras don’t have primaries, they have spectral sensitivities, and the difference is important because a camera can capture all sorts of different primaries. Two different primaries may be captured as the same values by a camera, and the same primary may be captured as two different values by a camera (if the spectral power distributions of the primaries are different). A camera has colors it can capture and encode as unique values compared to others, that are imaginary (not visible) to us. They don't exist and therefore are not colors. There are colors we can see, but the camera can't capture, that are imaginary to it. Most of the colors the camera can see we can see as well. Yet some cameras can “see colors” outside the spectral locus, but usually every attempt is made to filter those out. More important is the fact that cameras “see colors” inside the spectral locus differently than humans.
The point is that if you think of camera primaries you can come to many incorrect conclusions because cameras capture spectrally. On the other hand, displays create colors using primaries. Primaries are defined colorimetrically, so any color space defined using primaries is colorimetric. Native (raw) camera color spaces are almost never colorimetric, and therefore cannot be defined using primaries. Therefore, the measured pixel values don't even produce a gamut until they're mapped into a particular RGB space. Before then, *all* colors are (by definition) possible.
Raw image data is in some native camera color space, but it is not a colorimetric color space, and has no single “correct” relationship to colorimetry. The same thing could be said about a color film negative.
Someone has to make a choice of how to convert values in non-colorimetric color spaces to colorimetric ones. There are better and worse choices, but no single correct conversion (unless the “scene” you are photographing has only three independent colorants, like with film scanning).
Do raw files have a color space? Fundamentally, they do, but we may not know what that color space is. The image was recorded through a set of camera spectral sensitivities which defines the intrinsic colorimetric characteristics of the image. A simplistic way to think of this is that the image was recorded through a set of "primaries" and these primaries define the color space of the image.
If we had spectral sensitivities for the camera, that would make the job of mapping to XYZ better and easier, but we'd still have decisions on what to do with the colors the camera encodes, that are imaginary to us.
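To make the "spectral sensitivities, not primaries" point concrete, here is a toy numeric sketch (Python; the sensitivity curves and spectra are made-up illustrative shapes, not measurements from any camera):

    import numpy as np

    wl = np.arange(400, 701, 10)  # sample wavelengths, nm

    def gauss(center, width):
        return np.exp(-0.5 * ((wl - center) / width) ** 2)

    # Toy spectral sensitivities for the R, G, B sensel classes
    sens = np.stack([gauss(600, 40), gauss(540, 40), gauss(460, 40)])

    def camera_rgb(spectrum):
        # The camera's color mixing: integrate the stimulus spectrum
        # against each sensitivity (a plain dot product on this grid)
        return sens @ spectrum

    spd1 = gauss(550, 15)                                # one spectrum
    spd2 = 0.5 * gauss(530, 25) + 0.55 * gauss(570, 25)  # a different one

    print(camera_rgb(spd1))  # two spectra that matched for a human observer
    print(camera_rgb(spd2))  # could still differ here, and vice versa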


How does demosaicing get into this? Does that mean that this tristimulus color space works with primaries that have chromaticities something like the chromaticities of the R, G, and B primaries in common RGB-family color spaces?

No. In fact, the primaries of the native sensor color space, for most cameras that I know of, are imaginary (just like our infamous friends X, Y, and Z). This is of necessity so; a demonstration of this will be found in my article (which I need to finish up and publish).

Seems to confirm my understanding above from discussions with Eric and Jack.
 

Doug Kerr

Well-known member
Hi, Andrew,

Always nonlinear?
Well, all the RGB color spaces I know of are. But perhaps I should not have been so broad.

I like to call them Theoretical color spaces. Some other well-known theoretical RGB color spaces are sRGB, Adobe RGB (1998), and ProPhoto RGB.
What is the significance of "theoretical" here? Do you mean that they are not normally rigorously followed?

As my color geek friend Karl Lang has stated:

Based on whatever RGB filters are used to constitute that space, right? And perhaps some extrapolation as we continue below?

Well, for example, the sRGB color space is not defined in terms of filters. We might hope to make a sensor whose responses are r, g, and b by using physical filters. But in fact the response of at least one of those filters would have to be negative somewhere in its domain, which cannot happen.
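One can check this numerically. Take spectral light of about 500 nm, whose CIE 1931 chromaticity is roughly x = 0.008, y = 0.538, and ask what amounts of the sRGB primaries would be needed to match it, using the standard XYZ-to-linear-sRGB matrix. A sketch (Python; the chromaticity values are approximate):

    import numpy as np

    # Standard XYZ -> linear sRGB matrix
    M = np.array([[ 3.2406, -1.5372, -0.4986],
                  [-0.9689,  1.8758,  0.0415],
                  [ 0.0557, -0.2040,  1.0570]])

    # Approximate chromaticity of spectral ~500 nm light, at Y = 1
    x, y, Y = 0.008, 0.538, 1.0
    XYZ = np.array([x / y * Y, Y, (1.0 - x - y) / y * Y])

    print(M @ XYZ)  # r comes out strongly negative: no non-negative
                    # amount of the red primary completes the match

A physical filter cannot have a negative response, so no filter set can realize these matching functions directly.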

Here’s my copy and paste with respect to cameras, based on a number of conversations with Eric Walowit and Jack Holm, both on the ICC digital photo group.

Digital cameras don't have a gamut . . .
I don't agree. If we take some color camera, we find that there is a three dimensional solid (in some color space of our choice, for example, xyY) that bounds all the colors (as described in that color space) that can be distinctly represented in the camera output.

. . . but rather a color mixing function. Basically, a color mixing function is a mathematical representation of a measured color as a function of the three standard monochromatic RGB primaries needed to duplicate a monochromatic observed color at its measured wavelength. Cameras don’t have primaries, they have spectral sensitivities, and the difference is important because a camera can capture all sorts of different primaries.
I have no idea what "capture all sorts of different primaries" means. There are many primaries that are physically realizable, and many cameras can deliver distinct outputs for them.

Two different primaries may be captured as the same values by a camera, and the same primary may be captured as two different values by a camera (if the spectral power distributions of the primaries are different).
Certainly.

A camera has colors it can capture and encode as unique values compared to others, that are imaginary (not visible) to us. They don't exist and therefore are not colors. There are colors we can see, but the camera can't capture, that are imaginary to it. Most of the colors the camera can see we can see as well. Yet some cameras can “see colors” outside the spectral locus, but usually every attempt is made to filter those out. More important is the fact that cameras “see colors” inside the spectral locus differently than humans.
All well said.
The point is that if you think of camera primaries you can come to many incorrect conclusions . . .
True, but that doesn't mean that the concept is invalid.

. . . because cameras capture spectrally. On the other hand, displays create colors using primaries. Primaries are defined colorimetrically, so any color space defined using primaries is colorimetric. Native (raw) camera color spaces are almost never colorimetric, and therefore cannot be defined using primaries. Therefore, the measured pixel values don't even produce a gamut until they're mapped into a particular RGB space. Before then, *all* colors are (by definition) possible.

Raw image data is in some native camera color space, but it is not a colorimetric color space, and has no single “correct” relationship to colorimetry.

The same thing could be said about a color film negative.
Someone has to make a choice of how to convert values in non-colorimetric color spaces to colorimetric ones. There are better and worse choices, but no single correct conversion (unless the “scene” you are photographing has only three independent colorants, like with film scanning).
Do raw files have a color space? Fundamentally, they do, but we may not know what that color space is. The image was recorded through a set of camera spectral sensitivities which defines the intrinsic colorimetric characteristics of the image. A simplistic way to think of this is that the image was recorded through a set of "primaries" and these primaries define the color space of the image.
If we had spectral sensitivities for the camera, that would make the job of mapping to XYZ better and easier, but we'd still have decisions on what to do with the colors the camera encodes, that are imaginary to us.

An interesting discussion. Mostly makes sense to me.

In any case, each "channel" of a CFA sensor has a different spectral sensitivity (although we may not be told what that is). Then, its outputs can be deemed to constitute a tristimulus color space with specific primaries (even though they be "imaginary", just like the X, Y, and Z primaries). In fact, if the response curves for the three channels are all nowhere negative, it can be shown that the implied primaries must be imaginary (that is, cannot be given physical life as any kind of radiation, visible or otherwise).

Thanks for your inputs.

Best regards,

Doug
 

Andrew Rodney

New member
Well, all the RGB color spaces I know of are. But perhaps I should not have been so broad.

OK, I wasn't totally sure, but I'd go out on a limb and say there are some linear ones, such as the linear color space Adobe uses in Lightroom and ACR, which is based on ProPhoto primaries but with a 1.0 TRC. I suspect other raw converters work this way (rumor has it Aperture uses Adobe RGB (1998) primaries).

What is the significance of "theoretical" here? Do you mean that they are not normally rigorously followed?

Maybe conceptual would be a better term. They are not ‘real’; someone just made up some math to describe them. The sRGB spec is based on a conceptually (theoretically) defined CRT display, down to the ambient conditions it would exist in.

I don't agree. If we take some color camera, we find that there is a three dimensional solid (in some color space of our choice, for example, xyY) that bounds all the colors (as described in that color space) that can be distinctly represented in the camera output.

The question is how that 3D solid was defined and created. It goes back to the color mixing function of these capture devices. A very old post from the ColorSync list might better define the concept. I suspect you’ll have a much better time of deciphering Hunt than I (you have his book)?

On 4/5/99 10:48 AM, "Richard F. Lyon" wrote:
The discussion of profiling Leaf and other cameras is very interesting, but doesn't go far enough in clarifying the connections with underlying color science and photographic science (these are very different things!).

Bruce's and Andrew's comments below are right on, but as I say don't go far enough. A camera (or a scanner) doesn't "HAVE" an RGB gamut unless it has computation built in to convert its measurements to an RGB space. If a camera measures a scene through filters that are 'color matching functions' (see Hunt, especially the section on Television signal processing) as it should, then ALL colors can be correctly measured and represented. But then the numbers are usually matrixed to somebody's standard RGB space, based on a set of primaries. In this process, some of the pixels can get negative values of R, G, or B, meaning that those colors are not inside the triangle bounded by the primaries. It is at this point that the gamut becomes limited, because nobody's output file formats allow negative values (not even in 16-bit files, stupidly!), so they clip them one way or another.

So usually what comes out of a camera or scanner is in an ordinary RGB space that the engineers picked or defaulted to, and of course you can profile to figure out what the primaries seem to be, what the gamut limitations are, etc.


In any case, each "channel" of a CFA sensor has a different spectral sensitivity (although we may not be told what that is). Then, its outputs can be deemed to constitute a tristimulus color space with specific primaries (even though they be "imaginary", just like the X, Y, and Z primaries). In fact, if the response curves for the three channels are all nowhere negative, it can be shown that the implied primaries must be imaginary (that is, cannot be given physical life as any kind of radiation, visible or otherwise).

That again seems to jibe with Eric, Jack, and Richard.
 

Doug Kerr

Well-known member
Giant mea culpa here.

Well, I guess I should have read my own papers before posting this morning's note.

In fact, the typical digital camera sensor does not have primaries, even imaginary ones, because (as Andrew and his colleagues ably pointed out) they are not ordinarily colorimetric.

That is, they do not (necessarily) deliver a consistent set of three values for different instances of light that have the same color but different spectra (that is, the sensors are subject to metameric error).

Only if their spectral response curves obey a certain set of conditions (often called the Luther-Ives conditions) can they be colorimetric in that sense. But conforming to those conditions is unattractive in practice, for a variety of reasons, and just doesn't happen in the sensors we normally encounter.
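The condition is checkable by least squares: the three sensor curves must be linear combinations of the CIE color matching functions, so one can project the measured curves onto the span of the CMFs and look at the residual. A sketch (Python; the arrays stand in for real measured data):

    import numpy as np

    def luther_ives_residual(sensor_curves, cmfs):
        # sensor_curves: 3 x N sensel spectral responses
        # cmfs:          3 x N CIE color matching functions, same wavelengths
        # Fit the sensor curves as linear combinations of the CMFs; a
        # residual near zero means the Luther-Ives condition holds.
        A, *_ = np.linalg.lstsq(cmfs.T, sensor_curves.T, rcond=None)
        fit = (cmfs.T @ A).T
        return np.linalg.norm(sensor_curves - fit) / np.linalg.norm(sensor_curves)

For real sensors the residual is typically well above zero, which is exactly why their outputs are not colorimetric.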

If a sensor were colorimetric, then its outputs could in fact be considered related to a set of primaries (real or imaginary).

I don't know where I got off on this morning's binge! Probably a leftover from yesterday's error with my insulin dosage!

So I would say, forget the whole thing.

Instead, I refer interested readers to my earlier article, "Digital Camera Sensor Colorimetry" available here:

http://dougkerr.net/Pumpkin/articles/Sensor_Colorimetry.pdf

Sorry about all that.

Best regards,

Doug
 

Andrew Rodney

New member
So I would say, forget the whole thing.

Sorry about all that.

Wow, don’t throw the baby out with the bathwater, there’s good stuff there. It would be useful if all this stuff (from Eric, Richard, yourself, etc.) could be distilled down into an understandable piece. Even reading all these posts over and over, it’s not entirely clear; it’s color science!
 

Doug Kerr

Well-known member
Hi, Andrew,

Wow, don’t throw the baby out with the bathwater, there’s good stuff there. It would be useful if all this stuff (from Eric, Richard, yourself, etc.) could be distilled down into an understandable piece. Even reading all these posts over and over, it’s not entirely clear; it’s color science!
Sure.

Let me know what you think of the real article:

http://dougkerr.net/Pumpkin/articles/Sensor_Colorimetry.pdf

Thanks for your patience with me!

Best regards,

Doug
 

Doug Kerr

Well-known member
Hi, George,

Footnote 4 on page 3 of the article is fascinating:

"For almost all humans; there are a few humans, all women, whose perception of
color requires four values to describe."

What is this called?

They are called tetrachromats. The phenomenon is called tetrachromacy.

Where can I learn more about it?

Here's one discussion of the broader issue:

http://en.wikipedia.org/wiki/Tetrachromacy

Here's another:

http://www.post-gazette.com/pg/06256/721190-114.stm

Does it mean my wife has been right all these 40 years????

That's beyond even me!

Best regards,

Doug
 