The DxOMark sensor color response report

Doug Kerr

Well-known member
I suggest that the reader first read this article:

http://www.openphotographyforums.com/forums/showthread.php?t=21462

The well-respected testing organization DxOMark publishes thorough reports on the behavior of the sensors in many cameras. One interesting section of these reports is the Color Response section. Here we see a sample of that section, this one for the sensor of the Canon EOS 20D:

[Image: DxOMark Color Response section for the Canon EOS 20D sensor (illuminant D50)]


I first call attention to the leftmost three bar charts at the top. What do they tell us?

Let me step back from this report for a moment and review some matters relating to sensor response. First let us consider the kind of sensor I described (as item b) in my earlier article as desirable (but not attainable): one in which the sensor output values would be consistent for light of any given color, regardless of the spectrum that gives it that color. Such a sensor is spoken of as a "colorimetric" sensor, meaning literally that it "measures color".

We will assume that it delivers its "color" report in terms of three coordinates, not necessarily those of any color space we use for image interchange. It can be shown that we can convert its three output values to the three coordinates of some recognized color space (such as the coordinates r, g, and b, which are the underlying coordinates of the sRGB color space, not yet made non-linear) with a (fixed) linear transform. This can be executed by multiplying the sensor output values by a (fixed) matrix, which defines the transform.

But how can we determine the proper matrix? In a laboratory, we expose the sensor to a fixed level of light whose color is sequentially that of the three sRGB primaries. We record, for each primary, the outputs of the three sensor channels. We arrange those nine results as a 3×3 matrix and take its inverse. The resulting matrix is the one we should use (in the actual camera) to convert the sensor outputs to r, g, and b, from which we can proceed with the image processing.
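
To make that procedure concrete, here is a minimal numeric sketch (Python with NumPy). The response values are made up for illustration, not measurements of any real sensor:

```python
import numpy as np

# Hypothetical (made-up) channel outputs measured while exposing the
# colorimetric sensor to equal-level light of the sRGB red, green, and
# blue primaries. Rows are the three sensor channels; columns are the
# three primaries.
A = np.array([
    [0.80, 0.15, 0.05],   # channel 1 response to the R, G, B primaries
    [0.10, 0.75, 0.15],   # channel 2 response
    [0.05, 0.10, 0.85],   # channel 3 response
])

# The in-camera transform is the inverse of that matrix: it carries the
# three sensor outputs back to linear sRGB coordinates r, g, b.
M = np.linalg.inv(A)

# Check: the sensor outputs for the pure green primary are column 1 of A,
# and the transform recovers (0, 1, 0) in linear r, g, b.
print(np.round(M @ A[:, 1], 6))   # -> [0. 1. 0.]
```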

Very tidy.

[To be continued]
 

Doug Kerr

Well-known member
[Part 2]

Next suppose we have (as we always will in reality) a non-colorimetric sensor, such as the one I described as item c in the other article. I described there how we cannot have a transform that will "consistently" transform the sensor outputs into the coordinates of some color space. And I discussed how, in ISO 17321-1, an evaluation procedure is given that involves the construction of an "optimal" transformation matrix.

When we have this kind of sensor, is it useful to determine the response of its three channels to the three sRGB primaries? Well, the important answer is that there is really no way to do that, so we need not ponder whether it would be useful.

In a non-colorimetric sensor, in general the outputs will not be consistent for a given color of the light on the sensor over the various spectrums that can give that color (again, the cause of metameric error). If we were to test a sensor with one of the sRGB primaries, which spectrum giving that color would we use? (There is, for example, no spectrum prescribed for an sRGB primary, only a color.) So that concept is doomed.

And of course, we already saw that here the determination of an appropriate transformation matrix must be done not on the basis of "finding the right one", but rather on finding the one that is the best compromise. Hold that thought.
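
For a feel of what "best compromise" means computationally, here is a toy least-squares sketch. The actual ISO 17321-1 procedure is more elaborate (it works from spectral measurements and weights errors perceptually), and every number here is invented, so treat this only as the general shape of the computation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up measurements over N test patches:
#   T: 3 x N target color coordinates, S: 3 x N sensor outputs.
N = 24
T = rng.uniform(0.0, 1.0, size=(3, N))
mix = np.array([[0.9, 0.2, 0.1],
                [0.1, 0.8, 0.2],
                [0.0, 0.1, 0.9]])
S = mix @ T + 0.02 * rng.normal(size=(3, N))   # imperfect, noisy sensor

# Find the matrix M minimizing ||M @ S - T|| over all patches at once;
# lstsq solves S.T @ M.T ~= T.T column by column.
M_T, *_ = np.linalg.lstsq(S.T, T.T, rcond=None)
M = M_T.T

print(np.round(M, 3))                              # the compromise matrix
print(round(float(np.linalg.norm(M @ S - T)), 4))  # residual error
```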

******

Now, let's look at the three leftmost bar charts at the top of the DxO sensor Color Response report. The labeling suggests that they report the results of testing the response of each of the three sensor "channels" to each of the three sRGB primary colors. As was discussed above, this would be doable (and useful) for a colorimetric sensor. That data would in fact lead us directly to the proper matrix to be used to transform the sensor outputs to the coordinates of some color space (such as the values r, g, and b).

But we are dealing here with a non-colorimetric sensor. There, it is not even possible to make the test implied by the labeling of the charts. Hmm.

Moving on down the report page, we see a matrix described as "Color Matrix as defined in ISO standard 17321". We heard how that is determined, and we realize that, for a simple sensor data processing scheme, it would probably be the best matrix to use to transform the sensor outputs to r, g, and b (my terms for what the report calls R sRGB, and so forth; not really good names, since they suggest the R, G, and B of the sRGB color space, which are nonlinear, while these are linear values).

Now let's for a moment go back to the case of the fanciful colorimetric sensor. We note that if we test the response of the three sensor channels to the sRGB primaries, organize that data as a matrix, and take the inverse of that matrix, the resulting matrix is in fact the proper one to use to transform that sensor's data to r, g, and b. (ISO 17321 is not at all involved in this scenario.)

Now suppose we take that latter matrix and take its inverse. That will of course just give us back the original matrix, the one made up of the sensor response data.

Now back to the DxO report (of course for a non-colorimetric sensor). Here the matrix to be used to transform the sensor data into r, g, and b is not derived from sensor response test data but by a more complicated process. But suppose that, anyway, we take the inverse of that matrix. Will that be a matrix that describes the response of the sensor channels to the three sRGB primaries? No; the ISO 17321 matrix did not come from such data, and so we cannot back that response data out of it. (And of course, as I discussed earlier, there is really no such thing for a non-colorimetric sensor.)

Nevertheless, in the DxOMark color response report, the values of the nine bars in the three bar charts are taken from a matrix that is obtained by taking the inverse of the "ISO 17321 matrix" below! But the data is labeled to indicate that it is in fact the response of the three sensor channels to the three sRGB primaries.
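
To see the relationship numerically: given any stand-in for the report's ISO 17321 matrix (the values below are invented, not taken from any actual report), the bar heights are just the entries of its inverse:

```python
import numpy as np

# A made-up stand-in for the "Color Matrix as defined in ISO standard
# 17321" printed on a report (real values differ per camera and
# illuminant).
iso_matrix = np.array([
    [ 1.88, -0.92,  0.04],
    [-0.17,  1.58, -0.41],
    [ 0.05, -0.58,  1.53],
])

# Per the observation above, the nine bar heights in the three charts
# are simply the entries of the inverse of that matrix, not independent
# measurements of channel response to the sRGB primaries.
bars = np.linalg.inv(iso_matrix)
print(np.round(bars, 3))   # nine values -> three bar charts of three bars
```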

[To be continued]
 

Doug Kerr

Well-known member
[Part 3]

So, just what do the three bar charts tell us about the sensor? I wish I could be more comfortable with my thoughts on that. It may be true that they give us some idea of the relative contribution of light from different parts of the spectrum to each of the sensor outputs, for whatever that's worth. And it's certainly possible that this is just a misdirected effort to preserve a presentation that might have originated before the emergence of the "optimal matrix" concept, when we actually measured sensor channel response to (some handy form of) the three sRGB primaries.

******
Now, what about the rightmost bar chart? This of course relates to the often-discussed but rarely clarified matter of the relative sensitivity of the three sensor channels. Relative sensitivity to what? Aye, that's the rub.

Sometimes we expose the sensor to light with a uniform spectral power distribution (CIE illuminant E) and record the outputs of the three channels. Sometimes we expose the sensor to one of the standard illuminants (perhaps D50, or A).

In this case it seems likely to be the latter, as we can choose either of those illuminants as the context of the measurements, and the results on that graph change significantly as we go from one to the other.

In any case, it is almost inevitable that the three bars give the relative output values of the three sensor channels. That being the case, it makes no sense that they are labeled with the three sRGB primaries.
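
As a sketch of how such a measurement behaves, here is a toy computation (Python with NumPy; the sensitivity curves and the "warm" illuminant are crude made-up shapes, not real camera or CIE data) showing that the relative channel outputs do indeed shift with the illuminant:

```python
import numpy as np

wl = np.arange(400.0, 701.0, 5.0)                 # wavelength, nm

def bump(center, width):
    # Crude Gaussian stand-in for a channel's spectral sensitivity.
    return np.exp(-0.5 * ((wl - center) / width) ** 2)

sens = np.stack([bump(600, 40),                   # "red" channel
                 bump(540, 40),                   # "green" channel
                 bump(460, 40)])                  # "blue" channel

illuminants = {
    "E (equal energy)": np.ones_like(wl),
    "A-like (warm)":    (wl / 560.0) ** 4,        # rough warm slope
}

# Each channel's output is the integral of its sensitivity against the
# illuminant's spectral power distribution.
for name, spd in illuminants.items():
    out = np.trapz(sens * spd, wl, axis=1)
    print(name, np.round(out / out.max(), 3))     # relative channel outputs
```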

Well, that's all I know about that.

Best regards,

Doug
 

Asher Kelman

OPF Owner/Editor-in-Chief
Doug,

Thanks for this surprising and highly interesting discussion of color in photography with silicon. I used to think we could simply add a correction curve to the output of any camera and produce "the same" apparent look from, say, a Phase One versus a Canon shot.

But is this supposition true or sufficiently true to be able to deceive us?

Asher
 

Doug Kerr

Well-known member
Hi, Asher,

Doug,

Thanks for this surprising and highly interesting discussion of color in photography with silicon.

Thanks. It is fascinating stuff. Sadly, much of what one reads (even from supposedly authoritative sources) doesn't fit together. And it is often so editorially careless that it is hard to be certain just what is being said!

I used to think we could simply add a correction curve to the output of any camera and produce "the same" apparent look from, say, a Phase One versus a Canon shot.

But is this supposition true or sufficiently true to be able to deceive us?

Well firstly, what I talk about in this article is only one small subset of the properties of a sensor that can be measured objectively.

But, even by way of more comprehensive objective measurements, the "look" of a particular camera is a topic that is very elusive to quantify. I am reminded of the "golden years" of "high-fidelity" sound reproduction (just before the introduction of stereophonic sound, in fact), when there were endless discussions about the merits of this or that power amplifier, or even of this or that vacuum tube to be used in a power amplifier. The technical articles were awash in frequency and phase response curves and oscilloscope traces. But never, in comparing two amplifiers that were thought to be "in the top echelon", did we find any direct trail from the objective measurements to the subjective assessment of performance.

Of course, the camera manufacturers will not tell us just what "objective" properties of the sensor or the processing chain they hold responsible for the "look" for which they have become famous.

Best regards,

Doug
 

Jerome Marot

Well-known member
There is an essential difference between measuring the frequency response of a hi-fi amplifier and measuring the color response of a camera. The amplifier does not take into account the properties of the human auditory system (apart from limiting its bandwidth to 20 kHz). You input frequencies; it outputs the same set of frequencies.

A color camera is linked to the properties of the human visual system. You input wavelengths; it outputs a completely different set of wavelengths. It just so happens that this completely different set looks to be of a similar color to the (standard) human eye. It would look different to a spectrometer.

That essential difference is the reason why the whole measuring procedure is messy. It needs to make assumptions about what a "standard" vision system is and, more importantly, for the differences between input and output, it needs to assess whether these are "pleasing" or not.
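
A toy numeric demonstration of that point (Python with NumPy; the response curves are random stand-ins, but any three-channel system, eye or camera, behaves the same way):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 31                                    # coarse spectral samples
C = rng.uniform(0.0, 1.0, size=(3, N))    # hypothetical response curves

s1 = rng.uniform(0.2, 1.0, size=N)        # first spectrum

# Any direction in the null space of C changes the spectrum without
# changing the three responses: metamerism in one line of linear algebra.
_, _, Vt = np.linalg.svd(C)
s2 = s1 + 0.1 * Vt[-1]                    # a different, metameric spectrum

print(np.allclose(C @ s1, C @ s2))        # True: identical "color" report
print(np.allclose(s1, s2))                # False: different spectra
```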
 
I am lucky in that there is a lot of material published that helps to understand the color workings of my Sigma cameras. For example, numerous examples of the spectral responses of the 3 Foveon layers and a good few articles going on about those versus Luther-Ives and such.

And every Sigma X3F file has 3x3 matrices embedded in meta-data for use by Sigma's raw converter.

Thus it is that I know, for my SD10, the camera-to-XYZ 3x3 coefficients and also the WB correction (still in XYZ space) 3x3 coefficients. In other words, Sigma uses XYZ as a sort of PCS a la ICC profile.
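
A minimal sketch of how such embedded matrices would be chained (all coefficients below are placeholders, not Sigma's actual data):

```python
import numpy as np

cam_to_xyz = np.array([[0.61, 0.25, 0.10],
                       [0.28, 0.65, 0.07],
                       [0.02, 0.12, 0.95]])   # hypothetical coefficients

wb_in_xyz = np.diag([1.05, 1.00, 0.88])       # hypothetical WB, in XYZ

raw = np.array([0.42, 0.51, 0.33])            # one pixel's channel values

# Camera space -> XYZ (the PCS-like hub) -> white-balanced XYZ; the raw
# converter would then proceed from XYZ to a working space such as sRGB.
xyz = wb_in_xyz @ cam_to_xyz @ raw
print(np.round(xyz, 4))
```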

So, no need of DxOMark to figure out that kind of stuff, which is just as well since they tend to ignore Sigma anyway!

Ted
 

Doug Kerr

Well-known member
Hi, Ted,

Welcome to the forum.

I am lucky in that there is a lot of material published that helps to understand the color workings of my Sigma cameras. For example, numerous examples of the spectral responses of the 3 Foveon layers and a good few articles going on about those versus Luther-Ives and such.

And every Sigma X3F file has 3x3 matrices embedded in meta-data for use by Sigma's raw converter.
That is all good to hear.

Thus it is that I know, for my SD10, the camera-to-XYZ 3x3 coefficients and also the WB correction (still in XYZ space) 3x3 coefficients. In other words, Sigma uses XYZ as a sort of PCS a la ICC profile.
That makes a lot of sense. In fact the ISO matrix of which I spoke is actually cast in the XYZ space.

So, no need of DxOMark to figure out that kind of stuff, which is just as well since they tend to ignore Sigma anyway!

I have noticed that!

Thanks for writing.

Best regards,

Doug
 

Jerome Marot

Well-known member
And every Sigma X3F file has 3x3 matrices embedded in meta-data for use by Sigma's raw converter.


That is quite interesting. Presumably, not all sensors have the same matrix, or there would be no reason to embed them. That, in turn, would mean that there are minute differences between sensor batches, maybe because it is difficult to control the thickness of the silicon layers, which serve both as sensors and as filters.
 