Color correction info in EOS JPG files?

Doug Kerr

Well-known member
I have assumed that one of the following must be true for a Canon EOS camera:

A. The camera must embed in a JPG file information on the "color correction vector" used to perform color correction on the image in that file.

The reason is that, when we select a JPG image file to be the "reference frame" for use in setting a custom white balance (CWB), the camera, before evaluating the average chromaticity of the image over the "center spot", must "back out" the color correction that has been applied, for which it would need to know the vector that was applied.

Otherwise, the conclusion reached by the evaluation would be irreversibly poisoned by the color correction that had been applied (and we are not advised to set some specific fixed color correction when taking the reference frame to dispose of that uncertainty).

B. The camera must, on every shot, make a determination of the average chromaticity of the central spot (from the raw data, presumably) and record that in the JPG file metadata for use just in case the user later selects that frame for use as a CWB reference frame.


We know that when we take a raw output file, the color correction vector that was (or would have been) used to correct the JPG output is recorded in the metadata. We can read this in various raw processing tools.

Now:

1. Do we know whether A or B above is what actually happens?

2. Do we know how to read the color correction vector from a JPG file? (Perhaps it is read by some well-known image editing or viewing tool.)

Thanks for any insight into this you may have.
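To make the two theories concrete, here is a toy sketch in Python. The multiplicative model and all of the numbers are assumptions purely for illustration; the camera's actual transform is not published.

```python
# Toy model of the two theories. Chromaticities are (r, g, b) triples;
# "correction" is modeled as per-channel multiplication -- an assumption,
# not Canon's actual (unknown) transform.

def correct(chroma, vector):
    return tuple(c * v for c, v in zip(chroma, vector))

def back_out(chroma, vector):
    return tuple(c / v for c, v in zip(chroma, vector))

incident = (0.50, 0.55, 0.50)       # chromaticity of the incident light
vector_in_effect = (1.2, 0.9, 1.1)  # arbitrary WB correction active at capture

captured = incident                 # neutral card: reflected == incident
recorded = correct(captured, vector_in_effect)  # what the JPG holds

# Theory A: the JPG stores vector_in_effect; back it out on demand.
recovered_a = back_out(recorded, vector_in_effect)

# Theory B: the camera stores the captured (pre-correction) chromaticity
# of the center spot at shot time.
recovered_b = captured

# Either way, the CWB process can recover the incident chromaticity.
assert all(abs(a - b) < 1e-9 for a, b in zip(recovered_a, recovered_b))
```

Either theory gives the CWB engine the same starting point; the question is only which piece of data the file actually carries.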
 
I have to ask, why couldn't...

C. The software just takes the value for a selected white balance image and compares it against a default value to find the correction needed to adjust the white balance. This would seem to be the simplest solution. This is basically what PS and DPP do when you click the white balance pointer (except it is localised to the specific point, while the camera could use the overall value of the selected image, since it is supposed to be all of the sample color).
 

Doug Kerr

Well-known member
Hi, Tim,

I have to ask, why couldn't...

C. The software just takes the value for a selected white balance image and compares it against a default value to find the correction needed to adjust the white balance. . . .

If by "the value for a selected white balance image" you mean the chromaticity of (the central area of) that (JPEG) image, that chromaticity has been affected by the correction already applied (when that shot was taken), based on whatever white balance the camera happened to be set to when we took that frame.

Best regards,

Doug
 
Based on reading the manual and then doing some quick and cheesy testing, I am trying to disprove one or more of the theories.

If I use red suede as my white balance card (shot1), everything using that WB (CWB1) comes out with a cyan tint. Specifically my white card is cyan (shot2).

If I then use that shot (shot2) as a WB selection, the shots from that are correct (shot3).

If I double this up, shot1A is again the red suede, but with CWB1, it is green. And interestingly any shot with CWB set with either 1 or 1A comes out the same, with no discernible difference (at least to my eye).

So it can't be using vector data from the previous shot1 when shot2 is used for CWB or it would not get the correct WB on shot3.

Thus A must be false (unless I am tired and missing something).

Now the one part where I may be messed up is if the system somehow keeps the CWB history (to back out the data needed), but this could result in a rather large file if taken to extremes and thus I highly doubt it.

However, it must have some form of data from shot2 for use in CWB, so B is not false.

and my 'C' was just MY confusion in that the description of 'B' is missing the current image evaluation which must take place as well before the system compares against a known base value vectored by what we now believe to be derived from the process in 'B'

In other words, each image must be evaluated for use as a potential CWB image and that data embedded into the file's metadata (both raw and JPEG). When that file is used for a CWB, the system most likely evaluates the image to determine a value, then compares that against a base value, which is modified by the embedded data.

I do a lot better at this type of thing when live and in person, but trying my best to use what little brain I have left.
 

Doug Kerr

Well-known member
Hi, Tim,

Your scenario is not the one that relates to the matter of "Theory A" and "Theory B". The matter is not one of test targets of different color. We must always use a neutral target to make the needed white balance determination (as you of course conclude).

Here is an example that does illustrate the issue.

Suppose we have a subject area illuminated with light from a high color temperature incandescent floodlight.

We take a shot of a gray card (under that illumination) for use as a WB reference, with the camera WB set to "daylight" (frame 1A).

Then we take another shot of the gray card (also for use as a WB reference) with the camera WB set to "incandescent" (frame 1B).

If we pause and examine the JPG outputs for frame 1A and 1B in an editor, then the gray card in 1A will look rather "reddish", and in 1B will look rather "bluish" (at least relatively), as a result of the different WB presets used when they each were taken.

Now we select frame 1A as a custom WB reference, set the camera WB to CWB, and shoot our scene (let's suppose we leave the gray card in it) (frame 2A).

Then we select frame 1B as a custom WB reference, leave the camera WB at CWB, and again shoot our scene (frame 2B).

We examine the JPG outputs for frames 2A and 2B in an editor. They will look identical. The recorded chromaticity of the gray card in the two frames will be the same.

Thus, the "color correction vectors" deduced by the camera from the gray card images in reference frames 1A and 1B must have been identical, notwithstanding that the chromaticity of the gray card recorded in those two images was quite different (as a result of different WB settings being in force when they were taken).
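To put toy numbers on this (with multiplication standing in for the unknown transform, and all values invented for illustration):

```python
# Frames 1A and 1B record different chromaticities for the gray card,
# yet (under Theory A) yield identical deduced CWB vectors.
# Toy multiplicative model; all numbers are invented.

def apply_wb(c, v):
    return tuple(ci * vi for ci, vi in zip(c, v))

def back_out(c, v):
    return tuple(ci / vi for ci, vi in zip(c, v))

incident = (0.60, 0.50, 0.40)        # warm, incandescent-like light
daylight = (1.00, 1.00, 1.00)        # placeholder preset vectors
incandescent = (0.70, 1.00, 1.40)

rec_1a = apply_wb(incident, daylight)      # gray card recorded reddish
rec_1b = apply_wb(incident, incandescent)  # gray card recorded bluer
assert rec_1a != rec_1b                    # recorded chromaticities differ

# Each file carries the vector used on it, so both back out to the
# same captured chromaticity, hence the same new CWB vector.
cwb_1a = tuple(1 / c for c in back_out(rec_1a, daylight))
cwb_1b = tuple(1 / c for c in back_out(rec_1b, incandescent))
assert all(abs(a - b) < 1e-9 for a, b in zip(cwb_1a, cwb_1b))
```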

So far, it seems to me that the only way this could happen is as described in "Theory A" or in "Theory B".

Best regards,

Doug
 
Sometimes, I don't explain myself very well...

I tried different WB settings including AWB, incandescent, and even flash (I did not have daylight available, so did not think to try that one).

All results showed the same thing.

Thus, Theory A is still false,

because if A were true, then my third subsequent shot should have used the vector data from the second shot, and since that was a known false CWB result, it should have resulted in different colors. Since the colors were identical between the control and the false original CWB, the camera must be using "B".

What I did not try was different actual light sources, but I had presumed the light would be the same, because I would think that if I used a different enough light source for the CWB frame, I could not get a correct white balance on the subsequent shot. Also I would think the results should be no different than my shooting a non-white card target in the sequence.
 

Doug Kerr

Well-known member
Hi, Tim,

Sometimes, I don't explain myself very well...

I tried different WB settings including AWB, incandescent, and even flash (I did not have daylight available, so did not think to try that one).

All results showed the same thing.

Thus, Theory A is still false,

because if A were true, then my third subsequent shot should have used the vector data from the second shot, and since that was a known false CWB result, it should have resulted in different colors.
I don't think this disproved Theory A.

Under that theory, it doesn't matter whether the color correction vector used to correct a reference frame shot (such as your "second" shot) is "false" or not (or whether that error arises out of a two-stage process or not). (This is indeed the fact that we try to explain by either Theory A or Theory B.) Whatever that correction vector is, it was applied to the "captured" image to give the "delivered JPG image", and the vector used for the correction was saved with the image.

When that frame is selected as a CWB reference frame, then (conceptually) the saved correction vector is applied backwards to the "delivered" image, thus reconstructing the "captured" image. That reconstructed captured image is used to determine the chromaticity of the light from the test target, and if that target were neutral, that would be the chromaticity of the incident light. In any case, the correction vector that chromaticity would suggest is captured as the current CWB vector.

If the test target were not neutral, then of course that CWB vector, when CWB balance were used for a "real" shot, would not produce "correct" color correction. But the result would still be independent of any error in the correction vector applied when the reference shot was taken.

Or that's how it seems to me.

I need to more thoughtfully follow the sequence of your description, and perhaps a different outlook will emerge from that. If so, I will apply my thoughts to an annotation of that description.

Thanks again for your thoughts on this.

Best regards,

Doug
 

Doug Kerr

Well-known member
Hi, Tim,

Let me make sure I understand exactly what you are saying here.
Based on reading the manual and then doing some quick and cheesy testing, I am trying to disprove one or more of the theories.

First, let me note that it is accepted that the CWB process will not blindly accept the chromaticity of light from the target as representing the chromaticity of the incident light if it is "far off white".

To pursue that a little farther, in careful stages:

Suppose we take a shot of a card that is "truly neutral" under light that is generally "white" but does not match the reference white of our color space (so color correction is needed). That is shot A1. (We take no note of the WB in effect when we do that.)

We select that as a CWB reference. Then, with CWB in play, we again shoot that same card under the same incident light (shot B1).

In the delivered JPG image (B1), that card will appear "white" (that is, will have R=G=B), as a result of the working of the CWB system. (We don't yet know how that is able to work; that is the point of this whole discussion!)

Now we set the WB to anything we want and shoot a card that is "a little off neutral" (frame A2). We select A2 as a CWB reference.

We set CWB and shoot the card again (frame B2). In the delivered JPG image (B2), that card will appear "white" (that is, will have R=G=B).

Now we set the WB to anything we want and shoot a card that is "way off neutral" (red suede?) (frame A3). We select A3 as a CWB reference.

We set CWB and shoot the card again (frame B3). In the delivered JPG image (B3), that card will not appear "white" (that is, will not have R=G=B).

Why?

Presumably because the system, examining frame B3 as a reference frame, says, "I will assume that the target here is neutral, and thus I will assume that the chromaticity of the light from it is the chromaticity of the incident light under which it was shot, and I will adopt that assumed incident light chromaticity on which my new CWB correction vector will be based.

"But, holy smoke, that chromaticity is way off white. The color correction process only works for incident light that is somewhere near white. Either this was shot under deep red incident light (in which case I can't correct for it) or the target was deep red (in which case the shot is useless for determining the incident chromaticity). Thus I will adopt some scaled-down vector in the suggested direction as my CWB correction."

(This parallels the fact that, doing color correction of a raw file in Photoshop, if we point the eyedropper at a "really red" object, in effect saying, "make this white", the program says, "are you kidding me?", and won't do it.)

Thus, tests made with "really non white" light, or a "really non white" test card, will not reveal the performance of the color correction chain.

This probably doesn't really have anything to do with what we are discussing here, but I thought it was important to have it on the table (if we're talking about "red suede" test targets).

Now, back to your story.

If I use red suede as my white balance card (shot1), everything using that WB (CWB1) comes out with a cyan tint.

I presume you mean, "unnaturally cyan-ish". That is to be expected.

Specifically my white card is cyan (shot2).

Do you mean a "really white" card in the shot or your "red suede" card?

If you mean a really white card, then yes, we would expect it to look cyan, since the color correction vector would be in that direction (but not so much as to make a shot of the "red" card look "white", for the reason I discussed above.)

If I then use that shot (shot2) as a WB selection; the shots from that are correct. (shot3),

Was that a shot of a really white card?

If I double this up, shot1A is again the red suede, but with CWB1, it is green. And interestingly any shot with CWB set with either 1 or 1A come out the same, with no discernible difference (at least to my eye).

I'm losing track of what you shot with what CWB basis. Sorry.

So it can't be using vector data from the previous shot1 when shot2 is used for CWB or it would not get the correct WB on shot3.

If shot2 is of a really white card, then used as a CWB reference, we will get the "correct" result in another shot used by it, regardless of the white balance in effect when we shot it. So we cannot be concerned with whether that white balance was "correct" or not. (If we had it set to "fluorescent", or "1800K", it probably wouldn't have been "correct" either for the reference shot, but the CWB process would still have played out correctly.)

If we stipulate to Theory A, then whatever color correction was in effect when shot 2 was taken (wherever it came from, even if that was a CWB set from some previous shot of a card of whatever color), that correction vector was saved with shot 2. When shot 2 was selected as a (new) basis for CWB, that vector is "backed out" of the finished JPG image to get an "uncorrected" image, from which the color of the incident light (during shot 2) is deduced, based on the assumption that the target in shot 2 was neutral.

What makes this difficult for me to address is that, once you describe "[using] red suede as your white balance card", then I can no longer be certain when you speak of a "white" card whether you mean that "red white card" or a "white white card."

Still, here is the bottom line:

It does not seem to matter what white balance correction is in effect when we take a CWB reference shot. It can be "AWB", or "daylight", or "tungsten", or a "CWB" earlier set from any old reference shot. If the target of that reference shot is neutral, then when we select that shot as a CWB reference, the color correction which CWB will then give us (on a real shot) is proper for the incident light involved when we took the reference shot.

And both Theory A and Theory B are credible scenarios for that happening.

I hope that helps you understand my position.

Best regards,

Doug
 
Okay, let me try it this way to clarify.

(all recorded on card as JPEGs only)

Sequence 1
Shoot red suede = frame R1 (frame R1 is red)
Use R1 as CWB
Shoot red suede = frame R2 (frame R2 is green)
Use R2 as CWB
Shoot white card = frame W1 (frame W1 is cyan)
Use W1 as CWB
Shoot white card = frame W2 (frame W2 is white, R=G=B)

Sequence 2
Shoot red suede = frame R3 (frame R3 is red)
Use R3 as CWB
Shoot white card = frame W3 (frame W3 is cyan)
Use W3 as CWB
Shoot white card = frame W4 (frame W4 is white, R=G=B)

Sequence 3
Shoot white card = frame W5 (frame W5 is off white, with a yellow tint due to the light)
Use W5 as CWB
Shoot white card = frame W6 (frame W6 is white, R=G=B)

Since the results show that all three methods result in a proper white result at the end...

Given that R2 appears green but produces W1 just as an R1 CWB would, the camera can't be using a single-shot vector; it would have to maintain the history of the vectors used for each previous shot.

If Theory A were true, then the JPEG would have to maintain the vector data from shots R1, R2 and W1 in order to back out the vector used in determining CWB to get a correct shot for W2. This would result in a huge file after a while. This becomes impractical if carried to extremes.

Unfortunately, I suppose it is possible, but to test it I would need very controlled conditions to detect the relatively small file-size difference, or I would have to spend way too much time doing CWB samples.

As a final piece of data: if I use R2 as CWB and take a frame of the red suede (R4), it also appears green.

I used red suede only because it was handy, and defocused to get a mostly even color. I suppose I could shoot different colors in sequence, but I suspect that when all is done, only the frame actually used for setting CWB has an effect on the color.
 

Doug Kerr

Well-known member
Hi, Tim,

Thank you so much. That is very helpful.

If Theory A were true then the jpeg would have to maintain the vector data from shots R1, R2 and W1 in order to back out the vector used in determining CWB to get a correct shot for W2.

Not at all.

When W1 is taken, some correction vector was used. Suppose it is entirely random, and we know nothing about the "chain" that developed it. It might have been left in the camera from some work we did last week (the last time we set CWB).

The color of the "white" target as initially captured is the result of the chromaticity of the incident light (which we want to know) and the reflective chromaticity of the white card (which we assume is "perfectly neutral"). Thus the "initially captured" chromaticity of the card is indeed the chromaticity of the incident light.

That "captured" chromaticity of the white card is then shifted by the CWB correction vector then in effect (whatever it is, however it got into the CWB system). It is recorded along with the corrected image.

Now when we select frame W1 as a CWB reference frame, the CWB system "backs out", from the recorded chromaticity (in that frame) of the white card, the recorded correction vector. The result is the chromaticity of the captured image of the white card (captured when W1 was taken). If we follow backwards the logical train above, we find that this chromaticity (if the white card is in fact neutral) is exactly the chromaticity of the incident light (which is what we want to know).

Note that there is no concept of, nor need for, the corrected image W1 being "correct". We just need to get back from it to the "captured image W1", and all we need to do that is the vector that was used to change the captured image to the corrected image W1 (whatever its value, wherever it came from). From the chromaticity of the reconstructed captured image of the white card, the system knows how to get a "properly corrected" image of the scene in W2.

The entire process takes place from when you take shot W1 on. Anything you have done before (playing with red velvet or whatever) will affect how the image taken as W1 looks on-screen, but has no effect on attaining the proper correction of W2 based on the chromaticity of the incident light illuminating the white card for shot W1 (as you have in fact noted).

Since this works if we assume the correction vector in the CWB engine when W1 was taken is random and unknown, then surely it will work for a vector whose history you happen to know (since you made it in a two-step process).

Thus, there is no need for the "history" of that vector to be kept in the image file.

We can see this with a kind of bogus algebra, in which I use the multiplication operator to indicate the combining of two chromaticities to produce a third (and the division operator to indicate the inverse effect).

Here is the basic notation:

V - A "color correction" vector. It is the inverse of the chromaticity of incident light for which correction will be performed (it is "blueish" in direction for correction of shots taken under "yellowish" light).

C - A chromaticity of light or of an image element

R - A "reflective chromaticity", where "1" means "perfectly neutral".

We start with the chromaticity of the incident light, Ci.

If the reflective chromaticity of the target card is Rt, then the chromaticity of the light reflected by the card (and captured by the sensor of the camera), Cc, is given by:

Cc=Ci*Rt

But since the card has a neutral reflectance, Rt=1, and thus:

Cc=Ci.

Now let Vcwb1 be the correction vector saved in the CWB engine when we take the reference shot, W1. We do not know its value, nor how it got into the CWB engine, nor its history.

When the CWB correction takes place during the processing of our reference shot (W1 in your scenario), the captured chromaticity of the white card, Cc, is transformed by color correction (using that vector) to the recorded chromaticity, Cr, this way:

Cr=Cc*Vcwb1

The value of Vcwb1 is also saved in the file for image W1 (under Theory A).

Now, we select frame W1 as a (new) CWB reference frame. Under Theory A, the CWB system does this:

Cr/Vcwb1=Cc'

But, if we follow the trail, Cc'=Cc

That is, it has recovered the originally captured chromaticity of the card by "backing out" the vector used to transform it in the beginning - whatever that was.

Now, we could get the chromaticity of the incident light this way:

Ci=Cr/Rt,

but since, by assumption, Rt=1, the system just does

Ci=Cr

and thus has in hand the "presumed" chromaticity of the incident light, Ci, which it needs.

It now determines its new correction vector, Vcwb2 (to be used, in your scenario, for correction of the actual "subject" shot), this way:

Vcwb2=1/Ci

Note that none of this requires the system to keep track of the history of Vcwb1. All that is important is that when the CWB engine wants to retrieve the "captured" chromaticity of the white card in the reference frame (from which it will deduce the chromaticity of the incident light), it backs out the same correction it put in.
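Run numerically in Python (with multiplication again standing in for the unknown transform, and all values invented), the chain looks like this:

```python
# The "bogus algebra" above, step by step. Toy values only.

def mul(a, b): return tuple(x * y for x, y in zip(a, b))
def div(a, b): return tuple(x / y for x, y in zip(a, b))

Ci = (0.50, 0.55, 0.50)   # incident light chromaticity
Rt = (1.0, 1.0, 1.0)      # neutral card: Rt = 1
Vcwb1 = (1.3, 0.8, 1.1)   # arbitrary vector already in the CWB engine

Cc = mul(Ci, Rt)          # Cc = Ci * Rt (= Ci, since the card is neutral)
Cr = mul(Cc, Vcwb1)       # recorded chromaticity in frame W1
# Under Theory A, Vcwb1 is saved in W1's file. Selecting W1 as a
# CWB reference, the engine backs the vector out:
Cc_recovered = div(Cr, Vcwb1)
Vcwb2 = tuple(1 / c for c in div(Cc_recovered, Rt))  # new correction vector

# Vcwb2 depends only on Ci; Vcwb1 and its history drop out entirely.
assert all(abs(v - 1 / c) < 1e-9 for v, c in zip(Vcwb2, Ci))
```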

None of this is inconsistent with Theory A as a candidate mechanism.

Does that help?

Best regards,

Doug
 
Doug,
It took me a while to figure out why it didn't seem right to me.
I think maybe your equations are wrong.

In sequence it should be

1. Cc=Ci*Rt

2. Vcwb=Rt-Cc
The camera presumes Rt = neutral white, but it does not have to be in reality.
I would also give Rt a value of 1 if it is neutral white, as it must be if either your or my equations are to give a correct result when using a correct WB card.
But in any case, Vcwb could be determined if you knew Ci and Rt, which would be possible with calibrated instruments and knowledge of the sensor's recording spectrum(s) in determining Cc.

3. Cr = (Ci*Rt)+Vcwb
or Cr = Cc+Vcwb
not
Cr=Cc*Vcwb1
You had it multiplying the vector, but it would add the vector not multiply it.

Once this correction is made, the results match my real-world results; with your multiplication, my red should have come out red (at least according to the Excel spreadsheet I made using your formulas).

I am now too tired to continue any more tonight, but the initial equations are now matching the results. I haven't finished thinking it through to confirm how a CWB image used as a CWB image relates to Theory A or B; just that the equations you were using were not making any sense to me and kept giving me a circular answer, which I knew to be false from the real-world test showing red, green and cyan.

For trivia
I presumed
- Ci had a value of 3 (this would be a constant within a particular set taken with the same light)
- a neutral white target provided an Rt of 1
- a red target provided an Rt of 6

and I note that I put in very simple values for something that would be a complex RGB value, but the math works.
 

Doug Kerr

Well-known member
Hi, Tim,

Doug,
It took me a while to figure out why it didn't seem right to me.
I think maybe your equations are wrong.

Indeed, it looks as if two are wrong (at least)!

Where I had:

Ci=Cr/Rt

it should have read:

Ci=Cc/Rt

(At this point in the chain, we had Cc, which we had recovered by the inverse transformation.)

Where I had:

Ci=Cr

it should have read:

Ci=Cc

You had it multiplying the vector, but it would add the vector not multiply it.

Actually, the operation is neither multiplication nor addition, but rather some transform we do not know. As I said, I just use multiplication as a metaphor for whatever transformation that is, and thus division for its inverse.

Once this correction is made the results match my real-world results . . .

Glad to hear that. Hope it holds with my corrections as I gave them above.

I am now too tired to continue any more tonight, but the initial equations are now matching the results. I haven't finished thinking it through to confirm how a CWB image used as a CWB image relates to Theory A or B; just that the equations you were using were not making any sense to me and kept giving me a circular answer, which I knew to be false from the real-world test showing red, green and cyan.

Sorry for the confusion. And I'm not certain I found all my errors yet!

Let me know how your thinking progresses.

For trivia
I presumed
- Ci had a value of 3 (this would be a constant within a particular set taken with the same light)
- a neutral white target provided an Rt of 1
- a red target provided an Rt of 6

and I note that I put in very simple values for something that would be a complex RGB value, but the math works.

Yes, that was my approach as well.

Best regards,

Doug
 
Hi, Tim,



Indeed, it looks as if two are wrong (at least)!

At least this proves my brain is not dead yet :)
Actually, the operation is neither multiplication nor addition, but rather some transform we do not know. As I said, I just use multiplication as a metaphor for whatever transformation that is, and thus division for its inverse.

I think actually addition/subtraction is what it does. It is simple and it works.

Going with a single pixel, it provides a single intensity value to the processor, which then converts it to a red value, a green value and a blue value based on the Bayer filter position. For a raw image it stops here, except to compress and save, as far as a single pixel is concerned. WB is only applied to JPEGs, or to displaying raw images, but it is not part of the raw data set.

To do WB, all it would need to do is add or subtract appropriately to each of the three channels to get a corrected value for that pixel. The system would presume the value of the color of the light was constant over the sensor and only the reflectivity of the targets would change.

I grant it is possible the system might compensate for light intensity over the sensor making it a more complex equation, but for our purposes this has no effect to our results.

I have to go work on something else right now, but will continue to think on the equation and probably get back on later this evening.
 

Doug Kerr

Well-known member
Hi, Tim,

I think actually addition/subtraction is what it does. It is simple and it works.

Going with a single pixel, it provides a single intensity value to the processor, which then converts it to a red value, a green value and a blue value based on the Bayer filter position. For a raw image it stops here, except to compress and save, as far as a single pixel is concerned. WB is only applied to JPEGs, or to displaying raw images, but it is not part of the raw data set.

To do WB, all it would need to do is add or subtract appropriately to each of the three channels to get a corrected value for that pixel. The system would presume the value of the color of the light was constant over the sensor and only the reflectivity of the targets would change.

I don't think so.

Here is a simplistic model. I will work here in the "non-gamma-precompensated" tristimulus values, r, g, and b (not the "gamma-precompensated" values, R, G, and B). The rgb values are linear with luminance, and any manipulation we do is best done in those terms. I will scale these on the scale 0-1.

Suppose we have incident light whose chromaticity can be expressed by these (relative) rgb values: 0.50, 0.55, 0.50. (That is, its "green" component is 10% higher than its "red" and "blue" components.)

We have a "neutral" test subject whose tristimulus reflectance can be expressed by these rgb values: 0.9, 0.9, 0.9.

Then the light reflected by it can be characterized by these rgb values: 0.45, 0.495, 0.45. (Of course, the scale here is arbitrary.) The camera sensor system captures this.

Assume we have earlier measured the chromaticity of the incident light, and have characterized it, arbitrarily, as 0.50, 0.55, 0.50.

We can take the inverse of this as a "correction vector": [2.0, 1.818, 2.0]. (We of course must shift the image in the opposite direction from the departure of the chromaticity of the incident light from "reference white".)

Now we multiply the "captured" values (also treated as a vector) by our "color correction vector", thus:

[0.45, 0.495, 0.45] * [2.0, 1.818, 2.0] = [0.9, 0.9, 0.9]

The result vector is what is recorded in the final image for the subject (test target). (Again, the scale here is arbitrary.)

Thus we have regained the "neutral" aspect of the test target in the corrected image.

If we tried to work "by addition", it might go like this:

We would make the "incident light" vector the difference in the incident light tristimulus values from their average:

[0.50, 0.55, 0.50] - 0.517 = [-0.017, 0.033, -0.017]

We then make our "color correction vector" the negative of that (negative, rather than inverse, since here we are dealing with addition/subtraction rather than multiplication/division):

+0.017, -0.033, +0.017

We then add that correction vector to the "captured" chromaticity, thus:

[0.45, 0.495, 0.45] + [+0.017, -0.033, +0.017] = [0.467, 0.462, 0.467]

This latter triple portrays the "color" of the target in the corrected image. We can see that it is not "neutral" (for which we would have to see r=g=b).
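The two models can be run side by side in Python on the numbers above:

```python
# Multiplicative vs additive correction on the numbers above.

captured = (0.45, 0.495, 0.45)   # light from the 0.9-reflectance neutral card
incident = (0.50, 0.55, 0.50)    # measured incident-light chromaticity

# Multiplicative: correction vector is the inverse of the incident light
v_mul = tuple(1 / c for c in incident)
out_mul = tuple(c * v for c, v in zip(captured, v_mul))
assert abs(out_mul[0] - out_mul[1]) < 1e-9   # r = g = b: neutral restored

# Additive: correction vector is the negated deviation from the average
avg = sum(incident) / 3
v_add = tuple(avg - c for c in incident)
out_add = tuple(c + v for c, v in zip(captured, v_add))
assert abs(out_add[0] - out_add[1]) > 1e-3   # NOT neutral: offsets don't scale
```

The additive offsets fail because they do not scale with the overall level of the captured values, while the chromatic error they are meant to cancel does.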

Now of course we both recognize that neither "multiplication" nor "addition" is likely the actual operative mechanism. But I hope you can see why I have adopted multiplication, rather than addition, as my metaphor for the process.

I could do a similar homily as to why "backing out the correction vector" wouldn't work under the "addition" model (even though at first it might seem as if it would). The reason has to do with the scaling of the r, g, and b values between different places in the system. Recall that above I emphasized the arbitrary nature of the scale of r, g, and b, which causes no problems if:

a. We are only interested in chromaticity, not luminance, since chromaticity is a creature of the relative values of r, g, and b

and

b. All operations are performed with multiplication and division.

Best regards,

Doug
 
I had to look up tristimulus, but fortunately my 3-years of Latin served me well and my guess was close.

I am starting to lose you and except for short breaks have to concentrate on my other project right now.

I just know Theory A is most likely not correct, but I can't prove it right now. When things like this happen I am correct far more often than not, but the "not" happens often enough that I could be wrong.

Sorry I can't spend the time or brain power to figure it out right now.

If nobody else figures it out, maybe I can get back to it late next week.
 
I have assumed that one of the following must be true for a Canon EOS camera:

A. The camera must embed in a JPG file information on the "color correction vector" used to perform color correction on the image in that file.

I've just skimmed the thread, so I don't know that I will be adding much to it, but in the maker note of most images (and JPEGs from many cameras do contain the same maker data as the raws) the camera writes the RGB gains and you can use those to back out the camera white balance.

-Colleen
 

Doug Kerr

Well-known member
Hi, Colleen,

I've just skimmed the thread, so I don't know that I will be adding much to it, but in the maker note of most images (and JPEGs from many cameras do contain the same maker data as the raws) the camera writes the RGB gains and you can use those to back out the camera white balance.

Is variation of the RGB gains the likely way the camera actually applies the "WB correction"? I would think that if this were so, then the correction would influence the raw outputs, which it doesn't.

Are you by any chance related to Charlie Vermilion, who worked for Kentron International (then a subsidiary of The LTV Corporation) in Dallas in about 1971-1977?

Thanks for writing.

Best regards,

Doug
 
Hi, Colleen,
Is variation of the RGB gains the likely way the camera actually applies the "WB correction"? I would think that if this were so, then the correction would influence the raw outputs, which it doesn't.

Those gains + the camera mode are all the white balance information Bibble gets to render the raws (other than our profiles for each camera, of course). If you know the camera characteristic and the WB mode/gains, you can back it out fairly easily, although not perfectly in the case of a JPEG. Close enough for "click white" type corrections at least. I'm not sure what you mean when you say it "doesn't influence the RAW". The gains in the file are what the converter uses for "as shot" (or the converter's interpretation of as shot). Are you saying that the raws shot after you set the custom balance with the JPEG don't come up correctly white balanced? Now I'm actually going to have to go back and read everything, aren't I? That's what I get for jumping in without following the whole thread :)
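To illustrate what "backing it out" could look like, here is a sketch with made-up numbers (my own illustration, not Bibble's actual code): divide the recorded gains out of a patch's average, then derive new gains that would render that patch neutral.

```python
# Sketch of "backing out" an applied white balance (hypothetical numbers,
# not any converter's actual code).

applied_gains = (1.8, 1.0, 1.4)     # hypothetical (R, G, B) gains from the maker note
patch_avg = (160.0, 150.0, 145.0)   # hypothetical "click white" patch average in the JPEG

# Back out the correction that was applied to the image ...
uncorrected = tuple(v / g for v, g in zip(patch_avg, applied_gains))

# ... then compute gains that would balance the uncorrected patch (green fixed).
r, g, b = uncorrected
new_gains = (g / r, 1.0, g / b)

# Applying the new gains makes the patch neutral (r = g = b).
balanced = tuple(v * k for v, k in zip(uncorrected, new_gains))
```

As noted above, for a JPEG this is only approximate, since dividing out the gains does not undo the gamma curve or the camera's color rendering.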

Are you by any chance related to Charlie Vermilion, who worked for Kentron International (then a subsidiary of The LTV Corporation) in Dallas in about 1971-1977?

Nope, but I do get calls from that Vermillion clan every so often asking if we are :) We're the ridge runner Vermillions out of West Virginia.

-Colleen
 

Doug Kerr

Well-known member
Hi, Colleen,

Those gains + the camera mode are all the white balance information Bibble gets to render the raws (other than our profiles for each camera, of course).

It seems that the "gains" you speak of are not the amplifier gains prior to the ADC (I jumped to that conclusion, since I am most familiar with the use of the term "gain" there) but rather are the scaling constants used in applying color correction during image processing.

If you know the camera characteristic, and the wb mode/gains you can back it out fairly easily although not perfectly in the case of a JPEG. Close enough for "click white" type corrections at least.

Makes sense.

I'm not sure what you mean when you say it "doesn't influence the RAW".

That was based on my thinking that the "gains" you spoke of were the pre-ADC gains. But of course those can't be adjusted easily, so that couldn't have been what you meant anyway!

The gains in the file are what the converter uses for as shot (or the converter's interpretation of as shot).

Sure. They are the coefficients of what I refer to as the "color correction vector".

I never knew in what form the data was that, for example, PS ACR uses to report (or apply) the "as shot" color correction. Now I do.

Of course, it transforms those values into a description of the chromaticity of ambient light that would call for such a correction, expressed in terms of correlated color temperature (labeled just "temperature") and Planckian offset (labeled "tint", which I understand to be 3000 times the "duv" value used to quantify Planckian offset in formal colorimetric work).

Are you saying that the raws shot after you set the custom balance with the JPEG don't come up correctly white balanced?

No, nothing like that. I get what you are saying, and it is of course something I have been visualizing, just not with the label "gains" attached to the coefficients (although of course it is wholly apt).

This gains data must be in the Canon EOS JPG files (owing to the need to "back it out" in the Canon scheme of setting custom white balance from a reference frame that was taken with no indication that it will be used for a WB reference). Is there any application that will read/decode this from the Canon JPG maker data?

Thanks so much for making the bridge for me between these various notions.

Best regards,

Doug
 
This gains data must be in the Canon EOS JPG files (owing to the need to "back it out" in the Canon scheme of setting custom white balance from a reference frame that was taken with no indication that it will be used for a WB reference). Is there any application that will read/decode this from the Canon JPG maker data?

I think that ExifTool extracts a lot of that information, but I haven't tested it specifically. The EXIFToolGUI I have does show the table of RGGB values for each white balance mode for the couple of Canon files I checked.
The ExifTool site:
http://www.sno.phy.queensu.ca/~phil/exiftool/

The EXIFToolGUI site:
http://freeweb.siol.net/hrastni3/foto/exif/exiftoolgui.htm

You would calculate two gains from that information: G1/R and G2/B, then use them as multipliers for the red and blue channels respectively.
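That recipe in code (my own sketch; the sample quadruple is the "as shot" RGGB row from the 1D3 ExifTool output quoted elsewhere in this thread):

```python
# From an RGGB levels quadruple (R, G1, G2, B) as ExifTool reports it,
# form the two channel multipliers G1/R and G2/B per the recipe above.

R, G1, G2, B = 2070, 1015, 1015, 1461   # sample "as shot" levels from this thread

red_gain = G1 / R    # multiplier for the red channel
blue_gain = G2 / B   # multiplier for the blue channel

# Apply to a hypothetical raw triple; green is left alone.
pixel = (1000.0, 1000.0, 1000.0)
balanced = (pixel[0] * red_gain, pixel[1], pixel[2] * blue_gain)
```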

-Colleen
 

Doug Kerr

Well-known member
Hi, Colleen,

I think that ExifTool extracts a lot of that information, but I haven't tested it specifically. The EXIFToolGUI I have does show the table of RGGB values for each white balance mode for the couple of Canon files I checked.
The ExifTool site:
http://www.sno.phy.queensu.ca/~phil/exiftool/

The EXIFToolGUI site:
http://freeweb.siol.net/hrastni3/foto/exif/exiftoolgui.htm

Those look fabulous. I'll download them and see if I can get the pair on the air. Thanks so much.

You would calculate two gains from the information : G1/R and G2/B then use them as multipliers for the red and blue channels respectively.

I'll look into that. Sounds like fun.

Thanks so much for your help.

Best regards,

Doug
 

Emil Martinec

New member
Exactly. I was wondering when someone would get around to mentioning exiftool. On a 1D3 file it reports

WB RGGB Levels As Shot : 2070 1015 1015 1461
Color Temp As Shot : 5668
WB RGGB Levels Auto : 2070 1023 1023 1461
Color Temp Auto : 5668
WB RGGB Levels Daylight : 1984 1023 1023 1543
Color Temp Daylight : 5200
WB RGGB Levels Shade : 2277 1023 1023 1293
Color Temp Shade : 7000
WB RGGB Levels Cloudy : 2147 1023 1023 1402
Color Temp Cloudy : 6000
WB RGGB Levels Tungsten : 1397 1023 1023 2442
Color Temp Tungsten : 3203
WB RGGB Levels Fluorescent : 1785 1023 1023 2099
Color Temp Fluorescent : 3969
WB RGGB Levels Flash : 2173 1023 1023 1360
Color Temp Flash : 6309
WB RGGB Levels Custom 1 : 1984 1023 1023 1543
Color Temp Custom 1 : 5200
WB RGGB Levels Custom 2 : 1984 1023 1023 1543
Color Temp Custom 2 : 5200


So it seems to report specific multipliers for scaling the four color planes (I presume the numbers are 1000 times the multiplier values), as well as a color temp, for the various presets as well as the "as shot" color balance.
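For anyone who wants to work with that report programmatically, here is a small sketch (my own, not part of ExifTool) that parses the textual "tag : values" lines above into tuples keyed by WB mode:

```python
# Parse ExifTool's textual WB report (layout as quoted above) into
# (R, G1, G2, B) tuples and color temperatures keyed by WB mode.

report = """\
WB RGGB Levels As Shot : 2070 1015 1015 1461
Color Temp As Shot : 5668
WB RGGB Levels Daylight : 1984 1023 1023 1543
Color Temp Daylight : 5200
"""

levels = {}
temps = {}
for line in report.splitlines():
    tag, _, value = line.partition(" : ")
    tag = tag.strip()
    if tag.startswith("WB RGGB Levels "):
        levels[tag[len("WB RGGB Levels "):]] = tuple(int(v) for v in value.split())
    elif tag.startswith("Color Temp "):
        temps[tag[len("Color Temp "):]] = int(value)

# levels["As Shot"] is now (2070, 1015, 1015, 1461), temps["As Shot"] is 5668.
```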
 

Doug Kerr

Well-known member
Hi, Emil,

Oooh! Neat.

I assume you are speaking of a JPG file.

Thanks so much. Now I have a neat new thing to figure out!

Guess I'd better get my ExifTool installed!

Exactly. I was wondering when someone would get around to mentioning exiftool. On a 1D3 file it reports . . .
<snip>

So it seems to report specific multipliers for scaling the four color planes (I presume the numbers are 1000 times the multiplier values), as well as a color temp, for the various presets as well as the "as shot" color balance.

Makes sense to me.

Thanks again (to Colleen, too).

Best regards,

Doug
 
So it seems to report specific multipliers for scaling the four color planes (I presume the numbers are 1000 times the multiplier values), as well as a color temp, for the various presets as well as the "as shot" color balance.

I'm not sure I understand what you mean by "4 color planes". The values are R Gr Gb B, with different Gs for the alternating rows in the Bayer pattern. You divide them to get the ratios to correct the red and blue and leave the green channel alone. Most of the time the two greens are identical, but they toss them in there to allow for the day when they might not be, I guess.

So to white balance a raw file you read out your RGGB values and do Rout = Rs*(gr/r), Gout = Gs, Bout = Bs*(gb/b); make sure you clip to the appropriate white point so you don't get a magenta cast in your highlights, then you do your camera profile and such.
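That formula as a small function (a sketch with hypothetical levels and a hypothetical 12-bit white point, not any converter's actual code):

```python
# White-balance one raw RGB triple using RGGB levels (r, g1, g2, b):
# Rout = Rs*(g1/r), Gout = Gs, Bout = Bs*(g2/b), then clip every channel
# to a common white point so boosted highlights can't come out magenta.

def white_balance(rgb, rggb_levels, white_point=4095.0):
    r, g1, g2, b = rggb_levels
    rs, gs, bs = rgb
    out = (rs * (g1 / r), float(gs), bs * (g2 / b))
    return tuple(min(v, white_point) for v in out)

# Hypothetical levels that double red and blue; both clip at the white point:
print(white_balance((4000.0, 3000.0, 3500.0), (1000, 2000, 2000, 1000)))
# -> (4095.0, 3000.0, 4095.0)
```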

-Colleen
 

Doug Kerr

Well-known member
Hi, Colleen,

I think that ExifTool extracts a lot of that information, but I haven't tested it specifically. The EXIFToolGUI I have does show the table of RGGB values for each white balance mode for the couple of Canon files I checked.
The ExifTool site:
http://www.sno.phy.queensu.ca/~phil/exiftool/

The EXIFToolGUI site:
http://freeweb.siol.net/hrastni3/foto/exif/exiftoolgui.htm

Wow! What an amazing toolkit. Thank you so much for introducing it to me. It will be good for so many things.

I will be playing with the RGGB "levels" (and the associated "gains") to see how that fits together with my conceptual notion of a "color correction vector".

Best regards,

Doug
 

Doug Kerr

Well-known member
Hi, Emil,

No this was a RAW file. For some reason my camera doesn't output jpegs ;-)

Ah. Must be some religious preference.

However, I have since gotten and installed ExifTool, and I find that very same array in a JPG output file from my 20D (as well as in the CR2 file).

It is interesting that, although the G1 and G2 coefficients in the table for all the WB presets are "always" 1023, 1023, the actual G1 and G2 coefficients shown for "as shot" are usually (maybe always) 1015, 1015.

But it's fabulous to have this further insight into the working of The Mind of Canon.

Have you perhaps never set CWB on your machine (or the temperature for "K")? It is curious that the coefficients for "Custom 1" (which I believe to be CWB) are identical to those for "Custom 2" (which I assume is "K"). Maybe if neither of those has been touched, the factory default for "K" is 5200 K, and the default vector for CWB is the same as that default vector for "K".

Thanks for the additional scoop.

Best regards,

Doug
 