
The new life of the X-Rite Passport

Doug Kerr

Well-known member
The X-Rite Passport is a foldable set of color targets, now supported by new software packages. I thought I would take a quick look to see just what is going on here.

In my after-breakfast (Sunday) theory lecture today, I conjectured that this set of tools might give us the opportunity to do color correction that takes into account the fact that, for some light sources, the spectral power distribution (SPD) is such that white-balance color correction based on a simple determination of the chromaticity of the illumination will not approach "theoretically ideal" white-balance color correction.

My conclusion is that it does, even though the single presentation of its use by the manufacturer I have so far looked at (the "Photoshop" version of their video) only hints at it.

The device (I describe it as "Barbie's eye shadow compact") includes three targets:

• A neutral gray target for making traditional white-balance color correction measurements.

• A "GretagMacbeth ColorChecker Classic" target, a pattern of numerous small patches of known reflective spectrum. My discussion here will concentrate on this.

• A target including various patches of nearly-neutral reflective color but not quite. Its purpose is to be used in post-processing to provide color correction that intentionally varies slightly from "theoretically-ideal" so as to "beautify" the subject's skin. (We have encountered this notion before in connection with one of Drew Strickland's products.)

The associated software allows us to use a shot of the second of these targets to build a camera profile (spoken of as a "DNG" color profile, although that may not be a meaningful description).

This profile describes the color response (in both luminance and chromaticity, I assume) at various places in the color space (that is, for the object colors of the patches), taking into account the illumination on the target.

Now, if we invoke this profile when processing a camera raw file, it will overcome inconsistencies in the camera's response across the color space, taking account of the illumination while it's at it, assuming that the illumination for the actual shot is the same as that used to shoot the target. (This is the same kind of thing we do with respect to the response of a printer when we construct a profile for it.)

The video mentions that of course, ideally this should be done separately for "various kinds of illumination" (tungsten, daylight, etc.), and the appropriate profile used when processing raw files for shots taken under that "type" of illumination.

But it would in fact seem advantageous, when dealing with "difficult" lighting situations (as those Asher so poignantly described for us in connection with his work at Colburn) to make such profiles on an "ad hoc" basis (for example, for shots to be taken on a certain region of the stage with a certain lighting setup in effect).

This does not require anything complicated in production - just shooting the ColorChecker target at a key spot on the stage for each scenario. The heavy lifting is done by the new X-Rite software during post.

Well, that's how it looks after about ten minutes' contemplation.

Best regards,

Doug
 

Asher Kelman

OPF Owner/Editor-in-Chief
Good intro! See if you can find a workaround to use the "DNG profile" with Capture One, Aperture, Lightroom, etc.!

Asher
 

Doug Kerr

Well-known member
Regarding the significance of "DNG profile", Adobe saith thus:

Why are the new profiles called "DNG camera profiles"? Can they only be used with DNG images?

The name "DNG camera profile" is used because (1) the underlying camera profile format comes from the DNG 1.2 specification and (2) these camera profiles can be embedded in DNG images.

The name does not mean that these profiles can only be used to process DNG images. In fact, the new AS and CM profiles can be used to process both DNG and non-DNG images.​

Best regards,

Doug
 

Doug Kerr

Well-known member
The following from the Luminous Landscape discussion of DNG profiles:

It's worth noting that if you find yourself shooting under very weird lighting conditions, such as unknown fluorescents and mixed conditions, all that one needs to do is take out a ColorChecker (even the little pocket version) and take a shot of it under the unusual lighting. Then, use that image to create a custom profile for shots taken under that particular lighting condition. Results can be quite remarkable, and almost impossible to otherwise achieve because the colour controls in the raw processing program itself can be too broad in their action.

Incidentally, evidently we can make a DNG profile from a shot of Barbie's eye shadow compact (once converted to DNG) with the Adobe DNG Profile Editor, not needing the X-Rite genie (although that is available free).

Best regards,

Doug
 

Andrew Rodney

New member
You need a DNG to build the profiles, after which you can use them on non-DNG (proprietary) raw files in an Adobe raw converter.
 

Andrew Rodney

New member
The DNG profile spec is open; those converters could use the profiles if they wanted, I believe. But presumably you would build the profiles in their converter.
 

Asher Kelman

OPF Owner/Editor-in-Chief
The DNG profile spec is open; those converters could use the profiles if they wanted, I believe. But presumably you would build the profiles in their converter.

Andy,

So far we discover that an X-Rite Passport-corrected DNG file, saved as DNG, has the identical ICC profile, but the X-Rite "DNG profile" is tagged on separately. So, when Capture One opens a saved DNG file from Adobe Camera Raw, it sees this file as no different from the original raw file out of the camera! So, how can we bridge this? What's the way to force the ICC profile to be changed to reflect the DNG side file that X-Rite makes for Adobe's software?

Unfortunately, my one good idea was not good enough! Hopefully, you might do better with your experienced thinking cap... or even perhaps an email to someone at X-Rite.

Asher
 

Andrew Rodney

New member
So, when Capture One opens a saved DNG file from Adobe Camera Raw, it sees this file as no different from the original raw file out of the camera! So, how can we bridge this?

That raw converter has to use the DNG profile as it would an ICC profile. But DNG profiles are built for scene-referred data and ICC profiles are built for output-referred data, which is one of the big deals about DNG profiles and why the Adobe team was never that hot on the idea of using ICC profiles for fingerprinting raw captures. IOW, C1 has to get their profile act together and offer both, assuming they still really feel an ICC camera profile is a good approach. Or they have to decide to allow DNG profiles to be used as designed.
 

Doug Kerr

Well-known member
Hi, Andrew,

That raw converter has to use the DNG profile as it would an ICC profile. But DNG profiles are built for scene referred and ICC profiles are built for output referred . . .
As I understand it, one of the implications of this is that the DNG-type profile can deal with differences in the camera's "response" at different places in the color space (for different wavelengths, if we want to look at it that way).

The ICC profiles wholly deal with the color coordinates of the image (always only three). Thus, if we were to look to them to "equalize" the camera response (much as we might do in audio work by applying a custom equalizer to a microphone before any further processing of the signal), the best they can do is to use different transfer functions for the three color coordinates.

(Now I have to read the spec for DNG profiles and see if what I said makes any sense!)
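To make that limitation concrete, here is a minimal Python sketch (my own illustration, not drawn from any ICC implementation): an equalization of this kind applies one transfer function per channel, so its effect on a pixel depends only on the three channel values taken separately, never on hue as such.

```python
# Sketch of an ICC-style per-channel "equalization": one 1-D transfer
# function is applied to each of R, G, B independently. It cannot
# target, say, saturated reds without affecting every other color
# that happens to share those channel values.

def per_channel_correct(rgb, curves):
    """Apply one 1-D transfer function per channel."""
    r, g, b = rgb
    f_r, f_g, f_b = curves
    return (f_r(r), f_g(g), f_b(b))

# Illustrative curves: boost red slightly, leave green, cut blue slightly.
curves = (lambda x: min(1.0, x * 1.05),
          lambda x: x,
          lambda x: x * 0.95)

corrected = per_channel_correct((0.5, 0.5, 0.5), curves)
```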

Best regards,

Doug
 

Doug Kerr

Well-known member
It appears that the basic structure of the part of the DNG profile that presents the "response" of the camera works this way:

• The coordinates of the color "observed" by the camera are transformed into Hue-Saturation-Value form (where Value is a quasi-luminance). (I don't yet know which HSV model is used - there are many, some of them rather revolting.)

• Typically, Value is ignored so that we are only dealing with chromaticity (in terms of the Hue and Saturation coordinates).

• There is, in the profile, a lookup table that, for any given "input" Hue-Saturation combination (a chromaticity), delivers an "output" Hue-Saturation combination (a chromaticity), which is then passed on for further processing. (I assume that this table may be more or less "coarse"; that is, it might have only a few entries spanning the chromaticity gamut, or a lot.)

(More on this later - I only report here the results of a "three-minute" analysis of this very complex specification, previously unread by me.)

Note that this table cannot be applied to the raw data itself - it can only be applied to a developed color image (in its "initial" form).
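The table lookup described above might be sketched in Python as follows. The grid resolution, the table contents, and the nearest-entry lookup are all invented for illustration; real DNG hue/sat map entries also carry a value scale, and a real implementation interpolates between entries.

```python
# Hypothetical coarse hue/saturation correction table: for each
# (hue bin, sat bin) cell we store a (hue shift in degrees, saturation
# scale) pair, loosely modeled on a DNG profile's hue/sat map.
HUE_BINS, SAT_BINS = 6, 2
table = [[(0.0, 1.0) for _ in range(SAT_BINS)] for _ in range(HUE_BINS)]
table[0][1] = (5.0, 0.98)  # nudge saturated reds, desaturate them slightly

def correct_hs(hue_deg, sat):
    """Look up the nearest table entry and apply it to one (H, S) pair."""
    h_idx = int(hue_deg / 360.0 * HUE_BINS) % HUE_BINS
    s_idx = min(SAT_BINS - 1, int(sat * SAT_BINS))
    hue_shift, sat_scale = table[h_idx][s_idx]
    return ((hue_deg + hue_shift) % 360.0, min(1.0, sat * sat_scale))

corrected = correct_hs(10.0, 0.9)  # a saturated red falls in cell (0, 1)
```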

Best regards,

Doug
 

Andrew Rodney

New member
As I understand it, one of the implications of this is that the DNG-type profile can deal with differences in the camera's "response" at different places in the color space (for different wavelengths, if we want to look at it that way).

The ICC profiles wholly deal with the color coordinates of the image (always only three). Thus, if we were to look to them to "equalize" the camera response (much as we might do in audio work by applying a custom equalizer to a microphone before any further processing of the signal), the best they can do is to use different transfer functions for the three color coordinates.

(Now I have to read the spec for DNG profiles and see if what I said makes any sense!)

I think that’s correct <g>. Again, when I refer to scene- vs. output-referred, my understanding is that ICC profiles are created from rendered (output-referred) data. DNG profiles take what I feel is a more logical approach: they use scene-referred data (or as close to scene-referred as the Adobe converters can hand off) to create the profile. Building output-referred profiles for printers makes total sense; it seems far less sensible to do this for raw data that itself has to be rendered. What happens if you alter the rendering? I think this is the reason that for years Adobe stayed away from using ICC camera profiles. DNG profiles are “used” early in the process, prior to the output-referred rendering. Then users apply their preferred rendering settings “on top” of that initial calibration, if you will (calibration because of the calibration tab we’ve had from day one in ACR and LR).

As to other converters, they too would have to totally alter the processing path to deal with DNG profiles. It would be vastly different from the ICC profiles they use today. In theory, I suppose they could have a switch that moves from early, scene-referred, DNG-profile processing to late, output-referred ICC processing. That might be big engineering, hence the reason we don’t see others using DNG profiles. Or they suspect that users feel the current solution works OK. There were debates for years about how awful it was that Adobe didn’t support ICC profiles. Well, they decided to do the processing in a way that I think makes far more sense. They had to develop the technology. With a label such as DNG, that too can put a bad taste in other parties' mouths, which doesn’t bode well for us end users.
 

Andrew Rodney

New member
Note that this table cannot be applied to the raw data itself - it can only be applied to a developed color image (in its "initial" form.)

Right. It couldn’t be applied to raw data for obvious reasons. But it’s applied very early in the process - presumably once the converter “assigns” what it wants to assume for the color space of the data after demosaicing (ProPhoto primaries with a TRC of 1.0?). And presumably the profile is used at this early stage, which in my mind makes far more sense than far later in the process, as ICC profiles require.
 

Doug Kerr

Well-known member
Well, sadly, the DNG spec does not define what HSV color space is involved here. I guess it is meant to be one of the "recognized" ones. Let me look into that.

Best regards,

Doug
 

Andrew Rodney

New member
Well, sadly, the DNG spec does not define what HSV color space is involved here. I guess it is meant to be one of the "recognized" ones. Let me look into that.

It has to be whatever the raw converter wants to assume the color space is.

This goes back to a long-standing argument: “does a raw file have a color space?” It stems from an early writing of Bruce Fraser, who wrote, “essentially, raw data is Grayscale.” Some argue a raw file is essentially a single-channel grayscale with the potential to be any color space, but that begs the question: what color space is it? My understanding (and please do look further) is that at such a stage in the demosaic process, a color space has to be assumed. In Adobe products, I believe it’s ProPhoto primaries, but it’s possible this comes a bit later in the process. At some point ProPhoto RGB with a linear TRC is the processing color space used prior to rendering and encoding out to the color space you the user ask for on export. But I’m not sure if that’s from the get-go or after some initial processing.

This is probably why the DNG spec is what it is. They let any 3rd party raw converter decide what and how they want to deal with the data.

If we could find someone writing code for a raw converter that handles DNG profiles, they could probably clarify this. You might want to contact Brian Griffith <brian@iridientdigital.com> who writes Raw Developer which I think does handle DNG.
 

Andrew Rodney

New member
Doug, I did find an email from Thomas Knoll when asked “does raw have a color space”

The fact that a mosaic array is “grayscale” is a red herring in this argument. An early processing step fills in the missing values, and you have a 3 or 4 channel image as a result. For most cameras, if you just “assign” a working space RGB profile, you get a recognizable color image as a result, so it certainly seems like a color space.

The camera color space differs from a more common working color space in that it does not have a unique one-to-one transform to and from CIE XYZ space. This is because the camera has different color filters than the human eye, and thus sees colors differently. Any translation from camera color space to CIE XYZ space is an approximation because of this.

And this:

Fundamentally, absolutely YES, but we may not know what that color space is. The image was recorded through a set of camera spectral sensitivities which defines the intrinsic colorimetric characteristics of the image. A simplistic way to think of this (while not purely accurate) is that the image was recorded through a set of "primaries" and these primaries define the color space of the image.

Practically, it makes little difference unless you are interested in accurate scene-referred data. In the context you described, a simple transform is applied to convert from the camera primaries to a new set of primaries (eg CIE or working space) that have more desirable characteristics than the RAW primaries.

Mathematically, of course, you know the foregoing is true. Matrix algebra informs us that if the final color image encoding is a "color space" then so must be all of its infinite number of linear transforms (let's ignore the 1-D non-linear transfer functions), including the original RAW encoding.

Eric Walowit


Raw image data is in some native camera color space, but it is not a colorimetric color space, and has no single “correct” relationship to colorimetry.

The same thing could be said about film negative densities.

Someone has to make a choice of how to convert values in non-colorimetric color spaces to colorimetric ones. There are better and worse choices, but no single correct conversion (unless the “scene” you are photographing has only three independent colorants, like with film scanning).

A purist might argue that a color space not based on colorimetry is not really a color space because it is not an assignment of numerical values to colors, defining colors as a human sensation. In the standards committees we decided it is useful to be able to talk about non-colorimetric color spaces so we allow them and use “colorimetric color spaces” when appropriate.

Jack Holm
 

Doug Kerr

Well-known member
I suspect that the HSV coordinate system used in the DNG profile definition is the one defined (in terms of R, G, and B) by these equations:

V = max(R, G, B)
S = (V − min(R, G, B)) / V     (S = 0 when V = 0)
H = 60° × H′, where, with C = V − min(R, G, B):
    H′ = ((G − B) / C) mod 6   if V = R
    H′ = (B − R) / C + 2       if V = G
    H′ = (R − G) / C + 4       if V = B
(H is undefined when C = 0.)


(This is based on an analysis of the current Wikipedia article on the HSL and HSV color spaces, which is quite good. It is a very curious, and somewhat depressing, area.)
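In Python, that hexcone definition works out to something like this (my own transcription of the standard equations, not code from any DNG implementation):

```python
def rgb_to_hsv(r, g, b):
    """Standard hexcone HSV: H in degrees [0, 360), S and V in [0, 1]."""
    mx = max(r, g, b)
    mn = min(r, g, b)
    v = mx
    c = mx - mn                      # chroma
    s = 0.0 if mx == 0 else c / mx
    if c == 0:
        h = 0.0                      # hue undefined for neutrals; use 0
    elif mx == r:
        h = 60.0 * (((g - b) / c) % 6)
    elif mx == g:
        h = 60.0 * ((b - r) / c + 2)
    else:
        h = 60.0 * ((r - g) / c + 4)
    return (h, s, v)
```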

Best regards,

Doug
 

Doug Kerr

Well-known member
Hi, Andrew,

Doug, I did find an email from Thomas Knoll when asked “does raw have a color space”

Thanks, and very interesting, but not the point at issue.

The point at issue is that, to apply the DNG camera profile, the raw data has to be developed to RGB, then the RGB values for each pixel converted to "HSV", and then the HSV coordinates (only H and S, actually) are used to enter a table, which delivers a new HSV value for the pixel (corrected for camera response), which is then converted to whatever color space we want the output file to be in.

The kernel of the question is, "which HSV color space is that", since there are several.

[Note that the camera response matrix cannot be applied to the raw data.]

Indeed the question of "is there a color space for raw data" is a fascinating (and tricky)
one, on which I have written here:

http://www.openphotographyforums.com/forums/showthread.php?t=11331

I think the three quotations you cite are all reasonable outlooks on this matter.

Thanks.

Best regards,

Doug
 

Asher Kelman

OPF Owner/Editor-in-Chief
Andrew,

This quote is most helpful

"Raw image data is in some native camera color space, but it is not a colorimetric color space, and has no single “correct” relationship to colorimetry.

The same thing could be said about film negative densities.

Someone has to make a choice of how to convert values in non-colorimetric color spaces to colorimetric ones. There are better and worse choices, but no single correct conversion (unless the “scene” you are photographing has only three independent colorants, like with film scanning).

A purist might argue that a color space not based on colorimetry is not really a color space because it is not an assignment of numerical values to colors, defining colors as a human sensation. In the standards committees we decided it is useful to be able to talk about non-colorimetric color spaces so we allow them and use “colorimetric color spaces” when appropriate.

Jack Holm"​

Also, your reference to a scene-referred color space v. an output-referred color space helps separate the two concepts well. It would still be nice to have a software app to internalize the X-Rite-modified DNG file into a "raw-modified" form which any other package could use.

Asher
 

Doug Kerr

Well-known member
To elaborate on the matter of "is there a color space for the raw data", let me summarize my lengthy discussion cited above.

First, to avoid a complication that is not helpful, forget the CFA arrangement and imagine that we have all three raw outputs (I will ignore the "two sets of green" situation) for each pixel of the image.

The raw "coordinates" for each pixel (the three values we usually label "R", "G", and "B", although the color names these imply are not very apt in this case) describe the color of the pixel in a "tristimulus" way, just as is done in the RGB color spaces or in the CIE XYZ color space. Accordingly, those values constitute the coordinates of a bona fide color space - just not one prescribed by any published specification, or having a familiar name.

In every tristimulus color space, there are stated or implied "primaries". In the RGB color spaces, these have "visible" chromaticities (we could actually generate viable radiation having those chromaticities).

In the case of the CIE XYZ color space, those primaries are "hypothetical". They do not represent any kind of radiation (visible or invisible). But they obey the laws for the mathematical "combining" of primaries (useful so long as the combination is "visible").

Now, regarding the primaries of the color space of the raw data, we can show (through a certain train of mathematical logic, outlined in my longer article) that its primaries must likewise be "hypothetical" - they do not represent any kind of radiation (visible or invisible). But we can still describe the color of visible radiation in terms of "combinations" of them (just as we can with the primaries X, Y, and Z).

In fact, based on analysis of the way we derive color values from the raw representations of pixels, we could plot those primaries (for a particular kind of camera) in the CIE XYZ color space (just as we can plot there the X, Y, and Z primaries themselves).
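A small Python sketch of that last point, under the simplification that the camera-to-XYZ relationship is a single 3×3 matrix (the matrix entries below are invented for illustration; a real matrix would come from characterizing a particular camera):

```python
# Hypothetical 3x3 matrix taking camera-native RGB to CIE XYZ.
# Feeding in the unit vectors (1,0,0), (0,1,0), (0,0,1) reads off the
# XYZ coordinates of the camera's three "hypothetical" primaries -
# that is, where each would plot in the CIE XYZ color space.
CAM_TO_XYZ = [
    [0.41, 0.36, 0.18],
    [0.21, 0.72, 0.07],
    [0.02, 0.12, 0.95],
]

def cam_to_xyz(rgb):
    """Multiply a camera RGB triple by the matrix (row times column)."""
    return tuple(sum(m * c for m, c in zip(row, rgb)) for row in CAM_TO_XYZ)

red_primary = cam_to_xyz((1.0, 0.0, 0.0))  # XYZ of the camera's "R" primary
```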

Best regards,

Doug
 

Doug Kerr

Well-known member
Hi, Andrew,

Well said.

Right. It couldn’t be applied to raw data for obvious reasons. But it’s applied very early in the process. Presumably once the converter “assigns” what it wants to assume for the color space of the data after demosaicing (ProPhoto primaries with TRC 1.0?).

In fact, the spec defines just that color space as the "intermediate" form from which the HSV values are determined. (It must be standardized if the profiles are to be applied consistently by different apps.) (I had missed that aspect of the spec before.)

Of course, the definition I know of for the HSV color space is predicated on some (not well defined) gamma-precompensated R, G, and B coordinates, so it may be more nonsensical than usual running from linear R, G, and B values!

Best regards,

Doug
 