David Ellsworth
New member
As far as I know, every digital camera currently in existence uses a basic metering philosophy of "try to get the correct exposure", where "correct" means that luminosity percentages in the photo correspond to reflectivity percentages in reality. This metering algorithm (or group of algorithms) seems to be centered around the tenet that, on average, photographic subjects will have an average reflectance of 18% gray, which is about 2.5 stops below 100% white.
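As a quick sanity check on that "2.5 stops" figure: the distance in stops between a reflectance and 100% white is just a base-2 logarithm. A minimal Python sketch (the function name is mine, purely illustrative):

```python
import math

def stops_below_white(reflectance):
    """How many stops below 100% white a given reflectance sits."""
    return math.log2(1.0 / reflectance)

print(round(stops_below_white(0.18), 2))  # 18% gray: ~2.47 stops below white
```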
To my way of thinking, this type of metering hearkens back to film days. On negative film, highlights saturate quite smoothly, so getting a good exposure probably involves centering most of the histogram on a sweet spot of mid-range luminosity. However, digital cameras have a linear response and clip highlights abruptly. So why are we still using the same old metering algorithms?
Exposure-to-the-Right (ETTR) involves manipulating the exposure (ISO, shutter speed, and aperture) so that the entire histogram is used, giving the best signal-to-noise ratio obtainable without clipping (blowing out) any highlights where the photographer wants to preserve detail.
Current DSLRs, including the most high-end ones, obtain lower amplifier noise at higher ISO levels, so ETTR achieves the best SNR even at high ISOs. Eventually there may be a camera that does two passes of readout, one at base ISO (usually 100) and another at high ISO (e.g. 1600). Then shooting at high ISO could be "faked" and highlights would be virtually guaranteed not to clip. This would make ETTR obsolete at high ISOs. However, we're not there yet; all DSLRs still do only one pass of readout. So the benefits of ETTR are pretty much universal. (And will always be, at base ISO.)
Is there any reason not to expose to the right in absolutely every situation? I don't think so. Since ETTR by definition doesn't clip any important highlights, the exposure can always be corrected in post-processing, especially when using a RAW format.
Currently, photographers are forced to "fight" the exposure metering on their digital cameras to achieve ETTR. This either involves going Full Manual, or choosing an exposure compensation that works for the subject being photographed. But what is this exposure compensation, really? As I see it, it's an arbitrary number with no basis in reality; it is a number that corrects for errors in the assumptions of the camera's metering system. The only creative control it gives the photographer is in deciding the luminosity level beyond which it is okay to clip/saturate.
Choosing an exposure compensation that works requires a combination of memory, guesswork, and chimping, and must be constantly updated as the subject lighting and background change. For quick-draw situations, chimping is right out; you need to be able to get the shot before the opportunity is gone. For situations in which you have plenty of time, you can take some test shots and see which areas of the photograph are clipped, and if they are important to you, reduce the exposure accordingly. However, even this method is flawed.
AFAIK, all current DSLRs with Clipped Highlight Preview are actually showing you which highlights are clipped in the RGB representation of the photo after color profile conversion (usually sRGB). So there can be flashing bits of the photo, especially colored areas, that are actually *not* clipped in the RAW representation — highlights that could be recovered without any algorithmic guesswork. The same applies to in-camera histograms.
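To see why the in-camera clipping indicators can cry wolf, consider what white balance alone does to a raw pixel before the sRGB rendering is built. The numbers below are illustrative, not from any particular camera:

```python
RAW_SATURATION = 1.0  # normalized raw clipping point

def apply_white_balance(raw_rgb, wb_multipliers):
    """Multiply each raw channel by its WB gain, clipping at 1.0 the way
    an in-camera rendering effectively does. (Illustrative sketch only;
    a real pipeline also applies a color matrix and tone curve.)"""
    return tuple(min(c * m, 1.0) for c, m in zip(raw_rgb, wb_multipliers))

raw_pixel = (0.85, 0.60, 0.55)   # no channel at raw saturation
daylight_wb = (2.0, 1.0, 1.5)    # typical-ish R/G/B gains (made up)

rendered = apply_white_balance(raw_pixel, daylight_wb)
print(rendered[0])                            # red hits 1.0: flagged as clipped
print(all(c < RAW_SATURATION for c in raw_pixel))  # True: raw data is intact
```

The red channel "clips" in the rendered image and would flash in the highlight preview, even though every raw channel is comfortably below saturation and fully recoverable.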
There is a band-aid solution for this, called Unity White Balance (UniWB). This is seriously flawed, because not only does it make chimping much less fun (all photos will be severely green-tinted) but it doesn't even work. Saturated colors can still be clipped during color profile conversion, when in the RAW representation they are not clipped. And furthermore, the camera's histogram preview will still be affected by its gamma and tone curve, making it much harder to determine exposure compensation amounts.
DSLRs *should* provide an option to display a RAW RGB linear histogram (and AFAIK, none do). But there is more that they can do.
My question is: why do we still not have a DSLR that can *automatically* do ETTR metering? There are some technical considerations, but *any* automatic ETTR would be much quicker and easier to use than what photographers are forced to do without it.
The simplest type of auto-ETTR would involve taking a test shot, and telling the camera to meter the next set of photos based on that test shot. You would have to tell the camera which highlights are acceptable to clip. One method might be a percentage. Telling it "clip 0.00%" would give you photos in which nothing is clipped in the RAW representation of the image, not even a single pixel, assuming that your test shot has the brightest highlights that are going to occur in your photo session. Telling it "clip 1%" might work nicely if you have some occasional specular highlights that don't need to be preserved photometrically; this would allow no more than 1% of the area of the photo to reach RAW saturation.
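The clip-percentage idea above can be sketched in a few lines. Function and variable names here are hypothetical; real firmware would work from the raw histogram rather than a list of pixel values, but the arithmetic is the same:

```python
import math

RAW_SATURATION = 1.0  # normalized raw clipping level

def ettr_ev_shift(test_shot_values, clip_fraction):
    """Given normalized raw values from a test shot, return the EV shift
    that pushes the exposure right while letting at most `clip_fraction`
    of the pixels reach raw saturation. (Sketch; assumes the test shot
    contains the brightest highlights of the session.)"""
    ordered = sorted(test_shot_values)
    # Index of the brightest pixel that must still survive unclipped:
    idx = max(0, math.ceil(len(ordered) * (1.0 - clip_fraction)) - 1)
    anchor = ordered[idx]
    return math.log2(RAW_SATURATION / anchor)

# A dim test shot whose brightest pixel sits at 1/4 of raw saturation:
values = [0.05, 0.10, 0.12, 0.20, 0.25]
print(ettr_ev_shift(values, 0.0))  # "clip 0.00%": +2.0 EV lifts 0.25 to 1.0
```

With a nonzero `clip_fraction`, the anchor moves down to a dimmer pixel, so the shift is larger and the brightest few percent of pixels are allowed to saturate.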
There should be a single button that tells the camera "apply ETTR metering based on the last shot taken". You should also be able to apply this based on earlier shots, during Preview/Playback. The "ETTR button" could share functionality with an "AE-lock button" (i.e., be the same button), with its functionality depending on whether you are in ETTR mode or not.
In traditional metering modes, you directly adjust two out of three of the shutter speed, aperture, and ISO, and the third is chosen by the metering system. (Assume that we already have a proper auto-ISO mode, even though Canon xxD/xxxD DSLRs don't have one, and should.)
In my hypothetical ETTR metering mode, the "exposure compensation" (EV) adjustment would be replaced with an adjustment of absolute EV, or LV (Light Value). Instead of letting the camera measure the LV itself using a metering algorithm, on top of which exposure compensation would traditionally be added, you'd get to choose the LV yourself, directly, and no exposure compensation would be needed. You could then use Aperture+Shutter priority, Aperture+ISO priority, or Shutter+ISO priority, and the camera would use the LV to calculate the third exposure parameter.
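The relation the camera would use is the standard APEX-style one: log2(N²/t) = LV + log2(ISO/100), with LV defined at ISO 100. Given the LV and any two of the three parameters, the third falls out directly. A sketch (my own function, not camera firmware; no rounding to standard steps):

```python
import math

def solve_third_parameter(lv, aperture=None, shutter=None, iso=None):
    """APEX-style relation: log2(N^2 / t) = LV + log2(ISO / 100).
    Supply exactly two of aperture N, shutter t (seconds), and ISO;
    returns the missing one."""
    if iso is None:
        return 100.0 * 2 ** (math.log2(aperture ** 2 / shutter) - lv)
    if shutter is None:
        return aperture ** 2 / 2 ** (lv + math.log2(iso / 100.0))
    if aperture is None:
        return math.sqrt(shutter * 2 ** (lv + math.log2(iso / 100.0)))

# LV 12 (roughly bright overcast), f/5.6 at ISO 100 -> required shutter:
t = solve_third_parameter(12, aperture=5.6, iso=100)
print(round(1 / t, 1))  # ~130.6, i.e. about a 1/125 s shutter
```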
This should ideally be combined with a per-lens calibration of the actual light transmitted at each aperture, and if possible, continuous adjustment of shutter speed (as opposed to stepwise adjustment) so that ETTR can be precise. But this is probably optional and not necessary for a very usable and practical auto-ETTR implementation (without it, the algorithm would need to do some rounding-down in certain circumstances to ensure the clipping percentage is not exceeded).
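The rounding-down I have in mind is simple: snap the ideal exposure time to the next *shorter* discrete step, so the chosen exposure never exceeds the ideal one and the clipping budget is never blown. An illustrative sketch, assuming 1/3-stop steps spaced as exact powers of two (real cameras use slightly uneven nominal values):

```python
import math

def round_exposure_down(ideal_time, step_stops=1 / 3):
    """Snap an exposure time (seconds) to the nearest discrete step that
    does not exceed it, i.e. err on the side of less exposure."""
    steps = math.floor(math.log2(ideal_time) / step_stops)
    return 2 ** (steps * step_stops)

t = round_exposure_down(1 / 130)  # ideal time from the ETTR calculation
print(t <= 1 / 130)  # True: the chosen step never exposes more than the ideal
```

The cost is at most 1/3 stop of headroom given away, which is exactly the imprecision that continuous shutter adjustment would eliminate.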
Pressing the ETTR button would cause the LV to be automatically adjusted. Otherwise the LV would stay locked and would be manually adjustable.
(A more advanced ETTR metering mode might somehow use the matrix metering sensors that are standard in all SLRs that use a half-silvered mirror (i.e. basically all of them), to do a sort of guesswork-auto-ETTR that doesn't require taking a test shot, but this would be complicated, since in most situations the highlights are smaller than the metering sensors. Anything like this would just be icing on the cake, and should definitely be *optional* and not replace what I have described previously. It may not even be possible to do this usefully, in which case I would not miss its absence at all.)
What do you think of this idea? Has anybody else thought of it? Is there a reason it has not been implemented in DSLRs?
Ideally this would be implemented through firmware updates to existing cameras, as well as future cameras being given the feature, but the best we can hope for is probably the latter. So how can we make this happen?