View Full Version : The scene database

Doug Kerr
July 6th, 2011, 06:08 PM
One of the beloved bedtime stories about the Canon evaluative exposure metering systems (Jerome Marot just recently invoked it in another thread) is that of the "scene database".

The story is usually told like this:

After the scene is examined by the exposure metering system, a database of "scene models" (held in the camera) is searched for one that most closely matches the properties of the scene apparently before the camera. That database entry returns something that the exposure metering system uses to complete the process by which it chooses the "most likely best" photographic exposure for the shot.

But not only do I not know "exactly, for sure" how that works, I don't even have in hand a fanciful story, reaching from end to end, about what might actually happen.

Maybe somebody out there has a credible story, which I'd be glad to hear. I don't look for it to be "the real story". Just a whole story.

For now, let me think a little about the matter, as if I were inventing the scheme from scratch.

At the very top of the thought chain, what would we like to know about the actual scene if we could determine it directly in some practical way?

Perhaps we would like to know the maximum luminance anywhere in the scene, so we could plan an "expose to the right" shot.

Or perhaps we would like to know the maximum luminance and the minimum luminance that is of any importance in capturing all the scene detail, if possible, so we can plan sort of a "zone system" exposure.

Or something else, but it has to be something that could be described in technical terms - it can't be the "mellowness of the scene" or such.

And, once we had defined it, it could be determined directly if we knew the apparent luminance of every pixel spot in the image.

But (in an EOS camera of traditional form) we do not examine, during metering, "every pixel spot" in the image. For example, the maximum luminance in our scene need not fall at any of the metering "segments". So we can't determine the needed properties directly. And that's why we need to have recourse to the magic database.

In effect, we say, "for scenes like this one, what is most likely the maximum luminance (as a ratio to, for example, the average of the measured luminances)?"

"Like this one" means perhaps "having these ratios between 10 of the 30 luminances we measured" (the "search vector").

Then when a match is found, the entry "returns" that ratio, the metering system multiplies that by the average of the values in the search vector, and gets the "equivalent scene luminance" which it enters into the metering equation to determine the photographic exposure (actual Ev) that is best for the shot. (We know that for the special case of a uniform-luminance scene, that equivalent scene luminance is the same as the average measured luminance. That's probably one of the scene models: "all search vector values equal".)
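To make that matching story concrete, here is a toy sketch in Python of the scheme as I imagined it above. Everything here is my invention for illustration (the model entries, the ratio-vector representation, the sum-of-squares distance); it is emphatically not Canon's actual algorithm.

```python
# Hypothetical sketch of the "scene database" matching story above.
# The scene models, the search-vector form, and the distance metric
# are all made up for illustration; Canon's real scheme is secret.

def normalize(readings):
    """Express each metering-segment luminance as a ratio to the average."""
    avg = sum(readings) / len(readings)
    return [r / avg for r in readings], avg

# Toy "scene model" database: each entry pairs a ratio pattern with the
# assumed ratio of the true maximum scene luminance to the measured average.
SCENE_MODELS = [
    {"pattern": [1.0, 1.0, 1.0, 1.0], "max_ratio": 1.0},  # uniform scene
    {"pattern": [2.0, 1.0, 0.5, 0.5], "max_ratio": 3.0},  # bright top (sky?)
    {"pattern": [0.6, 1.8, 1.8, 0.6], "max_ratio": 2.2},  # bright center
]

def equivalent_scene_luminance(readings):
    vector, avg = normalize(readings)
    # Nearest match by sum of squared differences between ratio vectors.
    best = min(SCENE_MODELS,
               key=lambda m: sum((a - b) ** 2
                                 for a, b in zip(vector, m["pattern"])))
    # The matched entry "returns" a ratio; scale the measured average by it.
    return avg * best["max_ratio"]

# Uniform scene: the equivalent luminance equals the measured average,
# matching the special case noted above.
print(equivalent_scene_luminance([100, 100, 100, 100]))  # 100.0
```

Note how the uniform-luminance case falls out of the first model entry, just as suggested above: "all search vector values equal" returns a ratio of 1.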

Well, just some thoughts to fill space until somebody with a real story can bring it to light.

Best regards,


Asher Kelman
July 6th, 2011, 10:00 PM
They can make some guesses as to what's important when there's a backlit person or a football game. However, the big change will come shortly, when Canon and HP release the CMOS sensor that is made up of individually addressed cameras in each sensel. That technology will at last allow highlights never to be blown out and all the shadow detail to be maintained.

They are just so s l o w in giving us the technology they have. They sell enough now, so they don't have to give it out!


Jerome Marot
July 7th, 2011, 12:05 AM
These systems were developed in the late 80s, when processing power was quite limited. The first cameras to introduce this system were the Nikon FA (1983), the Minolta 7000 (1985) and the Canon EOS 650 (1987).

The systems combine information from the focus point with information from the metering system to try to work out what is being photographed. The actual algorithms are kept secret but, basically, they are designed on the assumption that you are taking relatively standard pictures: (family) portraits, simple landscapes. For example, they would determine whether you are shooting a portrait or a landscape from the focus distance, and then minimize the influence of the top part (sky) for a landscape and simply use the metering cell close to the focus point for a portrait. From personal experience with the Minolta honeycomb evaluative system at the time (on the 9xi, which had 4 AF points and could detect whether it was held in portrait or landscape orientation), I would say that:
- it tried to avoid specular reflections (basically, it ignored the brightest cell)
- it minimized the influence of the sky (the top cells)
- if only one AF point was selected, it metered there for standard Caucasian skin (a pain if you photographed people with dark skin...).
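Those three heuristics can be sketched as code. This is purely my guess at the shape of such logic, not Minolta's actual algorithm; the function names, the half weight for sky cells, and the cell layout are all assumptions.

```python
# Rough sketch (my guesses, not Minolta's real algorithm) of the three
# heuristics listed above, for a small set of metering cells.

def evaluative_meter(cells, top_row, af_cell=None):
    """cells: dict of cell_id -> measured luminance.
    top_row: ids of the top (sky) cells.
    af_cell: id of the cell under the active AF point, if one is selected."""
    values = dict(cells)
    # 1. Ignore the brightest cell (assumed specular reflection).
    del values[max(values, key=values.get)]
    # 3. With a single active AF point, meter the subject cell directly
    #    (the real system would also bias for an assumed skin tone).
    if af_cell is not None and af_cell in values:
        return values[af_cell]
    # 2. Halve the weight of the top-row (sky) cells, then average.
    weights = {c: (0.5 if c in top_row else 1.0) for c in values}
    total_w = sum(weights.values())
    return sum(values[c] * weights[c] for c in values) / total_w
```

For example, with cells `{'a': 500, 'b': 100, 'c': 100, 'd': 120}` and `'d'` as a sky cell, the bright cell `'a'` is discarded as specular and the sky cell is down-weighted before averaging.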

With increased processing power came more powerful systems. For example, just about every P&S recognizes faces today (P&S cameras have powerful metering and AF systems, since they use the same sensor array used for imaging). More advanced P&S cameras can even learn to recognize the faces of people in your family and discard the others for metering and AF.

This hints at the reasoning of the engineers who design the system: they consider that most people want to photograph other people, and they design the metering system to correctly expose a backlit face. If you photograph something else, you are probably an advanced photographer who can use manual modes. However, word is that the top of the Nikon and Canon lines is designed with a database incorporating situations standard in professional sports photography. The manual of the EOS 1 line explains, for example, how to program the AF system to follow a swimmer whose head is under the water some of the time.

Doug Kerr
July 7th, 2011, 07:14 AM
Hi, Jerome,

Thank you for that insight. It is very helpful as I try to understand (to the degree that is possible for "outsiders") the general principles involved.

My characterization above did not embrace the prospect that the analysis (guided by "hints" from the "matched" scene model) would in fact be selective with respect to the roles of the different metering segments.

This class of approach of course emerges in the context of working from a relatively-small constellation of metering points.

I'm sure that as we move toward the adoption of "mirrorless" modalities in high-performance cameras, the more direct approach (doubtless already used in compact cameras), of course with increasing sophistication, will be coming into our lives.

Asher, your comment fits in with that prospect.

Best regards,