EOS 1Ds Mark III analysis, System gain

Hi folks,

This is the second in a series of technical analyses I performed based on my 1Ds3.

Another useful key figure in the analysis of camera/sensor array performance is System gain. System gain expresses one of the manufacturer's design choices in a single number that quantifies the input signal amplification and the conversion from photons to Data Numbers (DN), also known as Analog-to-Digital Units (ADU), as recorded in the Raw data file. When photons reach the sensor elements, they are converted into charge (each absorbed photon liberates one electron). During the exposure, charge accumulates until either the exposure ends or the storage capacity of that sensel is filled to its maximum.

The data for this analysis was collected from a series of Raw files of incremental uniform light exposures. An attempt was made to keep the exposure as uniform as feasible across the frame but, to increase accuracy, a small crop from the center was used. That uses the light directed most perpendicular to the sensor, avoiding non-uniformity due to the angle of incidence. It also reduces the influence of dust shadows, since dust is more often situated near the edges of the frame.

The shots were made with an EF 100mm f/2.8 Macro lens, (manual) focus set to infinity, at aperture f/5.6, varying only the exposure time between shots. The lens was pointed at a uniformly lit sky (cloudless at the time of shooting), and the lens front was covered with opaline glass to provide a totally featureless, out-of-focus, diffuse, and uniform light source.

The test frames were shot in pairs, at 4 second intervals (to avoid potential noise from the writing action to the CF card). Shooting in pairs reduces the potential effects of dead or hot pixels, pattern noise, and dust: the standard deviation of the difference between the two files, divided by Sqrt(2), equates to total noise at each specific exposure level. The mean brightness was also calculated as a measure of that exposure level.
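The pair-difference procedure can be sketched numerically. This is a simulation with synthetic Poisson frames, not the actual raw data; the mean level and crop size are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-ins for one exposure pair: the same mean signal,
# with independent Poisson (shot) noise in each frame, in electrons.
mean_electrons = 10_000
frame_a = rng.poisson(mean_electrons, size=(400, 400)).astype(float)
frame_b = rng.poisson(mean_electrons, size=(400, 400)).astype(float)

# Subtracting the pair cancels fixed pattern, hot/dead pixels, and dust
# shadows; what remains is the random noise of both frames, so the
# standard deviation of the difference is sqrt(2) times the per-frame
# noise. Dividing by sqrt(2) recovers the single-frame noise.
noise = np.std(frame_a - frame_b) / np.sqrt(2)

# For pure shot noise, sigma is close to sqrt(mean), i.e. about 100 here.
print(round(noise))
```

The same cancellation works for real frames as long as the two exposures are (nearly) identical, which is why the pairs were shot back to back.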

Initially, ISO settings 'L', 100, and 200 were tested; higher ISOs will be tested in the same manner at a later stage. All files were shot at outdoor ambient temperatures of approx. 2 degrees Celsius, but the camera didn't get time to acclimatize, and its internal temperature of course rose a bit as the series of shots progressed. The actual sensor elements used were a central crop of the same 400x400 sensels as used for my other (e.g. Black frame) analyses. That means that for each of the G/R/G/B Bayer CFA filtered sensels, 40,000 samples were available (a quantity which should be enough for statistically relevant sample populations). To allow comparisons with data collected by others, the frequently used IRIS software (version 5.51) was used to read the Raw (non-color-balanced, non-demosaiced) 14-bit CFA data from the files.

Here is a graphical representation of the results for the green filtered sensor elements at an ISO 100 setting:
EOS-1Ds3_GreenGain100.png


And here are the links to the similar graphs for the Red and Blue filtered sensels.
http://www.xs4all.nl/~bvdwolf/temp/OPF/EOS-1Ds3_RedGain100.png
http://www.xs4all.nl/~bvdwolf/temp/OPF/EOS-1Ds3_BlueGain100.png

On the horizontal axis (abscissa) are the 14-bit data numbers that represent recorded brightness in the Raw file, after subtraction of the black-point offset of 1024 (verified to be the same for each channel). On the vertical axis (ordinate) is the number of electrons that were responsible for that exposure.

The number of electrons plotted for each exposure was derived from the noise standard deviation, calculated from the difference between the almost identical exposure pairs and divided by Sqrt(2). That standard deviation represents total noise, (mainly) photon shot noise and read noise; 'subtraction in quadrature' of the read noise (which was quantified earlier for all ISO settings) leaves photon shot noise (caused by the random arrival rate of photons during the exposure time).
Since photon shot noise equals the square root of the number of photons (= electrons) recorded, we can calculate the average number of incident electrons by squaring the noise value.

When we plot the electron count against the average brightness in DN, the slope of the linear regression represents the conversion rate, known as system gain, expressed in electrons/DN or electrons/ADU. The regression line equations also allow us to approximate the saturation level of the sensels, expressed in number of photons or electrons (saturation at approx. DN=14257, black point subtracted).
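The whole photon-transfer procedure can be sketched as a short simulation. All values here are illustrative assumptions (the gain and read noise are simply set to plausible numbers), not measurements; the point is that the regression recovers the gain that was put in:

```python
import numpy as np

rng = np.random.default_rng(0)
true_gain = 4.03      # electrons/DN, an assumed value for illustration
read_noise_e = 12.0   # read noise in electrons, also assumed

dn_levels, electrons = [], []
for exposure_e in np.linspace(2_000, 50_000, 10):
    # A pair of frames at the same exposure: shot noise + read noise.
    a = rng.poisson(exposure_e, 100_000) + rng.normal(0, read_noise_e, 100_000)
    b = rng.poisson(exposure_e, 100_000) + rng.normal(0, read_noise_e, 100_000)
    total_noise_e = np.std(a - b) / np.sqrt(2)
    # Subtract the read noise in quadrature; the remainder is shot
    # noise, and shot noise squared equals the electron count.
    shot_noise_e = np.sqrt(total_noise_e**2 - read_noise_e**2)
    electrons.append(shot_noise_e**2)
    dn_levels.append(np.mean(a) / true_gain)  # mean DN, black point removed

# The slope of electrons vs. DN is the system gain in electrons/DN.
slope, intercept = np.polyfit(dn_levels, electrons, 1)
print(round(slope, 2))  # close to the assumed 4.03
```

In the real measurement the DN values come straight from the Raw file, of course; the division by the gain above only exists because the simulation starts from electrons.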

That results, within the error margin that's inherent to such reverse engineering, in the following saturation or clipping levels:
Red saturation: 4.0839 x 14255 - 539.31 = 57677
Green1 saturation: 4.1307 x 14257 - 934.93 = 57956, or
Green2 saturation: 3.9822 x 14257 - 622.74 = 56480
Blue saturation: 3.9085 x 14255 - 293.97 = 55422
We could therefore assume that the actual average saturation level is close to 57,000 electrons.
Likewise we could assume that the actual average system gain is 4.03 at ISO 100.
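As a quick arithmetic check of those two averages (values copied from the regression results above):

```python
gains = {"red": 4.0839, "green1": 4.1307, "green2": 3.9822, "blue": 3.9085}
sats = {"red": 57_677, "green1": 57_956, "green2": 56_480, "blue": 55_422}

avg_gain = sum(gains.values()) / len(gains)  # rounds to 4.03 electrons/DN
avg_sat = sum(sats.values()) / len(sats)     # about 56,884 electrons
print(round(avg_gain, 2), round(avg_sat))
```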

A system gain of 4.03 at ISO 100 would lead to a so-called 'unity gain' at ISO 400. I'll get back to the potential implications of that in another thread/analysis.

57,000 electrons at saturation can be accurately represented by a 16-bit number (15.8 bits, to be more exact), so the gain of 4 electrons/DN at ISO 100 allows accurate quantization in 14-bit values (if electronic noise is kept low) at ISO 100 (native sensitivity?). It looks like Canon chose the correct parameters for their encoding, but it leaves virtually no room for additional electronic noise. A 16-bit ADC encoding would have provided a safer margin (maybe we can expect an EOS-1Ds Mark III N version in the future that offers that?).
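The bit-depth arithmetic behind that can be checked directly (a sketch using the 57,000-electron figure from the regressions):

```python
import math

full_well = 57_000                  # electrons at saturation, per the regressions
bits_needed = math.log2(full_well)  # bits to count every single electron: ~15.8
print(round(bits_needed, 1))

# At a gain of ~4 electrons/DN, 14 bits span the full well:
# 2**14 * 4 = 65536 electrons > 57000, with little headroom to spare.
print(2**14 * 4)
```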

The system gain of approx. 4.03 at ISO 100 is more or less confirmed by the slopes of the linear regression formulae derived at ISO 200; here they are:
Red: 1.9074 x DN + 28.026
Green1: 1.9221 x DN - 9.0413
Green2: 1.9263 x DN - 68.3
Blue: 1.8939 x DN + 44.694
This leads to an average system gain of 1.91 at ISO 200. It is not exactly half of the ISO 100 gain, as one might expect, but I've seen similar trends in other cameras (maybe deliberately, to accommodate more highlight noise without clipping at higher ISOs?). We'll have to see how that translates to the higher ISOs, to be determined later. For ISO 200 the per-channel gains are very close to each other and can be assumed to be accurate.
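A quick check of the ISO 200 average and its ratio to the ISO 100 gain (slopes copied from the regressions above):

```python
slopes = [1.9074, 1.9221, 1.9263, 1.8939]  # ISO 200 regression slopes (e/DN)
avg = sum(slopes) / len(slopes)
print(round(avg, 2))         # 1.91
print(round(4.03 / avg, 2))  # ratio to the ISO 100 gain: 2.11, not exactly 2
```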

The results for the special ISO 'L' confirm the puzzling read noise similarity between ISO 'L' and ISO 100, that I detected earlier. Here are the linear regressions at ISO 'L':
Red: 4.1085 x DN - 732.86
Green1: 4.0862 x DN - 933.31
Green2: 3.898 x DN -780.42
Blue: 4.0246 x DN - 784.83
Which would lead to an average system gain of 4.03 at ISO 'L', just like at ISO 100, while for true ISO 50 performance one would expect something close to 8.
Speculation: Maybe internally the calculations are done at 16-bit precision but truncated to 14 bits (Emil, John S., let's have your best hunch/insight on this one)?
Conclusion: While ISO 'L' currently doesn't seem to offer any benefit in noise reduction, which it does offer on the 1Ds2 (as if it had approx. ISO 75 performance), it doesn't lose any performance either, because the clipping point is at the same Raw data number levels as at ISO 100. It can be safely used if (abundant) lighting requires it, although there will be no benefits in noise performance.
As I said, it is a bit puzzling. It could be interesting to investigate the ISO '50' versus ISO 100 behavior further on other cameras, although I'd be surprised to find different behavior ...

That's it for now.

Bart
 

Ray West

New member
Hi Bart,

Looking at it another way, a 16-bit number can accommodate a gain of 1 if the maximum is 57,000 electrons (65535 == 16 bits), but the required a/d converters will be either more expensive, slower, or less accurate than 14-bit ones. Also the associated wiring, data storage, etc. will be larger/slower.

A 14-bit number (16383 == 14 bits) can accommodate a maximum gain of about 3.5.
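These figures can be verified with quick arithmetic (a sketch; the 57,000-electron full well is the figure from the regressions earlier in the thread):

```python
full_well = 57_000  # electrons, from the regressions above

# Gain (electrons/DN) at which the full well just fits the number range:
print(round(full_well / (2**16 - 1), 2))  # 16-bit: ~0.87, so a gain of 1 fits
print(round(full_well / (2**14 - 1), 2))  # 14-bit: ~3.48, hence "about 3.5"
```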

I've no idea of the a/d 'jitter' or other errors, and afaik a 'noise voltage' can be subtracted or added at any stage. The whole design will be a compromise, trading one thing off against another.

Now, at the end of the day, if the image is then edited in 8 bit, and the result printed from a jpeg, it will make little difference to the end result if it is 16 or even 10 bit resolution from the sensor.

Also, I think, if you are dealing with a fine resolution conversion, the other noise will have to be correspondingly lower. (You mentioned waiting a few seconds between each shot to allow things to settle - I think your graphs show jitter - clusters of points?- even with taking that care at 14 bit resolution.)

As an aside, some time ago, I experimented with creating colour patches, in an 8 bit colour space, by altering just one value (by one bit). I forget how many bits I needed to alter to see the difference when adjacent to its neighbour. (This sort of gives an idea of what you can get away with, in general sort of photography - of course depending on my eyesight)

I am only trying to give perhaps a more pragmatic reason why things may be as they are, wrt 14 versus 16 bit, I'm not in any way being critical of your interesting tests.

Best wishes,

Ray
 
Hi Bart,
...

I am only trying to give perhaps a more pragmatic reason why things may be as they are, wrt 14 versus 16 bit, I'm not in any way being critical of your interesting tests...

Ray, I appreciate any and all of your remarks on my findings. Good science requires being open to all comments, whether critical or appreciative, and I sincerely am. Sincere comments will, again thanks to all, further our insight.

Bart
 
Now, at the end of the day, if the image is then edited in 8 bit, and the result printed from a jpeg, it will make little difference to the end result if it is 16 or even 10 bit resolution from the sensor.

With the small reminder that editing will be done on gamma-adjusted image data, which tends to amplify shadow differences and reduce highlight differences. That also allows squeezing the higher number of bits into the final numerical precision with minimal rounding errors, and it can help to start with excess precision.

Also, I think, if you are dealing with a fine resolution conversion, the other noise will have to be correspondingly lower. (You mentioned waiting a few seconds between each shot to allow things to settle - I think your graphs show jitter - clusters of points?- even with taking that care at 14 bit resolution.)

The small differences between the exposure pairs are (mainly) caused by shutter-speed variance (causing differences in the electron counts). That will also result in slight differences in the shot noise for each image of a pair. Besides repeating the experiment many times to improve the combined statistics, it's the best we can do without ripping the sensor array out of its enclosure, which would still leave us with a single production sample of the imaging chip ...

Bart
 

Doug Kerr

Well-known member
Hi, Bart,

Very nice report, on an important topic.

The number of electrons plotted for each exposure were derived from the noise standard deviation as calculated from the difference between the almost identical exposure pairs, divided by Sqrt(2).

Remind me how that works. Are we able to deduce the number of electrons (at a certain "point" on our curve) from knowing the standard deviation of the charge for a certain absolute charge? (I think that's the basic principle.)

But if we don't yet know the gain (the ratio between DN and charge), how do we do that? What am I missing?

(Actually, I bumped into a description of this the other day, but I can't locate it!)

Thanks.
 
The number of electrons plotted for each exposure were derived from the noise standard deviation as calculated from the difference between the almost identical exposure pairs, divided by Sqrt(2).

Remind me how that works.

Each of the two uniform exposure image crops has its random noise which can be considered to have a Gaussian distribution. Photon shot noise (which dominates the exposures at these higher brightness levels) has a Poisson distribution which approximates a Gaussian distribution when enough samples are taken. We can quantify the variations around the mean level by the standard deviation.

When we subtract the two identical (except for random noise) images, we eliminate their common elements (mean level and systematic patterns) and are left with the random elements between them, the difference image. To prevent zero clipping of the lower half of the Gaussian, an offset is added.

In e.g. IRIS this can be done by typing the following commands in the command window which can be opened from the toolbar:
>load file1
>sub file2 4000
>stat

Both files are assumed to reside in the IRIS working directory, unless a path is also specified with the filenames. The file extension is assumed to be of the type specified in the preferences setup. The 'stat' command gives common statistical descriptors, which allow verifying that the offset is high enough to keep the minimum above zero (a minimum of zero signals potential clipping). The resulting 'sigma' (standard deviation) value needs to be divided by Sqrt(2) when subtracting 2 images.
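For readers without IRIS, the same load/sub/stat sequence can be approximated in numpy. This is only an illustrative equivalent with synthetic Poisson frames standing in for the two raw crops, not the IRIS API:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-ins for the two raw crops loaded as file1/file2.
file1 = rng.poisson(9_000, size=(400, 400)).astype(float)
file2 = rng.poisson(9_000, size=(400, 400)).astype(float)

offset = 4_000  # the constant added by `sub file2 4000`
diff = file1 - file2 + offset

# `stat`-style check: the minimum must stay above zero, otherwise the
# sigma value would be biased by clipping of the lower Gaussian tail.
assert diff.min() > 0, "offset too small: zero clipping would bias sigma"
sigma = diff.std() / np.sqrt(2)
print(round(sigma))  # near sqrt(9000), about 95, for pure shot noise
```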

Are we able to deduce the number of electrons (at a certain "point" on our curve) from knowing the standard deviation of the charge for a certain absolute charge? (I think that's the basic principle.)

Yes, after correcting the mean level for the black-point offset in the Raw data, and the differences' standard deviation for the read noise. I prefer to use a linear regression because it makes it easier to detect outliers, and the multiple samples make for more robust statistics. The issue with using a single point is that we then assume there is only photon shot noise. Subtracting the read noise gets us closer, but we don't know whether other noise sources are involved, so we might still overestimate the number of electrons. A trend will be more exact.

But if we don't yet know the gain (the ratio between DN and charge), how do we do that? What am I missing.

This reference describes the procedure, and should make it easier to see the connection.

Bart
 

Emil Martinec

New member
A thorough treatment, and nice presentation of the analysis, Bart.

The results for the special ISO 'L' confirm the puzzling read noise similarity between ISO 'L' and ISO 100, that I detected earlier. Here are the linear regressions at ISO 'L':
Red: 4.1085 x DN - 732.86
Green1: 4.0862 x DN - 933.31
Green2: 3.898 x DN -780.42
Blue: 4.0246 x DN - 784.83
Which would lead to an average system gain of 4.03 at ISO 'L', just like at ISO 100, while for true ISO 50 performance one would expect something close to 8.
Speculation: Maybe internally the calculations are done at 16-bit precision but truncated to 14 bits (Emil, John S., let's have your best hunch/insight on this one)?
Conclusion: While ISO 'L' currently doesn't seem to offer any benefit in noise reduction, which it does offer on the 1Ds2 (as if it had approx. ISO 75 performance), it doesn't lose any performance either, because the clipping point is at the same Raw data number levels as at ISO 100. It can be safely used if (abundant) lighting requires it, although there will be no benefits in noise performance.
As I said, it is a bit puzzling. It could be interesting to investigate the ISO '50' versus ISO 100 behavior further on other cameras, although I'd be surprised to find different behavior ...

Bart

The obvious conclusion is that ISO L is ISO 100, one stop overexposed (pulled by 1EV), just as the "high ISO extension" on Canon's is the highest native ISO, pushed by one stop after the fact. A pulled ISO 100 would have the same read noise and gain as ISO 100, just as you have observed; call it ISO 50 if you want. Just like HTP ISO x is really ISO x/2 in the raw data, the camera manufacturers play fast and loose with the ISO standard.
 

John Sheehy

New member
A thorough treatment, and nice presentation of the analysis, Bart.

The obvious conclusion is that ISO L is ISO 100, one stop overexposed (pulled by 1EV), just as the "high ISO extension" on Canon's is the highest native ISO, pushed by one stop after the fact. A pulled ISO 100 would have the same read noise and gain as ISO 100, just as you have observed; call it ISO 50 if you want. Just like HTP ISO x is really ISO x/2 in the raw data, the camera manufacturers play fast and loose with the ISO standard.

It's probably a quandary for the manufacturer: if they increase quantum efficiency in a camera series that used to sport ISO 50 (and barely at that, with reduced highlight headroom), it can no longer support ISO 50 in any meaningful way except through metering. To drop it completely, however, would upset some people and make it appear that the camera has been compromised by losing the low ISO, at least to someone who doesn't understand the current reason for the presence of low ISO on cameras (the inability to capture photons efficiently; of course, future technologies may allow higher total photon counts per unit of area).

The ironic thing is that other than the loss of a stop of potential headroom, ISO 100 pulled to ISO 50 has much cleaner deep shadows relative to metering, and is better than a "real" ISO 50 would be, given the limits in maximum signal to read noise ratios (~3000:1) with current technology.

In the future, prismatic microlens arrays might guide all photons into sensels based on wavelength, and the base ISO may go up to 200 or 250; then ISO 50 might be hard to maintain in any way at all.
 