Noise vs EV

Emil Martinec

New member
I've been having some fun poking around in the noise of Canon DSLRs, and found something puzzling. I got to the point of measuring the Poisson noise. I took pairs of shots of a cloudless sky with my 20D and 100-400L @ 400mm, f/7.1, ISO 100; the camera in manual focus at MFD with IS off. Shutter speeds ranged from well beyond saturation down to 1/8000 sec, in 1/3-stop steps. I then analyzed the raw data in IRIS, separating the CFA into g2, b, r, g1 channels. Taking the difference of the two exposures for the g2 subarray, I recorded the standard deviation and the average raw value (subtracting the black point of 128). I was able to reproduce classic Poisson noise (the first plot below), with the best fit curve giving read noise of 2.7 ADU and 11.7 electrons/ADU. The latter is a little low relative to other measurements, which arrive at a typical value of 12.4 electrons/ADU, but not far off.
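For anyone who wants to reproduce the measurement, here is a minimal sketch of the pair-difference method in Python/numpy (illustrative only; it assumes the green subarrays have already been extracted from the raws, e.g. with IRIS or dcraw, and all names are made up for the example):

    import numpy as np

    def pair_noise(g_a, g_b, black=128):
        """Signal and random noise from two identical exposures of a uniform patch.

        Differencing two frames cancels anything fixed from shot to shot
        (PRNU, dust, vignetting); the std dev of the difference is sqrt(2)
        times the per-frame random (shot + read) noise.
        """
        diff = g_a.astype(float) - g_b.astype(float)
        noise = diff.std() / np.sqrt(2.0)                 # ADU, per frame
        signal = 0.5 * (g_a.mean() + g_b.mean()) - black  # ADU, black subtracted
        return signal, noise

    # Synthetic self-check using the values fitted below:
    # gain 11.7 e-/ADU, read noise 2.7 ADU
    rng = np.random.default_rng(0)
    gain, read_adu, s_adu = 11.7, 2.7, 1000.0
    def frame():
        electrons = rng.poisson(s_adu * gain, (500, 500))
        return electrons / gain + rng.normal(0, read_adu, (500, 500)) + 128
    s, n = pair_noise(frame(), frame())
    print(s, n, np.sqrt(s / gain + read_adu**2))  # measured vs. model prediction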

More puzzling is that I did the same comparison by subtracting the two green channels of a single image (g2-g1) and using the standard deviation as a measure of the noise. This came out rather higher than for the difference of two images taken in quick succession. Subtracting in quadrature the Poisson noise of the difference of two images from the noise obtained from (g2-g1) of a single image, and taking the square root, yields the second plot below -- within experimental error, quite linear with respect to exposure. Note that this residual noise is quite comparable in magnitude to the Poisson noise.

Again, the first plot below is the rms fluctuation in ADU of the difference of two successive images, versus signal in ADU. The second plot is the rms fluctuation in ADU of the difference of the two green subarrays of the Bayer pattern of a single image from the pair, versus signal in ADU, after subtracting off the Poisson noise.

[Image: 20dnoise_vs_EV.jpg -- the two noise plots described above]


The approximate best fit to the noise in ADU is (here x is the average signal in ADU with the black point subtracted)

noise = Sqrt[7.1 + .086 x + (.0058 x)^2]
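In terms of the standard sensor noise model, the three fitted coefficients decompose as

noise^2 = R^2 + x/g + (k x)^2

with R^2 ≈ 7.1 (the read noise R ≈ 2.7 ADU found above), 1/g ≈ 0.086 (g ≈ 11.7 electrons/ADU), and k ≈ 0.0058 a fractional, signal-proportional contribution of about 0.6%.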

So my question is: what is the potential source of this noise, and is this something well known that I have overlooked? What is needed is a noise whose amplitude is proportional to signal, and that drops out if one takes the difference of two images. This rules out quantities that vary from image to image, since such an effect would produce more noise in the difference of two images than in (g2-g1) of a single image. Things I can think of:

1. My sensor is incredibly dirty at the pixel level, so that the transmissivity of incident light to the sensels varies at the 0.6% level between pixels diagonally adjacent to one another, across the entire sensor.

2. The gain applied at individual photosites varies at the 0.6% level, so that (g2-g1) of a single image has a variance above and beyond that due to Poisson noise.

Both these effects would produce a "noise" or variance in the recorded exposure that is proportional to exposure, as I have observed (and as distinct from Poisson noise, which varies as the square root of exposure). Any other suggestions or illuminating comments are most welcome.
 
Taking the difference of the two exposures for the g2 subarray, I recorded the standard deviation and the average raw value (subtracting the black point of 128). I was able to reproduce classic Poisson noise (the first plot below), with the best fit curve giving read noise of 2.7 ADU and 11.7 electrons/ADU. The latter is a little low relative to other measurements, which arrive at a typical value of 12.4 electrons/ADU, but not far off.

Well done; it reconfirms that today's image sensors are photon shot noise limited. This would also suggest that your 20D's 'unity gain level' is reached at around ISO 1000-1250 for the green channel (a similar check at e.g. ISO 800 would somewhat confirm that hypothesis). Importantly, the difference between two exposures will eliminate/reduce systematic contributions to noise (vignetting, mildly non-uniform lighting, dust, and to a certain extent hot/dead pixels).
You didn't specify, but I tend to take the two consecutive exposures at a 4-second interval. That should be long enough to avoid CF card writing activity and LCD influence while exposing the second frame, and short enough to reduce the probability of light source variation (blue skies are useful, although not spectrally uniform).

More puzzling is that I did the same comparison by subtracting the two green channels of a single image (g2-g1) and using the standard deviation as a measure of the noise. This came out rather higher than for the difference of two images taken in quick succession.

As you have noticed, the problem with subtracting different sensels is that things like e.g. banding noise, dust, Bayer CFA variations, and per-sensel amplification variation start to play an increasingly important role.

Subtracting in quadrature the Poisson noise of the difference of two images from the noise obtained from (g2-g1) of a single image, and taking the square root, yields the second plot below -- within experimental error, quite linear with respect to exposure. Note that this residual noise is quite comparable in magnitude to the Poisson noise.

The magnitude also surprises me at first glance. Are you sure you aren't subtracting two standard deviations (Mathematica's In[111]), which would overstate random noise by a factor of Sqrt(2)? Math isn't my forte, but I know just enough to be dangerous...

Any other suggestions or illuminating comments are most welcome.

Not all sensors produce meaningful exposure data over the entire surface that e.g. IRIS reads. It might help to use only a crop from the center of the image, just make sure that it doesn't invalidate the g2, r, b, g1 ordering. That would also reduce the influence of effects like 'Amp glow', although it might increase the influence of dust and hot sensels a bit more.
Also, although I'm not suggesting there is one in your case, there can be an influence from the lens electronics. That could be verified by comparing with a different lens. And as always, temperature differences between shooting sequences play a role, although that wouldn't have much influence on your above-mentioned evaluation.

Bart
 

Emil Martinec

New member
You didn't specify, but I tend to take the two consecutive exposures at a 4-second interval. That should be long enough to avoid CF card writing activity and LCD influence while exposing the second frame, and short enough to reduce the probability of light source variation (blue skies are useful, although not spectrally uniform).

I didn't wait; rather, I took the images in quick succession. But the buffer was always emptying as I took the series of shots at different shutter speeds; in fact, I often had to wait for the buffer to clear a shot or two as I flipped the exposure down another 1/3 stop. So that was constant across the sequence. I know, another source of variability, but I was impatient -- yeah, sloppy experimental technique.

BTW, one effect that can be discarded is thermal noise, which would mimic the above effect via my choice of methodology. I varied exposure by changing shutter speed (from 1/8000 sec down to 1/10 sec, with neutral exposure being about 1/400). Thermal noise, being proportional to exposure time, would naturally increase in proportion to my total exposure. However, the fact that the difference of images revealed no such noise while the difference g2-g1 did, means that the result can't come from that. Anyway, it's my understanding that thermal noise is negligible for such short exposures.

As you have noticed, the problem with subtracting different sensels is that things like e.g. banding noise, dust, Bayer CFA variations, and per-sensel amplification variation start to play an increasingly important role.

I think this is the main issue. A little more googling revealed what is called "pixel response non-uniformity" (just google PRNU), which is a variation in the response of sensels, typically up to a few percent. I think this is what I measured. I have some more data from ISO 800 that I hope to have time to analyze soon, in the same way. That should help tease out whether the effect is indeed the same across ISOs. Since it is proportional to signal, while shot noise is proportional to its square root, this PRNU "noise" (really a form of fixed pattern noise) should become less important relative to shot noise as the signal is lowered, and I should find less of it at high ISO, if they are comparable for bright illumination at low ISO.

BTW, I think this PRNU could be eliminated by dividing by a flat-field obtained from a large number of stacked exposures of uniform intensity. I don't know if people do that in astro, since they always have low intensity. It also wouldn't affect shadows much in ordinary photography, but I wonder if it's a major contributor to noisy skies in landscape shots.
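As a sketch of that flat-field idea (illustrative only, with made-up function names; real astro pipelines also subtract dark and bias frames first, as described later in this thread):

    import numpy as np

    def master_flat(frames, black=128):
        """Average many uniform exposures: shot noise averages away,
        leaving the per-sensel response map (the PRNU)."""
        stack = np.mean([f.astype(float) - black for f in frames], axis=0)
        return stack / stack.mean()   # normalize so the average response is 1

    def flat_correct(raw, flat, black=128):
        """Divide out the response map to suppress PRNU in an image."""
        return (raw.astype(float) - black) / flat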


The magnitude also surprises me at first glance. Are you sure you aren't subtracting two standard deviations (Mathematica's In[111]), which would overstate random noise by a factor of Sqrt(2)? Math isn't my forte, but I know just enough to be dangerous...

No, the math is OK.


Not all sensors produce meaningful exposure data over the entire surface that e.g. IRIS reads. It might help to use only a crop from the center of the image, just make sure that it doesn't invalidate the g2, r, b, g1 ordering. That would also reduce the influence of effects like 'Amp glow', although it might increase the influence of dust and hot sensels a bit more.

Results were always taken from a crop in the middle of the frame, taking care to avoid dust spots etc as much as possible. The average of several such crops was used.

I'll try to go through the rest of my shots and see what's up at ISO 800.
 

Asher Kelman

OPF Owner/Editor-in-Chief
So, Emil and Bart,

Can you imagine the circumstances in which PRNU "noise" could be at its worst, and its impact on photography?

Asher
 
So, Emil and Bart,

Can you imagine the circumstances in which PRNU "noise" could be at its worst, and its impact on photography?

Asher

Hi Asher,

It depends on how the Raw converter handles it, and of course the actual magnitude of the differences. As always, smooth gradients suffer most (because it is easier to see there), so I'd expect e.g. blue skies to show a somewhat mottled appearance (it affects blocks of output pixels due to demosaicing) if the raw converter fails to handle it well. I wouldn't worry too much about it, but it is good to know it exists.

Bart
 

Asher Kelman

OPF Owner/Editor-in-Chief
Hi Asher,

It depends on how the Raw converter handles it, and of course the actual magnitude of the differences. As always, smooth gradients suffer most (because it is easier to see there), so I'd expect e.g. blue skies to show a somewhat mottled appearance (it affects blocks of output pixels due to demosaicing) if the raw converter fails to handle it well. I wouldn't worry too much about it, but it is good to know it exists.

Bart

Bart,

This is interesting since blue skies and continuous tone in steel or aluminum metal surfaces in architecture apparently show more noise with Phase One backs according to pictures Rainer has shared with me.

Asher
 
Bart,

This is interesting since blue skies and continuous tone in steel or aluminum metal surfaces in architecture apparently show more noise with Phase One backs according to pictures Rainer has shared with me.

It's hard to comment without seeing the affected images and analyzing the hardware, because there are many possible causes for that noise. Light (photon arrival rate) itself is noisy, and the camera design (CCD well depth and electronic noise) will pose some restrictions. Postprocessing has its effects as well, but I wouldn't automatically attribute what you've seen to the non-uniformity in sensel response mentioned above.

If it's an issue that needs a resolution, I'm for hire ...
;-)


Bart
 

Emil Martinec

New member
So, Emil and Bart,

Can you imagine the circumstances in which PRNU "noise" could be at its worst, and its impact on photography?

Asher

Since it is proportional to EV while photon shot noise goes as the square root of EV, PRNU dominates in the brightest tones. On my 20D it is the dominant source of noise above midtones at ISO 100. I haven't had any time to test it on my 1D3 so far. But only at low ISO, since when you raise the ISO you are throwing away high illumination components of the scene (in terms of numbers of photons). By the time you get to ISO 1600, PRNU is totally negligible as a component of tonal fluctuations.

So as Bart mentioned, I'd expect to see it in skies and other bright areas at low ISO. Unless there is a systematic effect in sensel response as a function of spatial location, the fluctuations should be randomly distributed spatially and look just like photon shot noise which is for all practical purposes Gaussian and spatially random.
 

Asher Kelman

OPF Owner/Editor-in-Chief
Since it is proportional to EV while photon shot noise goes as the square root of EV, PRNU dominates in the brightest tones. ... So as Bart mentioned, I'd expect to see it in skies and other bright areas at low ISO. Unless there is a systematic effect in sensel response as a function of spatial location, the fluctuations should be randomly distributed spatially and look just like photon shot noise which is for all practical purposes Gaussian and spatially random.

Eric,

Have you any examples showing PRNU at ISO 200 that is abolished at ISO 400 or 800?

Asher
 

Emil Martinec

New member
EMIL,

Have you any examples showing PRNU at ISO 200 that is abolished at ISO 400 or 800?

Asher

No, but you can read it off my original post. The top plot of shot noise equals the bottom plot of PRNU at a raw level of around 2400 or so; PRNU exceeds shot noise above this value, and is less than shot noise below it. So PRNU exceeds shot noise only in the upper half of the raw data at ISO 100. At ISO 200, this half of the raw data is amplified beyond clipping, so PRNU is already less than shot noise at this ISO, since you will only be able to record the bottom half of the two plots in the available range of raw levels (i.e., when doubling the ISO to 200, the part of the graphs between 0 and 2047 gets expanded to fill the whole 0-4095 range of ISO 200 data). For every doubling of ISO, the relevance of PRNU goes down by a factor of sqrt(2). Since noise adds in quadrature, PRNU very quickly becomes irrelevant. By ISO 1600 it's less than a 5% correction to shot noise.
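Spelling out the crossover arithmetic from the fit in the opening post: shot noise and PRNU contribute equally where 0.086 x = (0.0058 x)^2, i.e. x = 0.086/0.0058^2 ≈ 2560 ADU, consistent with the "around 2400" read off the plots. And since the ratio PRNU/shot = 0.0058 x / Sqrt[0.086 x] grows as the square root of x, halving the maximum recordable signal (one stop of ISO) shrinks the ratio by sqrt(2), which is the factor quoted above.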

I'm not sure I explained myself well, if not I can try again.
 

John Sheehy

New member
No, but you can read it off my original post. The top plot of shot noise equals the bottom plot of PRNU at a raw level of around 2400 or so; PRNU exceeds shot noise above this value, and is less than shot noise below it. So PRNU exceeds shot noise only in the upper half of the raw data at ISO 100. At ISO 200, this half of the raw data is amplified beyond clipping, so PRNU is already less than shot noise at this ISO, since you will only be able to record the bottom half of the two plots in the available range of raw levels (i.e., when doubling the ISO to 200, the part of the graphs between 0 and 2047 gets expanded to fill the whole 0-4095 range of ISO 200 data). For every doubling of ISO, the relevance of PRNU goes down by a factor of sqrt(2). Since noise adds in quadrature, PRNU very quickly becomes irrelevant. By ISO 1600 it's less than a 5% correction to shot noise.

I'm not sure I explained myself well, if not I can try again.

This is something you see more in some cameras than others, correct? I see something like this in my XTi. SNR is limited to about 100:1, even in the highlights of ISO 100, where it should be the square root of 40000 photons, or 200:1.

I have found that for some cameras, it is necessary to measure photon collection from shadow areas of low ISOs or from high ISOs, as the highlights of low ISOs are unreliable. For the 1Dsmk3, going by bright areas of ISO 100, I could only project about 30K photons; going into the shadows and higher ISOs, however, I can project about 60K at the level of ISO 100 saturation.
 

Emil Martinec

New member
This is something you see more in some cameras than others, correct? I see something like this in my XTi. SNR is limited to about 100:1, even in the highlights of ISO 100, where it should be the square root of 40000 photons, or 200:1.

I have found that for some cameras, it is necessary to measure photon collection from shadow areas of low ISOs or from high ISOs, as the highlights of low ISOs are unreliable. For the 1Dsmk3, going by bright areas of ISO 100, I could only project about 30K photons; going into the shadows and higher ISOs, however, I can project about 60K at the level of ISO 100 saturation.

The only other camera I've investigated is the 1D3, where I had someone take test shots for me. Again for bright areas at ISO 100, PRNU and shot noise were comparable in magnitude. I suspect it's about the same for Canon cameras, given that and your comments.

It's easy enough to check with the camera in hand -- take two successive shots of a bright uniform OOF subject (a white wall, or cloudless sky), measure the noise of the difference image and compare to the difference of the green subarrays for one of the images.

The measurements indicate that PRNU amounts to a gain variation of 0.5-1 percent; that seems like pretty good uniformity of sensel characteristics, no?
 

John Sheehy

New member
The only other camera I've investigated is the 1D3, where I had someone take test shots for me. Again for bright areas at ISO 100, PRNU and shot noise were comparable in magnitude. I suspect it's about the same for Canon cameras, given that and your comments.

It's easy enough to check with the camera in hand -- take two successive shots of a bright uniform OOF subject (a white wall, or cloudless sky), measure the noise of the difference image and compare to the difference of the green subarrays for one of the images.

The measurements indicate that PRNU amounts to a gain variation of 0.5-1 percent; that seems like pretty good uniformity of sensel characteristics, no?

You're talking about gain now; I thought you were talking about std deviation. I have to say, I did not follow your original post. I did not see in the charts what you seemed to be talking about, and went back to it a few times, and just drew a blank each time. Perhaps some kind of assumption is needed to follow the train of thought, and my mind tends to be especially reluctant to make assumptions about what other people mean. I'm a "what do you mean by ...?" kind of guy.

If you're talking about pure gain, it's fairly common for the two green quarter-images to have slightly different gain, as alternate lines are often at two different general levels of gain. My 10D has a 1.9% difference in the mean of the two green quarter-images, always. In order to do any real math between the two, I would think that rescaling one of the channels would be necessary. I've noticed that the Canon converter, DPP, seems to look at the means and scale internally. I don't see the artifacts in that converter when boosting contrast to extremes, as would occur in other conversions.

If you mean that the green quarter-images have a fixed noise in their difference, which repeats in different exposures, then that would be a repeating noise that could be calibrated out. One of my peeves with RAW converters is that they do not bother to learn your camera, as an option. Only do-it-yourself astrophotography software allows that type of thing, and their conversions don't always have the best color or demosaicing options.

If you mean that there is a noise in the difference between the two green quarter images that is greater than between the same green quarter-image from two successive exposures, then that is truly an interesting phenomenon. I'll have to look at my cameras when I have the time, and see how they do in these regards.
 

John Sheehy

New member
Well done; it reconfirms that today's image sensors are photon shot noise limited.

Why do you need to reconfirm that shot noise is usually the main noise in highlights? That's all it demonstrates, really. It demonstrates that the highlights' noise is mainly shot noise. It does not demonstrate at all that current sensors are shot noise limited, because his lowest tone was stops above middle grey in the green channel, and even further above it in the red and blue channels.

I cannot fathom how anyone can take Roger's statement seriously. The biggest obstacle to low-light shooting with current sensors is read noise, especially random banding noise. To declare that shot noise is the major limit is ridiculous. Shot noise, ADU for ADU, is much less distracting and obscuring of details, especially in the deep shadows, where shot noise alone would allow black to be black, while read noise forces it to be mottled.

This would also suggest that your 20D's 'unity gain level' is reached at around ISO 1000-1250 for the green channel (a similar check at e.g. ISO 800 would somewhat confirm that hypothesis).

How can anyone take the concept of unity-gain seriously, when the real world value of an ADU is TOTALLY ARBITRARY. Does changing the bit depth change the unity gain? Try to look past coincidences of momentary history, and into more meaningful models. Are you going at unity speed on the highway at a mile per minute, or a kilometer per minute? Or why not a meter per minute, or a yard per hour?

"Unity Gain", to me, sounds like academic fluff. What is happening is a ratio, not a gain. ADUs are not electron volts; they are arbitrary ratios to them.
 

John Sheehy

New member
Remind us who Roger is and what he said, I'm lost!

Roger Clark (http://www.clarkvision.com). I disagree with several of his conclusions from his experiments, and he is the most-quoted person on digital camera image quality, at a technical level.

He talks about the significance of "Unity gain". He says that current cameras are "shot-noise limited", which, to me, implies that read noise is not a big issue, and it clearly is. He is fixated on the quality of the individual pixel, and does not recognize that many more inferior pixels can give a better image than fewer, but "cleaner" pixels, as bigger pixels generally just redistribute noise into a bigger grain. He says that a sensor that had so many photosites that each would have a full-well capacity of 1 photon would have no dynamic range (despite the fact that this is a lot like what real light is, outside of retinas and cameras, and DR does not get lost in translation there), and he judges pixel size based on the full images of cameras with different sensor sizes. These things have people repeating, with authority, things which are simply not true or meaningful.
 
Roger Clark (http://www.clarkvision.com). I disagree with several of his conclusions from his experiments, and he is the most-quoted person on digital camera image quality, at a technical level.

He talks about the significance of "Unity gain". He says that current cameras are "shot-noise limited", which, to me, implies that read noise is not a big issue, and it clearly is.

I think you are reading more into his words than he says. The shot-noise limitation clearly refers to the situation where there are enough photons, as with an expose-to-the-right scenario. That doesn't imply what you are apparently reading into it.

Unity gain is a totally different concept, and Roger Clark treats it as such. It's not fair to put words or conclusions in his mouth that he didn't say or intend, especially when he's not around to clarify misinterpretations of what he really said.

He is fixated on the quality of the individual pixel, and does not recognize that many more inferior pixels can give a better image than fewer, but "cleaner" pixels, as bigger pixels generally just redistribute noise into a bigger grain.

He's not fixated on anything, in fact he's a man of science (member of the Cassini/Huygens team involved in the spectral analysis of Saturn and its moons) and an avid nature photographer. He's only too grateful to change his position if proven wrong, because that furthers progress (AKA peer review in academic circles). From his background in astrophysics he recognizes the importance of optimizing photon collection, which gains in importance when photons are in limited availability.

I'll leave your interpretations of what Roger may or may not have said for what they are, but I'm willing to discuss relevant matters. I tend to avoid discussions about people when they are not around.

Your point on the distinction between individual pixel quality and collective pixel quality (simplification is mine) is interesting enough to merit discussion, without flaming others.

Bart
 
Why do you need to reconfirm that shot noise is usually the main noise in highlights? That's all it demonstrates, really.

Proper science needs to verify that certain assumptions are still valid before drawing new conclusions based on those assumptions. It is good to establish that the noise characteristics of the new generation of Canon CMOS devices still follow the laws of nature, and that there is no artificial modification going on (such as non-linear noise reduction algorithms before Raw data collection).

It demonstrates that the highlights' noise is mainly shot noise.

And it is good to have that established before drawing conclusions about per-ISO gain.

I cannot fathom how anyone can take Roger's statement seriously.

Roger has stated that sensors are already photon shot noise limited at levels somewhat above the lowest brightness levels, which are themselves predominantly limited by read noise. I see no problem with such an observation.

How can anyone take the concept of unity-gain seriously, when the real world value of an ADU is TOTALLY ARBITRARY.

You seem to forget statistics. Unity gain (a concept that's well known in many fields involved with signal amplification) simply means that input equals output, no modification to the signal strength. In our field of photosensitive sensor technology it describes the amplification level where an average change of one photon (= one photoelectron) results in a change of one analog-digital unit (ADU) or data number (DN). That is an important level controlled by the ISO setting on our cameras, because further amplification will not lead to more information; it probably will lead to more noise.

Does changing the bit depth change the unity gain?

No. Massaging the numbers after the actual shooting has no effect on the conclusions that can be drawn from the actual capture amplification at the ADC quantification bit depth. In the only too common apples-and-oranges comparisons it may conceptually help to scale all values, including unity gain, but that has little to do with amplification.

What does change the unity gain is the amplification used to linearly span the range from zero exposure to saturation, the latter determined by the 'full well' storage capacity of a sensel. There it is important to note the quantification bit depth, because it will impact the unity gain level.

Bart
 

Emil Martinec

New member
You're talking about gain now; I thought you were talking about std deviation. I have to say, I did not follow your original post. I did not see in the charts what you seemed to be talking about, and went back to it a few times, and just drew a blank each time. Perhaps some kind of assumption is needed to follow the train of thought, and my mind tends to be especially reluctant to make assumptions about what other people mean. I'm a "what do you mean by ...?" kind of guy.

Sorry, the graphs in my initial post are unlabelled, and I should have taken the time to put the labels into Mathematica before grabbing the output. But I did state in the text above the plots what was being plotted.

If you're talking about pure gain, it's fairly common for the two green quarter-images to have slightly different gain, as alternate lines are often at two different general levels of gain. My 10D has a 1.9% difference in the mean of the two green quarter-images, always. In order to do any real math between the two, I would think that rescaling one of the channels would be necessary. I've noticed that the Canon converter, DPP, seems to look at the means and scale internally. I don't see the artifacts in that converter when boosting contrast to extremes, as would occur in other conversions.

PRNU, as I understand it, is a variation from sensel to sensel of their response (output voltage) for a given input light exposure. I thought that this was due to variability in the gain of the on-chip amplifiers, but I suppose it could have other sources (differing physical characteristics of the sensel, etc). Regardless, the bottom line is that a different raw level will be output for a given number of photons input, from sensel to sensel.

What I did was to compare the standard deviation of the difference of two images to the standard deviation of a single image, in a uniform OOF area. It was long enough ago for me to start losing the details, but I don't think it made any difference, in measuring the noise in the single image, whether I looked at a single green subarray or took the difference of the two green subarrays. I usually do the latter in order to remove possible bias from slow gradients across the image, but in small enough patches that's not an issue. The std dev of the difference of the g1 subarrays of two successive images will contain only the photon shot noise; the std dev of the difference of the g2 and g1 subarrays of a single image will contain both shot noise and PRNU. I subtracted the shot noise of the first measurement from the total noise of the second measurement (in quadrature, of course) in order to get at the PRNU, and that is the second plot of the OP. It is nicely linear with respect to exposure, so I think that says it's due to a variability in the gain of the sensels. Could this be due to a difference in the gain of the two subarrays, while within a subarray the gains of individual sensels are identical? I don't think such a scenario would generate the nice linear data of the second plot, but I'll have to think about that more.
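As code, the procedure just described might look like this (a sketch, assuming the three subarrays are already cropped to the same uniform patch):

    import numpy as np

    def prnu_fraction(g1_a, g1_b, g2_a, black=128):
        """Separate PRNU from random noise per the method described above.

        g1_a, g1_b: the same green subarray from two successive exposures
        g2_a:       the other green subarray of the first exposure
        """
        g1_a, g1_b, g2_a = (x.astype(float) for x in (g1_a, g1_b, g2_a))
        random = (g1_a - g1_b).std() / np.sqrt(2)  # shot + read; PRNU cancels
        total = (g2_a - g1_a).std() / np.sqrt(2)   # shot + read + PRNU
        prnu = np.sqrt(max(total**2 - random**2, 0.0))  # quadrature subtraction
        return prnu / (g1_a.mean() - black)        # fractional gain variation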

If you mean that the green quarter-images have a fixed noise in their difference, which repeats in different exposures, then that would be a repeating noise that could be calibrated out. One of my peeves with RAW converters is that they do not bother to learn your camera, as an option. Only do-it-yourself astrophotography software allows that type of thing, and their conversions don't always have the best color or demosaicing options.

If you mean that there is a noise in the difference between the two green quarter images that is greater than between the same green quarter-image from two successive exposures, then that is truly an interesting phenomenon. I'll have to look at my cameras when I have the time, and see how they do in these regards.

Yes, I am suggesting that, because the noise is proportional to EV, it is the result of variation in the gain of the sensels. And you can calibrate it out. I thought this is what astrophotographers do when they divide out by a "flat field" -- average a whole bunch of uniform exposures to get rid of the shot noise, and all that is left is the PRNU. This gives you the flat field; you can then divide any image by the flat field image to remove the PRNU.
 

Emil Martinec

New member
You seem to forget statistics. Unity gain (a concept that's well known in many fields involved with signal amplification) simply means that input equals output, no modification to the signal strength. In our field of photosensitive sensor technology it describes the amplification level where an average change of one photon (= one photoelectron) results in a change of one analog-digital unit (ADU) or data number (DN). That is an important level controlled by the ISO setting on our cameras, because further amplification will not lead to more information; it probably will lead to more noise.

No. Massaging the numbers after the actual shooting has no effect on the conclusions that can be drawn from the actual capture amplification at the ADC quantification bit depth. In the only too common apples-and-oranges comparisons it may conceptually help to scale all values, including unity gain, but that has little to do with amplification.

What does change the unity gain is the amplification used to linearly span the range from zero exposure to saturation, the latter determined by the 'full well' storage capacity of a sensel. There it is important to note the quantification bit depth, because it will impact the unity gain level.

Bart

Bart, I don't follow you here. The "gain" here is not a gain per se (a gain is the ratio of two like quantities, e.g. a small input voltage is amplified to a large output voltage; the ratio of these two voltages is a pure number, which is the gain); rather, the "gain" in the present context is a conversion factor -- how many electrons at capture translate into an ADU after quantization. This use of "gain" has units, and so John's analogy is rather apt -- if we replace the conversion from electrons to ADU by the translation of time into distance by a moving object, unity velocity depends on whether you're talking about feet per second, kilometers per hour, etc.

Consider the new Nikons, which have the option of 12-bit or 14-bit encoding. According to the definition of unity gain as I understand it, a camera having a unity gain of X in 12-bit encoding would have a unity gain of X/4 in 14-bit encoding. I have often read the statement that there is no advantage to setting the ISO above unity gain; since there is no way to "divide an electron" into smaller bits, having "more than one ADU per electron" is pointless, according to this line of reasoning. But then the logical conclusion from that is that one shouldn't set the D300 above ISO X in 12-bit mode, and not above ISO X/4 in 14-bit mode. That seems patently silly to me. An advantage to increasing the ISO will still be conferred so long as the read noise in electrons is advantageously lowered in doing so, and that is independent of how many ADU that read noise translates to.
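To put numbers on this, using the 1D3 figures discussed below: a gain of 2.46 electrons per 14-bit ADU at ISO 100 puts 14-bit unity gain at about ISO 246, while the same hardware counted in 12-bit ADU has a gain of 4 x 2.46 = 9.84 electrons/ADU, putting 12-bit unity gain at about ISO 984 -- the same camera, a factor of 4 apart, with nothing physical having changed.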

But perhaps I am misunderstanding the utility of the concept of unity gain. If so, I'd like to know what it is good for.
 
I thought this is what astrophotographers do when they divide out by a "flat field" -- average a whole bunch of uniform exposures to get rid of the shot noise, and all that is left is the PRNU. This gives you the flat field; you can then divide any image by the flat field image to remove the PRNU.

Astrophotographers do a lot more. They shoot a larger number (16+) of:
- Dark frames (non-exposures at the same exposure time as the actual images) to calibrate the darkest possible exposure signal, at the per-sensel CFA level; effectively read noise + thermal noise (which builds up during the long exposure times, mostly due to raised temperature effects).
- Offset or bias frames (non-exposures at the shortest possible exposure time) to calibrate the minimum possible signal; effectively read noise.
- Light frames (actual exposures). There are several strategies for combining these, depending on the amount of light (or the lack thereof) per image, including adding or several kinds of averaging (with different trade-offs).
- Flat-field frames (uniform-area exposures) to calibrate things like vignetting, light pollution, and other non-uniformities across the image.

An excellent example of what can be achieved with a relatively affordable 40D (with modified IR filter) and 86 3-minute exposures (258 minutes total @ ISO 800; hint, note the ISO choice for unity gain optimization) can be found at: http://www.machunter.org/hap_m81.html.

[Image: hap_M81.jpg]


The phenomenal subtleties of the image quality will be recognized/valued by astrophotographers.

Bart
 
Bart, I don't follow you here. The "gain" here is not a gain per se (a gain is the ratio of two like quantities, e.g. a small input voltage is amplified to a large output voltage; the ratio of these two voltages is a pure number, which is the gain); rather, the "gain" is a conversion factor -- how many electrons at capture translate into an ADU after quantization.

I fail to see the real difference; maybe it's due to language, maybe it's because I look at the practical implication. I think I now understand the subtlety of "like quantities", i.e. voltage to voltage, but to me there is not that much practical difference with e.g. 'x' photons translating to 'y' data numbers due to a selectable voltage amplification at the sensel AND a conversion factor at the ADC quantization stage. I could 'see' some difference in read noise being scaled at the ADC stage, but not as much by the on-sensel amplification stage.

Consider the new Nikons, which have the option of 12-bit or 14-bit encoding. According to the definition of unity gain as I understand it, a camera having a unity gain of X in 12-bit encoding would have a unity gain of X/4 in 14-bit encoding.

When we talk about truly amplified sensel voltage to ADC scaling and quantification levels, rather than just stripping the LSBs, I'd agree with that. It would generate two distinct unity gain levels.

I have often read the statement that there is no advantage to setting the ISO above unity gain; since there is no way to "divide an electron" into smaller bits, having "more than one ADU per electron" is pointless. But then the logical conclusion from that is that one shouldn't set the D300 above ISO X in 12-bit mode, and not above ISO X/4 in 14-bit mode.

That indeed is the logical conclusion, and it's how I interpret it (again assuming non-stripped LSBs).

That seems patently silly to me. An advantage to increasing the ISO will still be conferred so long as the read noise in electrons is advantageously lowered in doing so, and that is independent of how many ADU that read noise translates to.

I don't see a lowering of the read noise as a result of increasing the ISO, at least not in the Canon samples that I've investigated. It perhaps becomes important to separate amplification (of like units) from mathematical scaling to achieve a different quantized ADU. The difference between 'regular' amplified ISOs (100, 200, ..., 1600) and the intermediate scaled ones seems clear to me though.

But perhaps I am misunderstanding the utility of the concept of unity gain. If so, I'd like to know what it is good for.

The practical implication of it, for me (and confirmed by practical tests), is that there is no benefit to in-camera boosted ISOs beyond the unity gain output level, at least for Raw files, and obviously beyond the shutter speed/DOF implications. Mathematical 'exposure' boosts in Raw conversion have similar or better noise performance in the resulting images compared to in-camera high ISO boosts (which also scale read noise).

I tend to look at things in a practical (black-box) sense when we don't know the intricacies of what exactly goes on under the hood. Guesses/speculation about the black-box conversions may well be wrong anyway.
My black-box view is the following: a sensel's signal level can be represented by any output signal level between zero and saturation, within the precision allowed by the quantization bit depth. That means that the ADC needs to scale the output to a linear level between zero and saturation photon input levels (after ISO-determined amplification of the electron voltage).

Bart
 

Emil Martinec

New member
I don't see a lowering of the read noise as a result of increasing the ISO, at least not in the Canon samples that I've investigated. It perhaps becomes important to separate amplification (of like units) from mathematical scaling to achieve a different quantized ADU. The difference between 'regular' amplified ISOs (100, 200, ..., 1600) and the intermediate scaled ones seems clear to me though.

The practical implication of it, for me (and confirmed by practical tests), is that there is no benefit to in-camera boosted ISOs beyond the unity gain output level, at least for Raw files, and obviously beyond the shutter speed/DOF implications. Mathematical 'exposure' boosts in Raw conversion have similar or better noise performance in the resulting images compared to in-camera high ISO boosts (which also scale read noise).

I tend to look at things in a practical (black-box) sense when we don't know the intricacies of what exactly goes on under the hood. Guesses/speculation about the black-box conversions may well be wrong anyway.
My black-box view is the following: a sensel's signal level can be represented by any output signal level between zero and saturation, within the precision allowed by the quantization bit depth. That means that the ADC needs to scale the output to a linear level between zero and saturation photon input levels (after ISO-determined amplification of the electron voltage).

Bart

Let me take some data from Roger Clark:
http://www.clarkvision.com/imagedetail/digital.sensor.performance.summary/
For the 14-bit 1D3:

ISO     Read noise (e-)   Gain (e-/ADU)
400     3.82              0.62
800     2.61              0.31
1600    2.16              0.15

Thus, for the 1D3, according to these figures, unity gain is at about ISO 250; yet pushing ISO 400 by two stops gives a read noise of 4 * 3.82/0.62 = 24.6 ADU, while exposing properly at ISO 1600 gives a read noise of 2.16/0.15 = 14.4 ADU.

The read noise figures point to the fact that exposing properly gives lower noise than pushing underexposure, even beyond unity gain ISO. So what do you mean when you say that "there is no benefit to in-camera boosted ISOs beyond the unity gain output level"?
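The same arithmetic as a snippet, for anyone who wants to plug in other cameras' numbers (read noise in electrons and gain in electrons/ADU, as in the table above):

    def output_read_noise(read_e, gain_e_per_adu, push_stops=0):
        """Read noise in final ADU after a post-hoc push of push_stops stops."""
        return (2 ** push_stops) * read_e / gain_e_per_adu

    print(output_read_noise(3.82, 0.62, push_stops=2))  # ISO 400 pushed 2 stops: ~24.6
    print(output_read_noise(2.16, 0.15))                # native ISO 1600: ~14.4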

Data I have analyzed on the D300 points in the same direction -- there is minimal difference between 12-bit and 14-bit read noises in electrons, and so the choice of whether to push a lower ISO or use a higher ISO at proper exposure certainly does not depend on whether you are working in 12-bit or 14-bit; the unity gain hypothesis, however, would argue that, starting at a 4x lower ISO, there is no need to raise ISO because you will do as well by underexposing and pushing in post.
 
Let me take some data from Roger Clark:
http://www.clarkvision.com/imagedetail/digital.sensor.performance.summary/
For the 14-bit 1D3:

ISO     Read noise (e-)   Gain (e-/ADU)
400     3.82              0.62
800     2.61              0.31
1600    2.16              0.15

Thus, for the 1D3, according to these figures, unity gain is at about ISO 250; yet pushing ISO 400 by two stops gives a read noise of 4 * 3.82/0.62 = 24.6 ADU, while exposing properly at ISO 1600 gives a read noise of 2.16/0.15 = 14.4 ADU.

The read noise figures point to the fact that exposing properly gives lower noise than pushing underexposure, even beyond unity gain ISO. So what do you mean when you say that "there is no benefit to in-camera boosted ISOs beyond the unity gain output level"?

No benefit is perhaps a bit strong, I admit, but there is more than just read noise we need to consider.

Allow me to quote a few lines from Roger Clark's web page that you referenced above. I'm not trying to cherry-pick, but he explains it better than I apparently do:
"The Unity Gain ISO is the ISO of the camera where the A/D converter digitizes 1 electron to 1 data number (DN) in the digital image. ... Since 1 electron (1 converted photon) is the smallest quantum that makes sense to digitize, there is no point in increasing ISO above the Unity Gain ISO."

However, he then continues a bit further along with the following refinement:
"Unity Gain ISO describes the high signal part of an image (the highlights) at high ISO, and read noise the performance corresponding to the low signal end of the photograph. But if a camera was delivering more photons to a pixel, then read noise alone does not give a complete story of the performance in the shadows".

Which, by the way, shows he is not fixated on unity gain, but takes all the relevant factors into account.

He then introduces an interesting metric called "the Low-Light Sensitivity Factor" which "describes the high iso shadow performance", thus combining high and low signal content in a single image.

When we are faced with a photon starved situation, the scenario that John Sheehy seems to take as his reference, then read noise differences do become the more important factor. I tend to strive for well exposed images, at any ISO, so more in line with the Low-Light Sensitivity Factor (LLSF, how's that for an acronym?) qualification.

Finally, we also need to consider human vision which integrates detail and noise within a certain viewing angle and over time, which makes perceived image quality a tad more complex than numbers alone can describe. Which is by the way not to demean how useful these numbers are for understanding the fundamental principles at work.

Bart

P.S. This reference also describes the trade-off between noise amplification and dynamic range optimization.
 

Emil Martinec

New member
No benefit is perhaps a bit strong, I admit, but there is more than just read noise we need to consider.

Allow me to quote a few lines from Roger Clark's web page that you referenced above. I'm not trying to cherry-pick, but he explains it better than I apparently do:
"The Unity Gain ISO is the ISO of the camera where the A/D converter digitizes 1 electron to 1 data number (DN) in the digital image. ... Since 1 electron (1 converted photon) is the smallest quantum that makes sense to digitize, there is no point in increasing ISO above the Unity Gain ISO."

However, he then continues a bit further along with the following refinement:
"Unity Gain ISO describes the high signal part of an image (the highlights) at high ISO, and read noise the performance corresponding to the low signal end of the photograph. But if a camera was delivering more photons to a pixel, then read noise alone does not give a complete story of the performance in the shadows".

Which, by the way, shows he is not fixated on unity gain, but takes all the relevant factors into account.

I notice that for 14-bit cameras such as the 1D3, what he plots as unity gain in Fig. 6 of
http://www.clarkvision.com/imagedetail/digital.sensor.performance.summary/
is rescaled to a 12-bit normalization; so it seems he is not abiding by the literal definition of unity gain, since the unity gain ISO of the 1D3 is 250, not the 1000 that he plots there.

The concept of unity gain is really a derivative of the gain itself, factoring in the ISO at which the "gain" is measured. If the 1D3 has gain 2.46 electrons/14-bit ADU at ISO 100, then unity gain is ISO 246 in 14-bit units. The unity gain concept is thus simply a restatement of what the gain is, by normalizing it relative to ISO, and nothing more. Rather than describing "the high signal part of an image (the highlights) at high ISO" it characterizes the high signal at any ISO, since it will tell you for a given raw level of signal what the photon shot noise is, and shot noise dominates the total noise at high signal.

Apart from the gain, which relates shot noise to the measured raw signal in ADU, the one other figure of merit (assuming we are not discussing long exposure times, where thermal noise enters) is the read noise.
The conclusion that "there is no point in increasing ISO above the Unity Gain ISO" seems to me incorrect, as the 1D3 example I worked out in my previous post shows. For a given camera with a given gain, that statement would lead one to underexpose at lower ISO rather than expose properly at higher ISO; yet for a given Tv and Av, the higher ISO is better (provided one doesn't blow highlights), because the read noise continues to drop in electrons above the unity gain ISO. The photon shot noise is the same when one varies the ISO at fixed Tv and Av, so the read noise is the only consideration in obtaining the highest SNR. It's really an extension of the "expose to the right" mantra that, under conditions of fixed Av and Tv, raising the ISO as much as possible without blowing highlights is the thing to do, and it certainly runs counter to the unity gain philosophy.

He then introduces an interesting metric called "the Low-Light Sensitivity Factor" which "describes the high iso shadow performance", thus combining high and low signal content in a single image.

When we are faced with a photon starved situation, the scenario that John Sheehy seems to take as his reference, then read noise differences do become the more important factor. I tend to strive for well exposed images, at any ISO, so more in line with the Low-Light Sensitivity Factor (LLSF, how's that for an acronym?) qualification.

Bart

The definition of LLSF seems a bit imprecise: it is stated as "Unity Gain ISO divided by read noise in electrons". Read noise measured at what ISO?

Since as I pointed out above, unity gain ISO is simply the gain itself times the ISO at which the gain is measured, this LLSF is (for whatever ISO the read noise is being taken from) simply the inverse of the read noise in ADU (times the ISO). I'm not sure why this quantity is a preferred metric of low-light sensitivity. It's clearly larger for sensors with lower read noise in ADU which is a good thing. Why does this measure combine high and low signal performance?

I would have said that the better low-light camera at high signal is the one which collects the most photons at a given ISO, so that the most important thing is the gain itself. The read noise is then a secondary consideration, which tells us how soon the noise becomes photon-statistics limited. The SNR curve for a sensor has one slope when read-noise dominated, then a knee where photon noise takes over, and another slope above the knee governed by the gain. You can't characterize the performance of the sensor in any kind of light by one number; given the nature of the SNR curve you need two -- the point where the knee is and the slope above the knee, for instance. That is, you need both the read noise and the gain, otherwise there are degeneracies that the metric cannot distinguish.

For instance, consider two cameras: camera A, with gain g=2 and read noise R=2 electrons; and camera B, with g=10 and R=10. Both have LLSF = 1. But here are their SNRs vs EV (the vertical axis is the base-two log of the SNR, the horizontal axis is the signal level in ADU from 0 to 4095, on a log scale in stops); cam A is the red curve, cam B is the blue curve:

[Image: LLSF.jpg -- SNR curves for cameras A (red) and B (blue)]


I think I'd want camera B, suggesting that the figure of merit is the gain and not the LLSF.

To make the point differently, consider camera C with g=5 and R=2, so LLSF=2.5; cam C is the green curve, cam B is the blue curve:


[Image: LLSF2.jpg -- SNR curves for cameras B (blue) and C (green)]


Cam C is better in shadows but worse in midtones and highlights. I'm not sure what I'm supposed to conclude from its higher LLSF.
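Curves like these are easy to regenerate; here is a sketch of the model behind the plots (shot plus read noise only, with the signal converted to electrons through the gain -- parameter names are mine):

    import numpy as np

    def log2_snr(gain, read_e, s_adu):
        """Base-two log of SNR at raw level s_adu (ADU) for a sensor
        with gain (electrons/ADU) and read noise (electrons)."""
        s_e = s_adu * gain                        # signal in electrons
        return np.log2(s_e / np.sqrt(s_e + read_e**2))

    s = np.logspace(0, np.log2(4095), 200, base=2)  # 1..4095 ADU, log-spaced
    for name, g, r in [("A", 2, 2), ("B", 10, 10), ("C", 5, 2)]:
        curve = log2_snr(g, r, s)
        print(name, curve[0].round(2), curve[-1].round(2))  # shadow / highlight ends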
 

John Sheehy

New member
Sorry, the graphs in my initial post are unlabelled, and I should have taken the time to put the labels into Mathematica before grabbing the output. But I did state in the text above the plots what was being plotted.

PRNU, as I understand it, is a variation from sensel to sensel of their response (output voltage) for a given input light exposure. I thought that this was due to variability in the gain of the on-chip amplifiers, but I suppose it could have other sources (differing physical characteristics of the sensel, etc). Regardless, the bottom line is that a different raw level will be output for a given number of photons input, from sensel to sensel.

One begins to wonder why Canon goes out of its way to perfect black calibration, but ignores scale, especially in an $8K camera.

What I did was to compare the standard deviation of the difference of two images to the standard deviation of a single image, in a uniform OOF area. It was long enough ago for me to start losing the details, but I don't think it made any difference, in measuring the noise in the single image, whether I looked at a single green subarray or took the difference of the two green subarrays. I usually do the latter in order to remove possible bias from slow gradients across the image, but in small enough patches that's not an issue. The std dev of the difference of the g1 subarrays of two successive images will contain only the photon shot noise; the std dev of the difference of the g2 and g1 subarrays of a single image will contain both shot noise and PRNU. I subtracted the shot noise of the first measurement from the total noise of the second measurement (in quadrature, of course) in order to get at the PRNU, and that is the second plot of the OP. It is nicely linear with respect to exposure, so I think that says it's due to a variability in the gain of the sensels. Could this be due to a difference in the gain of the two subarrays, while within a subarray the gains of individual sensels are identical? I don't think such a scenario would generate the nice linear data of the second plot, but I'll have to think about that more.

OK, now I think I know what you are talking about. It seems to be the same thing I noticed in the XTi and others' 1Dsmk3s. You might call this a fixed pattern scalar noise.

I would guess then that flipping one of two copies of g1 horizontally or vertically, or rotating it 180 degrees, would have a similar effect, when subtracted, as subtracting g2 from g1.

Yes, I am suggesting that, because the noise is proportional to EV, it is the result of variation in the gain of the sensels. And you can calibrate it out. I thought this is what astrophotographers do when they divide out by a "flat field" -- average a whole bunch of uniform exposures to get rid of the shot noise, and all that is left is the PRNU. This gives you the flat field; you can then divide any image by the flat field image to remove the PRNU.

It would be interesting to see if this ever changes over time. If not, one master could be used for the sensor (if not for the lens - half the stack could be inverted to ignore lens falloff) for the life of the camera.

I mentioned the concept of "scalar noise" in the past, but no one was interested. I told Roger Clark about it in my XTi, and he said it was not possible and that I was making an error if I saw the SNR limited to about 100:1 at ISO 100 near saturation.

I have probably taken exposure pairs of OOF flat areas with all of my cameras a few times each, but I can never find the files when I need them. I'll have to do it all again and see where they stand.
 

Emil Martinec

New member
I would guess then that flipping one of two copies of g1 horizontally or vertically, or rotating it 180 degrees, would have a similar effect, when subtracted, as subtracting g2 from g1.

The purpose of subtracting g2 from g1 was to eliminate any bias from exposure gradients across the image, so flipping or rotating would not help with that. Perhaps better to take two copies of the same g1 (say), translate one copy by a pixel or two in some direction, then subtract them. That will remove gradients but leave the total noise intact. BTW, 1D3 blackframe images I have seen have a mod-4 regularity in their pattern noise, suggesting that a two-pixel shift in the g1 subarray is better in that case.

Whether PRNU is stable in time might depend on its source. If it is gain variability in the electronics, the calibration could drift over time; if it is variation in physical characteristics (size/shape of sensel, transmissivity of microlenses or color filters) that would presumably be fixed over time. There is of course the added fun of accumulated gunk on the surface, which changes over time and with each sensor cleaning, and of course filters the light reaching the sensel too.
 

John Sheehy

New member
But perhaps I am misunderstanding the utility of the concept of unity gain. If so, I'd like to know what it is good for.

Part of the reasoning given at clarkvision is that there is no point in enumerating the number of photons beyond the actual number of photons. This might be almost true if there were no read/ADC noise, but even then, care would be needed to make sure that each discrete number of photons fell exactly into the middle of a quantization rounding range, and the gain would have to be extremely precise to make this happen over the entire range of ADUs. You would really need a ratio of about 1.1 ADU to each photon level to avoid merging two photon levels into one ADU, even without any read/ADC noise at all, and even then you would wind up with the occasional histogram gap unless the data were examined and re-written (this kind of quantization is not an issue with real-world noise levels, but without noise, the separation of all levels would be valuable in the deepest shadows). Add read/ADC noise at a small level, and you need to increase the gain to keep the counts distinct. Raise the read/ADC noise a little more, and individual photon counting becomes impossible, as the noise confuses levels -- at that point the only thing that matters is SNR.

Some of the real-world examples given by Roger to support his claim of the significance of unity gain as a limiting factor seemed to have the read noise in electrons leveling off in the vicinity of "unity gain", but only for Canons of similar technology. The examples from Nikon that he uses are not convincing at all, nor are the P&S cameras, all of which have only one real ISO: they vary ISO only via an auxiliary amplifier after the initial read, and vary in read noise in electrons only because of the difference in ADC noise contribution at different gains (typically 15x the total read noise at ISO 1600 compared to 100).

That's too small a sample to have any real significance, IMO, and regardless, the standard deviation of read noise is not the only factor in its visual significance. Any component of read noise which is one-dimensional (what people often call "banding") is far more visually potent than fully two-dimensional pixel noise, because it does not lose its visibility when taking the image in as a whole, even at low image magnifications. It also does not reduce statistically at the same rate as 2-D noise when downsampled or binned. Here are the results of some binnings I did of Canon DSLR blackframes:

[Image: 61051205.jpg -- table of binning results]


The left column is the pixel ratio, the second is the standard deviation measured at the new bit depth, the third column is the second normalized by the bin ratio, and the fourth is the predicted noise at each binning based on dividing by the square root of the bin ratio. The actual results in the bottom row are almost 3x the predicted results, and I'm sorry I didn't include the RAW images that resulted, but the 4096:1 binning had no visible 2-D noise whatsoever; it was a pure horizontal and vertical line-fest. If it weren't for banding noise, IMO, Canon DSLRs could be shooting reasonable web-size or newspaper images at ISOs in the hundreds of thousands, or even millions.
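A toy simulation makes the statistics plain (illustrative, not the blackframe data above): binning n x n pixels reduces 2-D white noise by the full factor n, but purely horizontal line noise only by sqrt(n), because averaging along a row does nothing to a value that is constant across the row.

    import numpy as np

    def binned_std(img, n):
        """Std dev after averaging an image in n x n blocks."""
        h, w = (s // n * n for s in img.shape)
        return img[:h, :w].reshape(h // n, n, w // n, n).mean(axis=(1, 3)).std()

    rng = np.random.default_rng(1)
    size = 1024
    white = rng.normal(0, 1, (size, size))                    # 2-D pixel noise
    lines = np.tile(rng.normal(0, 1, (size, 1)), (1, size))   # row banding

    for n in (4, 16, 64):
        print(n * n, binned_std(white, n), binned_std(lines, n))
    # at 4096:1 binning (n=64): white noise drops to ~1/64, banding only to ~1/8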

Line noise's ability to be confused with image detail is also greater than 2-D noise's. Line noises, when measured in electrons or proportional units, do not level off near so-called unity gain. Here is a graph of the read noises of my 20D, at all ISOs. I have included the total read noise, which is divided by 10 in this graph to make it fit in snugly with the other noises, which are horizontal banding and vertical banding. All ISOs above 100 are normalized to it. The horizontal banding component's curve shows no sign of leveling off (as measured in units of electrons) at the "unity gain" of this camera (about ISO 1300):

[Image: 65737967.jpg -- 20D read noise components vs. ISO]


Roger discounted this noise when I discussed it, by giving it a name (incorrectly, "fixed pattern noise" -- but it is not fixed; it is totally random, just one-dimensional).

I have simply had no satisfaction with Roger's explanations of some of his conclusions, like his declaration that a sensor that detected only one photon per sensel but had billions of them would have "a dynamic range of 1" or no dynamic range at all. This is one of the reasons why I said that he is focused on the pixel, earlier.
 

Emil Martinec

New member
The purpose of subtracting g2 from g1 was to eliminate any bias from exposure gradients across the image, so flipping or rotating would not help with that. Perhaps better to take two copies of the same g1 (say), translate one copy by a pixel or two in some direction, then subtract them. That will remove gradients but leave the total noise intact. BTW, 1D3 blackframe images I have seen have a mod-4 regularity in their pattern noise, suggesting that a two-pixel shift in the g1 subarray is better in that case.

OK, I did the above exercise for one example, as a quick check on my methodology. The std dev of the same g2 subarray crop from two successive images was 18.3; this is the photon shot noise with PRNU subtracted out. The difference of the g2-g1 subarrays of a single image (same crop) had a std dev of 24.1. The difference of the g2 crop and the same crop window shifted by two pixels vertically (so one is subtracting the same subarray, two copies with pixels shifted; the main pattern noise is vertical banding, so the shift was chosen to avoid this effect as much as possible) had a std dev of 23.7. The mean of the crop was about raw level 1980, and the raw was taken at ISO 100.
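Working those numbers through the quadrature subtraction: Sqrt[24.1^2 - 18.3^2] ≈ 15.7 ADU of PRNU at a mean signal of about 1980 ADU, i.e. a gain variation of roughly 0.8% (the shifted-crop variant gives Sqrt[23.7^2 - 18.3^2] ≈ 15.1 ADU, or about 0.76%) -- squarely in the 0.5-1 percent range quoted earlier in the thread.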

My conclusion is that little of the effect observed can be attributed to using g2-g1, vs using two crops of the same subarray.
 