
How many photons?

Jerome Marot

Well-known member
Maybe somebody can answer this simple question.

Suppose we have a camera. We set its meter to a given value, say ISO 100, and point it at a uniform and evenly illuminated grey surface, for example a white wall under cloudy skies.

How many photons reach the surface of the sensor when we take a picture (per unit of surface, to make the result independent of sensor size)?
 

Doug Kerr

Well-known member
Hi, Jerome,

Maybe somebody can answer this simple question.

Suppose we have a camera. We set its meter to a given value, say ISO 100, and point it at a uniform and evenly illuminated grey surface, for example a white wall under cloudy skies.

How many photons reach the surface of the sensor when we take a picture (per unit of surface, to make the result independent of sensor size)?

I'll try and deal with that this afternoon. Just now I'm getting ready to go to a movie with Carla!

But in any case the answer depends on the average radiometric exposure on the sensor. The average radiometric exposure is determined by:

• The average luminance of the scene.

• What we will assume is the "representative" wavelength of the light from the scene.

• The effective f-number for the exposure (that is, taking into account at least the lens transmission).

• The exposure time.

But your predicate of a metered exposure (for a stated ISO sensitivity) will allow a lot of those things to be disposed of. (At least approximately, given the vagaries of exposure metering!)

So I can start with the "standard" photometric exposure (for an exposure index of ISO 100) that underlies classical exposure metering theory.

But I have to dig up a lot of data to proceed.

Later.

Doug
 
Maybe somebody can answer this simple question.

Suppose we have a camera. We set its meter to a given value, say ISO 100, and point it at a uniform and evenly illuminated grey surface, for example a white wall under cloudy skies.

How many photons reach the surface of the sensor when we take a picture (per unit of surface, to make the result independent of sensor size)?

Hi Jerome,

Simple questions are not always simple to answer, but an approximation is possible when additional information is known.

Complementing Doug's initial approach, I'd use a different angle of attack, based on statistics. We know that photon shot noise has a Poisson distribution, due to the variable arrival rate, during the exposure time, of the photons that actually reach the sensor (after lens losses, the sensor filter stack, the Bayer CFA, the sensor cover glass, the spectral sensitivity of the doped silicon, the unobstructed photosite aperture, and the quantum efficiency in converting photons to electrons).

What's needed is:
1. The actual standard deviation of the signal as recorded by the sensor. Since there will be other noise sources added to the Photon shot noise, e.g. read-noise, offset noise, and pattern noise, it would be a more reliable estimate if one subtracts two images and divides the standard deviation by Sqrt(2). The subtraction will take out any non-random, i.e. systematic, noise, and the division is needed to compensate for the subtraction of random noise (which increases the standard deviation of random noise).
2. Next one needs to determine the sensor gain at the used ISO setting. My Canon 1Ds Mark III, for instance, has a gain of approx. 4 at ISO 100, which means that for each 4 photons that get converted to electrons, the digital number in the Raw file data increases by 1. So we need to multiply the digital numbers (e.g. obtained by RawDigger) by 4, in my case, to estimate the number of photons that caused that level. A useful first approximation of the gain is the maximum saturation capacity in electrons (from Sensorgen.com) divided by the actual digital number at saturation in the Raw file (from RawDigger), although not all sensors record a linear tonecurve in the Raw data.

Since Photons have a Poisson distribution, the standard deviation equals the square root of the number of photons. Inversely, the standard deviation therefore is an indication of the number of photons that created the signal (stdev^2 = signal in photons that were converted to electrons), statistically speaking that is.

Cheers,
Bart
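Bart's statistical method can be sketched in a few lines of Python. This is a rough illustration with synthetic data: the frames, the Poisson-like signal level of 10 000 electrons, and the gain of 4 e-/DN are assumptions standing in for real Raw measurements.

```python
import math
import random
import statistics

def photons_from_two_frames(frame_a, frame_b, gain_e_per_dn):
    """Estimate the mean photo-electron count from two identical exposures.

    Subtracting the frames cancels fixed-pattern (systematic) noise; the
    division by sqrt(2) undoes the variance doubling caused by subtracting
    two independent random-noise realisations. For Poisson statistics,
    signal = sigma^2 (in electrons).
    """
    diff = [a - b for a, b in zip(frame_a, frame_b)]
    shot_sigma_dn = statistics.pstdev(diff) / math.sqrt(2)  # in digital numbers
    shot_sigma_e = shot_sigma_dn * gain_e_per_dn            # in electrons
    return shot_sigma_e ** 2

# Synthetic uniform patch: ~10 000 photo-electrons per sensel, recorded
# through a gain of 4 e-/DN (the 1Ds Mark III figure quoted above).
rng = random.Random(42)
def fake_frame(mean_e, n, gain):
    # Gaussian approximation of Poisson noise, fine for large means.
    return [rng.gauss(mean_e, math.sqrt(mean_e)) / gain for _ in range(n)]

gain = 4.0
a = fake_frame(10_000, 100_000, gain)
b = fake_frame(10_000, 100_000, gain)
print(round(photons_from_two_frames(a, b, gain)))  # close to 10 000
```

With real data, `frame_a` and `frame_b` would be the same uniform patch cropped from two consecutive Raw exposures.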
 

Doug Kerr

Well-known member
Hi, Jerome,

A quick and dirty workup (and this has not yet even been audited against possible error - it might be way off) suggests that for a "standard" metered exposure of a uniform-luminance object at an exposure index of ISO 100, and assuming a wavelength of 555 nm (about the middle of the visual band), the photon density would be about 4.1 E+08 photons per square mm.

I have a number of other irons in the fire at the moment, but as soon as I get a little out from under, I'll validate this and describe the rationale.

Best regards,

Doug
 

Doug Kerr

Well-known member
Hi, Jerome,

Well, I find that I have a little relief from other obligations, so here we go.

According to the ISO standard for automatic exposure control systems, for a metered exposure, the "target" photometric exposure, H, on the focal plane (in the course of the exposure) is given by:

H = 10/S (lux-seconds)

where S is the exposure index as an ISO speed.

So for ISO 100, the target value of H is 0.1 lx-s.

1 lx-s is 1 lumen/m^2, so the target photometric exposure (in the course of the exposure) is 0.1 lm/m^2.

Now at a wavelength of 555 nm (about the center of the visible band):

683 lm = 1 watt.

So 683 lm-sec = 1 Joule

So our target radiometric exposure would be approximately:

1.5 E-04 J/m^2 (0.1/683)

Now, the energy of a photon depends linearly on its frequency, the constant of proportionality (Planck's constant) being approximately 6.62 E-34 J/Hz.

The relationship between wavelength and frequency, for light is:

fL = c

where f is the frequency in Hz, L is the wavelength in meters, and c is the speed of light, 300 E+06 m/s.

So at a wavelength of 555 nm (555 E-09 m), the frequency is:

f = 5.4 E+14 Hz (300 E+06/555 E-09).

Thus the energy of a photon at this frequency is:

E = 3.57 E-19 J (5.4 E+14 Hz * 6.62 E-34 J/Hz)

So the photon density will be:

4.2 E+14 photons/m^2 (1.5 E-04 J/m^2 / 3.57 E-19 J/photon)

As 1 m^2 = 1.00 E+06 mm^2, that would be:

4.2 E+08 photons/mm^2 (4.2 E+14/ 1.00 E+06)

I think.

Best regards,

Doug
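Doug's arithmetic above can be replayed in a short script. A minimal sketch, assuming (as he does) monochromatic 555 nm light and the ISO metering target H = 10/S lux-seconds:

```python
# Photons per mm^2 for a metered ISO 100 exposure, per the derivation above.
S = 100                        # exposure index (ISO speed)
H_lx_s = 10 / S                # target photometric exposure: 0.1 lx-s
H_J_m2 = H_lx_s / 683          # radiometric exposure (683 lm/W at 555 nm)

h = 6.626e-34                  # Planck constant, J/Hz (i.e. J*s)
c = 2.998e8                    # speed of light, m/s
wavelength = 555e-9            # m
E_photon = h * c / wavelength  # energy per photon, ~3.58e-19 J

photons_per_mm2 = H_J_m2 / E_photon / 1e6  # 1 m^2 = 1e6 mm^2
print(f"{photons_per_mm2:.2e}")  # about 4.1e+08
```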
 

Asher Kelman

OPF Owner/Editor-in-Chief
Bart, Jerome, Doug and everyone else, may I extend the O.P.'s question? Where are we on the road to absolute sensitivity to photons? It seems to me that the depth of the sensel site on the wafer, and the ability to measure the resultant voltage while accounting for noise, should allow for amazing expansion of camera sensitivity.

What is the minimal standard deviation needed to reliably count the collection of incoming photons so that there's sufficient contrast to build a line? I don't know. The formula, I believe, is (√n)/n × 100 as a percentage. Doubtless this is well known. I would imagine a tighter certainty would be needed depending on how many shades of grey one needs to differentiate.

Asher
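Asher's formula is the relative Poisson (shot-noise) uncertainty. A minimal sketch:

```python
import math

def counting_uncertainty_pct(n_photons):
    """Relative shot-noise uncertainty: sqrt(n)/n * 100, in percent."""
    return math.sqrt(n_photons) / n_photons * 100

for n in (10_000, 1_000, 100):
    print(n, round(counting_uncertainty_pct(n), 1))
# 10 000 photons -> 1.0 %, 1 000 -> 3.2 %, 100 -> 10.0 %
```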
 

nicolas claris

OPF Co-founder/Administrator
Hmmmmm
Headache for me! LoL!
What about if there are dust spots on the sensor or rain drops on the front lens?
Magnifying or diminishing?
Would the photons fly differently if the camera is held by a drone or a helicopter?

Sorry for the joke, guys, but I couldn't resist!
I love you all for pondering such problems!
 

Jerome Marot

Well-known member
the photon density would be about 4.1 E+08 photons per square mm.

I think that your estimate is a bit low. For a 6µm sensel, it would give about 15000 photons per sensel.

From a quick look at your post, it seems that you were computing the number of photons at the target, not at the sensor?
 

Jerome Marot

Well-known member
Bart, Jerome and Doug and everyone else, may I extend the O.P. Question? So where are we up to in the road to absolute sensitivity to photons.

I think that today's cameras are already counting individual photons.


It seems to me that the depth of the sensel site on the wafer

The depth of the sensel is not that relevant, as photons do not penetrate silicon very deeply. Actually, the penetration depth is linked to the wavelength, a property used by Foveon sensors.
 

Jerome Marot

Well-known member
Hmmmmm
Headache for me! LoL!
What about if there are dust spots on the sensor or rain drops on the front lens?
Magnifying or diminishing?
Would the photons fly differently if the camera is held by a drone or a helicopter?

Sorry for the joke, guys, but I couldn't resist!
I love you all for pondering such problems!


Maybe there is a practical, photographic reason for my question. Just saying.
 

Doug Kerr

Well-known member
Hi, Jerome,

I think that your estimate is a bit low. For a 6µm sensel, it would give a photon density of about 15000.

That might well be. I would be glad to see other estimates of the value.

From a quick look at your post, it seems that you were computing the number of photons at the target, not at the sensor?

I assume by target you mean the "scene".

I used the word "target" to mean "that which we seek to have happen." I was referring to the average photometric exposure, H, that we seek to attain on the sensor for an exposure done under control of a reflected-light exposure control system "calibrated" in accordance with the applicable ISO standard.

That is prescribed to be 10/S lux-seconds, where S is the exposure index (that is, what we tell the exposure metering system is the sensitivity of the film or sensor) as an ISO speed.

Best regards,

Doug
 

Doug Kerr

Well-known member
Hi, Michael,

Jerome,

maybe this could be of interest for you.

Thank you for that link. That paper looks to be quite thorough. When I get a chance, I'll follow it and see how his process compares to mine. He uses a wholly different approach than I did, I think a more roundabout one.

I only went so far in my comparison of his work to mine as to find that his estimate of the energy at a wavelength near the one I assumed comports well with my value. He gives the derivation of that value in a more thorough way than I reported here, which was "predigested".

I guess I need to spend enough time with that paper to find out where his answer comes out, and what it is.

Thanks again.

Best regards,

Doug
 

Asher Kelman

OPF Owner/Editor-in-Chief
I think that today's cameras are already counting individual photons.




The depth of the sensel is not that relevant, as photons do not penetrate silicon very deep. Actually, the depth is linked to the wavelength, a property used by Foveon sensors.

Correct, at present, Jerome, but lower density materials in the sensel might allow more photons to be collected. I am thinking of some sort of photosensitive polymer.

Asher
 

Asher Kelman

OPF Owner/Editor-in-Chief
Hmmmmm
Headache for me! LoL!
What about if there are dust spots on the sensor or rain drops on the front lens?
Magnifying or diminishing?
Would the photons fly differently if the camera is held by a drone or a helicopter?

Sorry for the joke, guys, but I couldn't resist!
I love you all for pondering such problems!

Nicolas,

Actually this is a good joke, but it is also exactly where new sensel designs will allow algorithms to recognize and account for those challenges.

So a robber in the shadows through a drenched window will be clearly imaged. The challenge is getting enough low light measured to define points of low contrast.

Ultimately, each sensel will be an independent camera, collecting more and more photons until the math is right. All the sensels will work as cooperative groups based on the subject, with supervisory pattern recognition libraries to help build with even the lowest light levels.

So your joke is really part of the future path and in the sights of those working to pull clear images from sub-optimal conditions.

The key is dealing with noise and increasing detection limits with new materials.


Asher
 

nicolas claris

OPF Co-founder/Administrator
Asher
it was just a joke while passing by, because I am unable to understand any sophisticated (to me!) technical language.
However, I want you and others to know that I'm more than happy that some people are able to use such language, and are searching for what will later give me the opportunity to express my photographic vision even better.
Without this spirit of research, digital photography (and all technologies!) would not exist and our lives would be very different.
Being an ardent user of progress, I can only be pleased that some are still searching, day after day!

Tomorrow will be brighter! (thanks to photons ;)
 

Jerome Marot

Well-known member
4.2 E+08 photons/mm^2 (4.2 E+14/ 1.00 E+06)

As I said, for a 6µm sensel, this estimate would give about 15000 photons per sensel.

6 µm is about the sensel size of a 24x36mm camera with 24 million pixels. This estimate means that, everything else being perfect, the sensel of this camera would be illuminated by about 15000 photons if we point it at a 20% grey card, set the ISO value to 100 and shoot.

If we point the camera at a real scene with contrast, we would have light and dark parts. The dark parts will receive very few photons and the light parts a lot more. For "standard" photographic scenes, the lightest parts will receive about 5 times more photons than the average (that is the reason for the standard grey card being "20% grey"...), so about 75000. That should be about the lightest parts in a "standard" photographic scene, close to saturation.

Of course, the sensel does not measure all the photons that impact it. First, there is quantum efficiency, the ratio between impacting photons and converted photo-electrons. For modern sensors, this may be as high as 80%. Then there are the colour filters of the Bayer array. These, of course, remove part of the light to select the colour. One may think that since we have 3 colours, each filter selects 33% of the light, but they can be a bit less selective. I'll be optimistic and take a value of 50% for these filters.

Under these optimistic assumptions, a 6µm sensel should yield a signal of 75000 *80% *50% = 30000 photo-electrons for the lighter parts of the picture.

For real cameras with 6µm sensels, however, the saturation values are closer to 60000 photo-electrons. Hence my comment that the calculated value was a bit low. Still, we agree on the order of magnitude.
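Jerome's numbers follow directly from Doug's density figure. A sketch of the arithmetic, where the 5x highlight factor, the 80% quantum efficiency and the 50% Bayer filter transmission are the optimistic assumptions stated above:

```python
photon_density_mm2 = 4.1e8   # Doug's figure for a metered ISO 100 exposure
pitch_mm = 6e-3              # 6 um sensel pitch

photons_avg = photon_density_mm2 * pitch_mm ** 2  # metered average per sensel
photons_highlight = photons_avg * 5               # ~5x above the average

qe, cfa = 0.80, 0.50   # quantum efficiency, Bayer filter transmission
electrons_highlight = photons_highlight * qe * cfa

print(round(photons_avg))         # ~14 760, i.e. "about 15 000"
print(round(electrons_highlight)) # ~29 520, i.e. "about 30 000"
```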
 

Doug Kerr

Well-known member
Hi, Jerome,

As I said, for a 6µm sensel, this estimate would give about 15000 photons per sensel.

6 µm is about the sensel size of a 24x36mm camera with 24 million pixels. This estimate means that, everything else being perfect, the sensel of this camera would be illuminated by about 15000 photons if we point it at a 20% grey card, set the ISO value to 100 and shoot.

Actually, for a metered exposure (we assume that the underlying metering here is "reflected light"), neither the reflectance of the scene nor the average scene luminance affects the average photometric exposure on the focal plane. The metering system works to always bring that to a predetermined level for any given exposure index setting (if the metering system works on average scene luminance).

If we point the camera at a real scene with contrast, we would have light and dark parts. The dark parts will receive very few photons and the light parts a lot more. For "standard" photographic scenes, the light parts will receive about 5 times more photons (that is the reason for the standard grey card being "20% grey"...) . . .

Yes. That is, about five times the average (per unit area, of course).

Of course, the sensel does not measure all the photons that impact it.

Quite. But that of course is not involved in the question that was originally posed.

Best regards,

Doug
 

Jerome Marot

Well-known member
More on the subject of photons.

Let us take a camera with 6 µm sensels (that would be a 12 Mpix APS-C camera, a 24 Mpix 24x36 camera or a 50 Mpix medium format camera). Let us set the meter to ISO 100 and take a picture. From the estimates computed earlier (which diverge, so I take an optimistic guess), it seems that each sensel will receive between 0 photons (for the very, very dark parts of the picture) and about 60 000 photons (for the almost burnt-out highlights). The grey tones and skin tones will receive between about 12 000 and 24 000 photons (the higher figure for a light, "caucasian" skin). This takes into account the photons absorbed by the filters of the Bayer array, but supposes that all the rest is perfect.

2^16 is equal to 65536, so with a 16-bit counter we can count our signal perfectly: 1 photon, 2 photons, 3 photons... all the way up to 60 000 photons. Our source signal, light, is a discrete signal which comes in individual photons, and these can be counted one by one by a 16-bit counter. There aren't that many of them, even at ISO 100. And there are cameras with 16-bit counters on the market.

The situation is tighter if we raise the ISO value. Each time we double the ISO value, the sensels get half the number of photons. Therefore:

ISO     Highlights   Light skin
100       60 000       24 000
200       30 000       12 000
400       15 000        6 000
800        7 500        3 000
1600       3 750        1 500
3200       1 875          750
6400         937          375
12800        469          187
25600        235           94
51200        117           47
102400        58           23
204800        29           12
409600        14            6
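The halving is mechanical, so the table can be generated rather than typed. A sketch, starting from the 60 000 and 24 000 values above (the smallest hand-rounded figures in the table differ from the exact halvings by at most one photon):

```python
# Photon counts halve with each ISO doubling, starting at ISO 100.
table = {}
highlight, skin = 60_000.0, 24_000.0
iso = 100
while iso <= 409_600:
    table[iso] = (highlight, skin)
    print(f"ISO {iso:>6}: {highlight:>9.1f} {skin:>9.1f}")
    iso *= 2
    highlight /= 2
    skin /= 2
```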
 

Doug Kerr

Well-known member
Hi, Jerome,

More on the subject of photons.

Let us take a camera with 6 µm sensels (that would be a 12 mpix APS-c camera, a 24 Mpix 24x36 camera or a 50 mpix medium format camera). Let us set the meter to ISO 100 and take a picture. From the estimates computed earlier (which diverge, so I take an optimistic guess), it seems that each sensel will receive between 0 photons (for the very, very dark parts of the picture) and about 60 000 photons (for the almost burnt-out highlights). The grey tones and skin tones will receive about 12 000 and 24 000 photons (for a light, "caucasian" skin). This takes into account the photons absorbed by the filters of the Bayer array, but supposes that all the rest is perfect.

2^16 is equal to 65536, so with a 16 bits counter we can count our signal perfectly: 1 photon, 2 photons, 3 photons... all the way up to 60 000 photons. Our source signal, light, is a discrete signal which comes in individual photons and these can be counted one by one by a 16 bits counter. There aren't that many of them, even at ISO 100. And there are cameras with 16 bits counter on the market.

The situation is tighter if we raise the ISO value. Each time we double the ISO value, the sensels get half the number of photons. Therefore:

ISO Photons
100 60 000 24 000
200 30 000 12 000
400 15 000 6 000
800 7 500 3 000
1600 3 750 1 500
3200 1 875 750
6400 937 375
12800 469 187
25600 235 94
51200 117 47
102400 58 23
204800 29 12
409600 14 6

Thanks for that very cogent analysis.

Best regards,

Doug
 

Asher Kelman

OPF Owner/Editor-in-Chief
More on the subject of photons.......



The situation is tighter if we raise the ISO value. Each time we double the ISO value, the sensels get half the number of photons. Therefore:

ISO Photons
100 60 000 24 000
200 30 000 12 000
400 15 000 6 000
800 7 500 3 000
1600 3 750 1 500
3200 1 875 750
6400 937 375
12800 469 187
25600 235 94
51200 117 47
102400 58 23
204800 29 12
409600 14 6

So, Jerome, the accuracy of counting these photons, assuming a 16-bit circuit (actually most, I thought, were 12 or 14 bit at best), is going to be high. At 10,000 photons counted, the accuracy is plus or minus 1%.

With just 1,000 photons counted, the accuracy falls to about plus or minus 3%.

So how does this degradation of accuracy get addressed and what is the consequence in terms of contrast, resolution and other measures of quality?

Asher
 

Jerome Marot

Well-known member
So, Jerome, the accuracy of counting these photons, assuming a 16-bit circuit (actually most, I thought, were 12 or 14 bit at best), is going to be high. At 10,000 photons counted, the accuracy is plus or minus 1%.

With just 1,000 photons counted, the accuracy falls to about plus or minus 3%.

So how does this degradation of accuracy get addressed and what is the consequence in terms of contrast, resolution and other measures of quality?

The degradation cannot be addressed. It is a direct consequence of the quantum nature of light. This is actually the reason for my original question: most people do not realise that current photographic cameras work with a relatively small number of photons. We expect light to be continuous. It is not.

Modern cameras are close to detecting individual photons. The quantum efficiency of modern detectors is as high as 80%. Noise figures are down to a few photo-electrons, and there are physical limits to improving that further unless one lowers the temperature to a few kelvin. There is not much left to be gained.

Marketing wants us to believe that future cameras will further improve on things like dynamic range and high ISO noise. They will not: we are already extracting all the dynamic range there is, and the noise is in the signal itself. Even more: if we halve the sensor area, we halve the number of photons collected at ISO 100. This has direct consequences for the tiny sensors used in small cameras and cell phones. The iPhone 6 has a pixel size of 1.22µm; each pixel will get about 24 times fewer photons than those of our 6µm camera. If, keeping sensor size constant, we double the number of sensels, we also halve the number of photons per sensel. This also puts practical limits on the maximum resolution of cameras.
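The "24 times" figure is just the ratio of pixel areas. A one-line check, using the pixel pitches quoted above:

```python
pitch_ref_um = 6.0      # the 6 um sensel used throughout this thread
pitch_phone_um = 1.22   # iPhone 6 pixel pitch
area_ratio = (pitch_ref_um / pitch_phone_um) ** 2
print(round(area_ratio))  # about 24
```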
 

Asher Kelman

OPF Owner/Editor-in-Chief
The degradation cannot be addressed. It is a direct consequence of the quantum nature of light. This is actually the reason for my original question: most people do not realise that current photographic cameras work with a relatively small number of photons. We expect light to be continuous. It is not.

Modern cameras are close to detecting individual photons. The quantum efficiency of modern detectors is as high as 80%. Noise figures are down to a few photo-electrons, and there are physical limits to improving that further unless one lowers the temperature to a few kelvin. There is not much left to be gained.

Marketing wants us to believe that future cameras will further improve on things like dynamic range and high ISO noise. They will not: we are already extracting all the dynamic range there is, and the noise is in the signal itself. Even more: if we halve the sensor area, we halve the number of photons collected at ISO 100. This has direct consequences for the tiny sensors used in small cameras and cell phones. The iPhone 6 has a pixel size of 1.22µm; each pixel will get about 24 times fewer photons than those of our 6µm camera. If, keeping sensor size constant, we double the number of sensels, we also halve the number of photons per sensel. This also puts practical limits on the maximum resolution of cameras.


A good explanation, Jerome. Thanks!

Still, if we move to individually addressable pixels, then we can, if we wish, shut off counting when the count is accurate "enough". This, of course, would be limited by spill-over and the requirements for very fast shutter speeds, but in many circumstances, optimizing photo-electron collection based on counting needs should widen dynamic range.

Several such sensors have been put into production on CMOS wafers over the past 10 years. Canon has access to at least two.

Asher
 

Doug Kerr

Well-known member
By the way, I may have improvidently agreed with a statement by somebody in this thread that the average photometric exposure on the sensor during a "metered" exposure is about 1/5 the "saturation" photometric exposure (the largest photometric exposure at which the sensor output is a meaningful indicator of changes in photometric exposure).

In fact (as usual, assuming that the exposure metering strictly follows the applicable ISO standards), the average photometric exposure on the sensor during a "metered" exposure is about 1/8 the "saturation" photometric exposure (about 0.125).

But why do we so often hear that the "target" average photometric exposure is about 18% (0.18) of the saturation exposure?

Well, the whole story of the premise of the exposure metering standards is that, ideally, the average photometric exposure resulting from a metered exposure is 18% of a value that is about 1/2 stop below saturation. And of course the last part of that is often ignored.

The underlying concept is this: if a scene has an average reflectance of 18% (somehow chosen to be "typical"), and a maximum reflectance of 100%, then on the sensor, for a metered exposure, the highest photometric exposure will be 1/2 stop below saturation. This 1/2 stop "gap" can be thought of as "headroom" in the exposure.

I realize that this subtle difference has essentially no effect on the broad estimates involved in the discussions in this thread. But I just wanted to make sure that I did not inadvertently "nod my head" to concur with a widely held, but inaccurate, view of the details.

Best regards,

Doug
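The 1/8 figure follows from Doug's premise in one line of arithmetic: 18% of a level half a stop (a factor of √2) below saturation.

```python
import math

saturation = 1.0
half_stop_below = saturation / math.sqrt(2)  # 1/2 stop of headroom
metered_average = 0.18 * half_stop_below
print(round(metered_average, 3))  # 0.127, i.e. about 1/8 of saturation
```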
 

Doug Kerr

Well-known member
A matter even more distantly removed from the theme of this thread, but perhaps a justifiable follow-on from my recent discussion of average photometric exposure, is the use of a neutral (gray) card with a reflectance of 18% (that would be 82% gray, but is often called "18% gray") as an "exposure metering target".

There are several ways we can look at this practice. One is that it converts the camera's "reflected-light" metering scheme to metering on the "incident light" principle. But then, if we seek to further analyze the implications of this, we encounter the rather ambiguous ISO standards for incident light metering.

Another way is much more intuitive. Basically, if we make our exposure on the basis of "metering" on an 18% reflectance neutral target, then:

If in the actual scene, the maximum reflectance is about 5.6 times the average reflectance (1/0.18) (as for the "classical" case in which the average reflectance is 18% and the maximum reflectance is 100%), then, on the sensor, the maximum photometric exposure will be about 1/2 stop short of "saturation".

So what if the scene maximum reflectance is not about 5.6 times the average reflectance? Well, then we would get a different result. Will that be satisfactory? Maybe not so much.

Best regards,

Doug
 

Jerome Marot

Well-known member
Now, what is quite amusing is that photographic units are powers of two: for each stop, we double or halve the light. Shutter speeds go in powers of two. EV and ISO go in powers of two, etc...

Everything becomes much simpler if we count in powers of two, and is equivalent to the bit depth of our "photon counter".

2^16 is equal to 65536, so with a 16-bit counter we can count from 1 photon to 65536 photons. 2^14 (most camera "counters" are 14 bits) is 16384, so we can count from 1 photon to 16384 photons, or from 4 photons to 65536 photons in groups of 4 photons. Etc...

If we go back to our ISO values, we have, in powers of two:

ISO     Photons   Bits
100      60 000   16 (or 14 if we count groups of 4)
200      30 000   15 (or 14 if we count groups of 2)
400      15 000   14
800       7 500   13
1600      3 750   12
3200      1 875   11
6400        937   10
12800       469    9
25600       235    8
51200       117    7
102400       58    6
204800       29    5
409600       14    4

Surprisingly simple, actually.

Interestingly, the right hand column directly gives us the maximum dynamic range (in EV) that is in our source signal for a given ISO value (the limit that physics dictates for a "perfect" camera and lens).

Interestingly as well, we see that for a camera with a 14-bit ADC, we digitise the complete signal from ISO 400 upwards. Therefore we should get the same results at any sensitivity above ISO 400 by taking a picture at ISO 400 and raising the values on the computer.

Last but not least, all this is for a sensel size of 6µm. If we have 3µm sensels, the sensel surface is 4 times smaller and 14 bits are sufficient. If we have 1.5µm sensels (about the size of those in smart phones), 12 bits are enough (and 12 EV is also the maximum dynamic range).
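The bits column is simply ceil(log2(photons)). A sketch reproducing it from the 60 000-photon ISO 100 starting point used above:

```python
import math

photons = 60_000.0   # highlight count for a 6 um sensel at ISO 100
iso = 100
bits_needed = {}
while iso <= 409_600:
    # Smallest counter width that can hold the photon count at this ISO.
    bits_needed[iso] = math.ceil(math.log2(photons))
    print(f"ISO {iso:>6}: {bits_needed[iso]:>2} bits")
    iso *= 2
    photons /= 2
```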
 

Doug Kerr

Well-known member
Hi, Jerome,

Thanks for this worthwhile analysis.

Interestingly, the right hand column directly gives us the maximum dynamic range (in EV) that is in our source signal for a given ISO value (the limit that physics dictates for a "perfect" camera and lens).

That assumes a definition of "dynamic range" that flows directly from numerical resolution considerations (a very useful metric). Of course, often different definitions of "dynamic range" are used with respect to digital camera performance.

Best regards,

Doug
 

Jerome Marot

Well-known member
That assumes a definition of "dynamic range" that flows directly from numerical resolution considerations (a very useful metric). Of course, often different definitions of "dynamic range" are used with respect to digital camera performance.

No, not really. All the time, I am analysing the physical signal which, due to the quantum nature of light, has inherent quantisation noise (often called photon shot noise). And the "dynamic range" in this exercise is what we have in the physical signal, for a hypothetical perfect camera.
 

Doug Kerr

Well-known member
Hi, Jerome,

No, not really. All the time, I am analyzing the physical signal which, due to the quantum nature of light, has inherent quantisation noise (often called photon shot noise). And the "dynamic range" in this exercise is what we have in the physical signal, for a hypothetical perfect camera.

I understand, and you use the term "dynamic range" in a way that is consistent with those physical concepts.

But, the metric "dynamic range" as reported for digital cameras is often pursuant to an ISO standard definition in which the "low end" (denominator!) is defined in terms of a certain signal-to-noise ratio.

I also caution against the equating of "quantization noise" with "shot noise".

"Quantization noise" is a concept that takes quantization error (which is not a creature of random uncertainty in the quantity being measured) and characterizes it as if it were "noise", the point being (in, for example, digital audio systems) to allow us to quantify the impact of quantization error in a way that is comparable to the way we quantify noise.

It is a concept that, in my opinion, would best not have been transported from the fields in which it is useful into digital photography.

The matter is discussed in this article:


Shot noise is of course a bona fide random component in the quantity being measured, to which you have nicely referred.

Best regards,

Doug
 