
Impact of 14-bit ADC in the EOS-1D Mk III?

While some could say this isn't totally scientific, it illustrates what happens when the rubber meets the road. It also confirms, unless otherwise dismissed, my feeling that this is more about marketing. We are in agreement.

Hi Andrew:

Well, it wasn't like science was entirely disregarded. The additional two bits were SO transparent in the output files we actually questioned whether or not they existed; in fact, using special S/W tools the presence of the additional two bits was confirmed.

I did notice something interesting about the banding I discussed in my previous post. The "average" intensity change from one band to another seemed to be approximately 4 counts out of 256 (0-255 range); this worries me more than just a little bit because the band-to-band transition is such a large change. This banding makes me wonder if Canon is applying internal noise reduction within bands of intensity (below the mid-tones and down to the shadows) in the "real RAW" data before actually writing what we see as RAW data into user memory (e.g. our CF modules). Maybe the banding is the result of the "internal noise reduction" I'm referring to (perhaps a poor transition from one band of NR to the next, I just don't know).

A real mystery to me is Canon's 1DmkIII white paper: Canon claims no improvement in the sensor's DR compared to the 1DmkII (N or non-N version); maybe it is because they can't (i.e. there is no DR improvement at low ISO settings). So, how does a camera achieve better noise performance at high ISO settings without Canon claiming an improvement in DR at high ISO settings (I mean, Madison Avenue would never miss an opportunity like this, right)? I'm guessing that Canon isn't claiming an increase in high ISO DR because, again, they can't; they couldn't claim an improvement in the sensor's DR if all they are doing is using some of that Dual DIGIC III horsepower to implement crude noise reduction in the dark part of the image, at the expense of shadow detail and maybe the creation of these bands I see when there is a smooth transition from the shadows up to the mid-tones. An Imatest DR analysis at high ISO settings will probably show an increase in DR because of how DR is measured by that tool. Maybe Canon is content to let the user community "think" there is an improvement in high ISO DR without making a false claim about the sensor (I just don't know).

Oh well, the heck with it, I'm more interested in taking pictures and all this mental anguish over the technical aspects of cameras (ones I own and the 1DmkIII, one I would like to buy) doesn't help my composition or catching a potentially nice scene out of the corner of my eye. I'm ten days away from a three week photography trip and still confirming fifty million reservations - I'm back to work!

Regards,

Joe Kurkjian
 
The "average" intensity change from one band to another seemed to change by approximately 4 counts out of 256 (0-255 range);

That would be consistent with some ISO setting higher than 800. I don't know exactly how much higher (ISO 3200?), because I haven't seen the proper Raw files yet to determine it.

A real mystery to me is Canon's 1DmkIII white paper Canon claims no improvement in the sensor's DR compared to the 1DmkII (N or non-N version); maybe it is because they can't (i.e. there is no DR improvement at low ISO settings).

Dynamic range is the ratio between the saturation level and the read noise level. The saturation level depends on the potential well depth, which in turn is related to the actual surface area of a sensel. The read noise level depends on the ISO setting, and is most likely lowest at an ISO 'L' or 100 setting. Canon claims* that despite the reduced sensel pitch, the active area of the sensel is the same between the 1DMk2 and 1DMk3. Therefore a dynamic range increase can only come from lower read noise. Quantizing the signal with 14 bits allows low signal levels to be distinguished from read noise more accurately.

*Quote from the White paper, page 10:
"Although the pixel size of the 1D Mark III is 1 micron smaller than the pixel size of the EOS-1D Mark II and Mark II N, the photodiode size of both sensors is the same, thanks to the optimized photodiode construction and more sophisticated processing of the 1D Mark III."
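Bart's definition can be put into a few lines of code. A minimal sketch of the arithmetic only; the full-well and read-noise electron counts below are purely illustrative, not Canon specifications:

```python
import math

def dynamic_range_stops(full_well_e, read_noise_e):
    """Dynamic range in stops: log2 of saturation signal over read noise."""
    return math.log2(full_well_e / read_noise_e)

# Illustrative electron counts only, not measured Canon values:
dr_high_noise = dynamic_range_stops(50000, 30)  # baseline read noise
dr_low_noise = dynamic_range_stops(50000, 15)   # read noise halved
# With the same full well, halving the read noise adds exactly one stop
```

With the sensel area (and thus the full well) unchanged between the two bodies, any DR gain has to show up in the denominator, which is Bart's point.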


An Imatest DR analysis at high ISO settings will probably show an increase in DR because of how DR is measured by that tool.

There's nothing wrong with the method that Imatest uses. What one needs to understand is that the flare resistance of the lens used will influence the outcome of such a single-shot step-wedge test; test-setup flaws and all will be accurately shown. That means that a 1-stop higher Imatest DR result from the 1DMk3 is in line with expectations.

Bart
 

Doug Kerr

Well-known member
Pushability of 14-bit raw files

Will Thompson reports to me that he finds that 14 bit raw outputs can be successfully "pushed" farther (to recover from underexposure) than 12-bit raw outputs. I believe his criteria for "successful" includes consideration of noise in the shadow detail in the developed image.

I haven't yet had a chance to examine his results, nor to develop a conceptual rationale for the behavior he reports.

Best regards,

Doug.
 

Doug Kerr

Well-known member
Hi, Bart,

Dynamic range is the ratio between the saturation level and the read noise level. The saturation level is dependent on the potential well depth, which in turn is related to the actual surface of a sensel. The read noise level depends on the ISO setting . . . .

As of course does (by definition) the saturation level, which (under the ISO definition of dynamic range) is not for the sensel itself, but for the combination of sensel, amplifier, and ADC scheme. Thus, saturation occurs (for a particular "ISO" setting) when the ADC delivers its full-scale output.

By your use of "read noise" I assume you mean to exclude other components of the noise. But I'm not sure I understand exactly what read noise is anyway. Can you help me out here?

Thanks.

Best regards,

Doug
 
By your use of "read noise" I assume you mean to exclude other components of the noise. But I'm not sure I understand exactly what read noise is anyway. Can you help me out here?

Hi Doug,

In practical terms, because we don't have direct access to the voltage out of the sensor array, it means:
The quantized noise we can read from a Raw file (before demosaicing) which was virtually unexposed (no lens, body cap on, viewfinder closed, no noise reduction) at the shortest possible exposure time (1/8000 s) to avoid dark current as much as possible.

That will represent the noise that was generated by just reading the sensels (and quantizing them, the ADC stage). That is, within our capabilities to investigate, what I consider "Read noise", from what is also called a "Black frame". One could also call it "Black frame noise", which is probably even more correct.

A similar reading can be obtained from a longer 'exposure' time than 1/8000 s, but then there will be some additional (thermal) Dark current noise, hence such an image is called a "Dark frame", to make the distinction. The difference between Read and Dark noise as we can detect it, will grow exponentially with 'exposure' time.

If one wants to get very reliable results, only representing random noise, one can subtract (with an appropriate offset to avoid negative numbers) two such Black or Dark frames, which will remove systematic (non-random) noise patterns, such as hot/stuck sensels, banding, or amplifier glow.
Because subtracting two random noise sources increases the resulting standard deviation, one has to divide the standard deviation of the subtraction result by Sqrt(2). That, by the way, is why no noise reduction in the shape of Dark frame subtraction must be used for this test: it will not only 'remove' hot sensels, but also increase the Random noise.
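Bart's black-frame procedure can be sketched numerically. A hedged illustration with synthetic data; the fixed-pattern and noise levels below are invented for the demo, not measured from any camera:

```python
import numpy as np

rng = np.random.default_rng(42)

# Two synthetic "black frames": the same systematic fixed pattern plus
# independent random read noise (values in DN, invented for the demo)
fixed_pattern = rng.integers(0, 5, size=(100, 100)).astype(float)
sigma_read = 2.0
frame1 = fixed_pattern + rng.normal(0, sigma_read, (100, 100))
frame2 = fixed_pattern + rng.normal(0, sigma_read, (100, 100))

# Subtracting the pair cancels the systematic pattern; the two random
# components add in quadrature, so divide the std dev by sqrt(2)
diff = frame1 - frame2
read_noise_estimate = diff.std() / np.sqrt(2)
```

The estimate lands back near the 2.0 DN that went in, while the fixed pattern (hot sensels, banding) drops out of the difference entirely.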

Bart
 

Doug Kerr

Well-known member
Hi, Bart,

Hi Doug,

In practical terms, because we don't have direct access to the voltage out of the sensor array, it means:
The quantized noise we can read from a Raw file (before demosaicing) which was virtually unexposed (no lens, body cap on, viewfinder closed, no noise reduction) at the shortest possible exposure time (1/8000 s) to avoid dark current as much as possible.

That will represent the noise that was generated by just reading the sensels (and quantizing them, the ADC stage). That is, within our capabilities to investigate, what I consider "Read noise", from what is also called a "Black frame". One could also call it "Black frame noise", which is probably even more correct.

Thanks for clarifying that.

As I'm sure you know, the ISO definition of dynamic range is conceptually the ratio of the saturation photometric exposure to the photometric exposure at which the signal to noise ratio (observed in the digitized output, of course, but considered as the RMS variation in photometric exposure that would produce the observed RMS variation in digital output) would be unity.

However, at such a low photometric exposure, the actual observed noise will typically be understated owing to the clipping of the signal-plus-noise voltage resulting from the fact that the sensel voltage cannot go negative.

Thus, we instead observe the noise (again, considered on a photometric exposure basis) at a specified modest photometric exposure (one high enough so that the noise is not substantially "clipped" by the fact that sensel voltage cannot go negative). That RMS noise value is considered to be numerically the same as the photometric exposure at which the signal to noise ratio would be unity were it not for zero-clipping.
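Doug's description of the ISO method reduces to simple arithmetic once a linear response is assumed. A sketch with hypothetical numbers; the sensitivity constant k and the noise figure are assumptions, not values from the standard:

```python
import math

# Assume a linear response: DN = k * photometric exposure (k hypothetical)
k = 4000.0          # DN per unit photometric exposure
sat_dn = 16383      # 14-bit full-scale output
noise_dn = 3.0      # RMS noise measured at a modest exposure, in DN

# Refer both quantities back to a photometric-exposure basis
sat_exposure = sat_dn / k
noise_exposure = noise_dn / k  # exposure at which SNR would be unity

# With a linear response k cancels, so DR is just sat_dn / noise_dn
dr_ratio = sat_exposure / noise_exposure
dr_stops = math.log2(dr_ratio)
```

Measuring the noise at a modest exposure and referring it back down, rather than measuring in the zero-clipped shadows, is exactly the workaround Doug describes.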

In any case, the noise considered there is not what you characterize as "read noise".

Thanks for your help here.

Best regards,

Doug
 

Doug Kerr

Well-known member
Further on the ISO definition of dynamic range

In my preceding note, I said:

. . .the ISO definition of dynamic range is conceptually the ratio of the saturation photometric exposure to the photometric exposure at which the signal to noise ratio (observed in the digitized output, of course, but considered as the RMS variation in photometric exposure that would produce the observed RMS variation in digital output) would be unity.

A further comment:

This definition of dynamic range is based on the (highly-arbitrary) notion that the "darkest" acceptable portion of an image would be that where the SNR was 1.0.

I don't know if this choice was justified by any extensive subjective testing or if it just seemed like a handy criterion. It's a suspiciously-tidy number!

And at this point I'm not even sure what an image area with an SNR of 1.0 typically looks like. Probably pretty nasty!

Best regards,

Doug
 
Bart, a couple of comments.

You wrote:
"I don't know exactly how much higher (ISO 3200?), because I haven't seen the proper Raw files yet to determine it."

Well, ISO is part of the issue I have in my head, the shots I'm talking about were taken at an ISO-100 setting. You know, I sure wish a Canon engineer was around to explain the ENTIRE process prior to writing the final RAW file into memory. Speculating can be fun but after a while it becomes a real drain.

--------------------------------------------------------------------------------------------------

You wrote:
"Therefore a dynamic range increase can only come from lower read noise. Quantizing the signal with 14-bits will allow to more accurately distinguish between low signal levels and read noise."

Distinguishing a difference between low signal levels and read noise is one thing, and calculating a ratio relative to the maximum signal is quite another. For a second here let us just talk about one more bit, say bit #13. "If" the resolution of bit #13 is on the order of 1/32 of the read noise, then our newfound ability to read the noise more accurately will "on average" not significantly impact the measured value of noise, and therefore our ability to compute S/N or DR (based on the largest measured signal amplitude). I'm trying to be a little practical here; the DR is already pretty low (i.e. noise is unfortunately very high), and if the "value of noise amplitude" is accurate to 3% versus 1.5% it will not greatly affect the final DR number in f/stops.

There is another point I'd like to make here. A 14-bit A/D implies monotonicity and 1/2-bit accuracy for the LSB (well, one can only hope), and therefore by default the 12th bit must be accurate to 1/8 of an LSB. On average, 14-bit accuracy for the 12th bit should provide sufficient accuracy if we limited ourselves to a 12-bit output (now with 1/8-bit accuracy); this way we have a great 12-bit converter and don't have to pay the penalty associated with the larger output file size of the full 14 bits.
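Joe's idea of reusing the upper 12 bits of a 14-bit conversion can be made concrete with a bit shift. A minimal sketch, nothing Canon-specific:

```python
def top_12_of_14(code14):
    """Drop the two least significant bits of a 14-bit code."""
    assert 0 <= code14 <= 0x3FFF  # 14-bit range
    return code14 >> 2

# One 12-bit LSB spans four 14-bit counts, so a converter that is
# accurate to 1/2 LSB at 14 bits is accurate to 1/8 LSB in 12-bit terms.
counts_per_12bit_lsb = 4
full_scale_12 = top_12_of_14(0x3FFF)  # 12-bit full scale
```

This is the sense in which a 14-bit design yields a "great 12-bit converter" for free, at the cost of having built the 14-bit front end in the first place.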

---------------------------------------------------------------------------------------------------

You wrote:
"What one needs to understand, is that the flare resistance of the lens used will influence the outcome of such a single shot step wedge test, test setup flaws and all will be accurately shown. That means that a 1 stop higher Imatest DR result from the 1DMk3 is in line with the expectations."

Let us say (1) we have a light table set up vertically and our step wedge has an optical block around it such that the only light source seen by the camera is that of the wedge itself and (2) we use a 300mm or 400mm lens to shoot the wedge. Are you implying (1) the flare of the lens or (2) the stray light in the sensor cavity is going to be a limiting factor in a DR calculation? Also I don't understand the basis of your second sentence. Why is "1 stop" in line with expectations? What expectations are you talking about (are you referring to your expectations at some point in time)?

Sorry to be slow on the uptake and ask so many questions but this is a pretty interesting subject; thanks in advance for any additional comments you have.

Regards,

Joe Kurkjian
 

Doug Kerr

Well-known member
Hi, Joe,

There is another point I'd like to make here. A 14-bit A/D implies monotonicity and 1/2 bit accuracy for the LSB (well, one can only hope) and therefore by default the 12th bit must be accurate to 1/8 of an LSB.

I'm not sure I understand. (I assume you are counting bits from left to right, so that bit 14 is the LSB).

Indeed, we assume that the quantizing error of the whole number (not just the LSB) is ±1/2 LSB. Why would a bit whose weight is 4 times that of the LSB (bit 12) need to have an accuracy of 1/8 LSB? What would that even mean?

On average 14 bit accuracy for the 12th bit . . .

What do you mean by that?

You seem to be making an interesting point, but I can't follow your notation.

Thanks.

Best regards,

Doug
 
To Doug

You wrote:
"Why would a bit whose weight is 4 times that of the LSB (bit 12) need to have an accuracy of 1/8 LSB? What would that even mean?"

I'm talking about the accuracy of a 12-bit converter comprised of the upper 12 bits of a 14-bit converter. Let's back off a little and let me say a couple of things. Greater accuracy, increasing in a binary-weighted fashion for the higher-order bits, is necessary to ensure the A/D process meets the requirement for 1/2-LSB accuracy "and" monotonicity throughout the analog input range. A/D resolution by itself is meaningless without tying in the requirements of accuracy and monotonicity, again, throughout the analog range.

This means you now have to begin to think of the A/D converter output as the digital input to a D/A (whose input is the "D" part of the A/D output we are talking about), whose output is compared to the analog input. Very simply stated, the analog output of the D/A is driven to equal the analog signal we are converting, and once equality is achieved the digital input to the D/A is sampled and becomes the digital output of the A/D.

So let us talk about a 4-bit D/A whose job is to produce a 0-15V signal in 1V increments with a +/- 1/2 volt accuracy at any count (a very strict requirement to ensure monotonic behavior). If you bear with me and assume the error for the LSB is +/- 1/4V, the next least significant bit is +/- 1/8V, the next is +/- 1/16V, and the MSB is +/- 1/32V, you can see that you converge to +/- 1/2V accuracy throughout the entire 15V analog range, provided the error budget is binary weighted. This is the only way I know of to meet accuracy requirements with a standard binary D/A converter and simultaneously guarantee monotonic behavior.

BTW, from this point forward start thinking about the numbers above (and below) as percent of full scale.

Working with the example above, now let us try to make two separate 2-bit D/A converters to produce a 3V maximum output with one volt increments, one using the lower two bits of the four-bit design and the second using the upper two bits of the four-bit design. The design using the lower two bits will have a maximum error of +/- 1/4V +/- 1/8V, for a worst-case error of +/- 3/8V; if you started out in life with a 2-bit D/A design requirement, nobody would bother making it more accurate than this (make believe this is the 1DmkII A/D design). The design using the upper two bits will have a maximum error of +/- 1/16V +/- 1/32V, for a worst-case error of +/- 3/32V; think of this as the 1DmkIII design with the lower two bits tossed out (the improved 2-bit accuracy is free if for some reason you were forced into a four-bit design and learned the lower two bits were not "really" necessary).
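Joe's error budget is easy to check with exact fractions. A sketch of the arithmetic only; the +/- bounds are the assumptions from his 4-bit example:

```python
from fractions import Fraction

# Binary-weighted error budget for the 4-bit, 0-15V DAC example,
# listed LSB first: +/-1/4V, +/-1/8V, +/-1/16V, +/-1/32V
bit_errors = [Fraction(1, 4), Fraction(1, 8), Fraction(1, 16), Fraction(1, 32)]

total_error = sum(bit_errors)                   # 15/32 V, just under 1/2 V
lower_two_bits = bit_errors[0] + bit_errors[1]  # "1DmkII-style" 2-bit DAC
upper_two_bits = bit_errors[2] + bit_errors[3]  # "1DmkIII-style" 2-bit DAC
```

The upper-two-bit design comes out four times more accurate (3/32V versus 3/8V worst case), which is the "free" accuracy Joe describes.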

Would it ever make sense to do what I've discussed above? When would you really want a super duper 2-bit D/A converter? Well, suppose you had an analog driven stroke (as versus raster) display and you wanted to display four dots with VERY equal separation between them. I might choose a 10-bit D/A and drive zeros into the lower 8 bits (or even a 12-bit and discard the lower 10 bits).

----------------------------------------------------------------------------------------------

You wrote:
"What do you mean by that?" referring to my "On average 14 bit accuracy for the 12th bit . . . "

Well, the noise input is random, but over any period of time voltage fluctuations of very nearly identical amplitudes will occur over and over again; pick any noise amplitude and, depending on how close that amplitude is to the RMS value, it may be repeated quite often or rarely, but it will be nearly exactly repeated if you look for a long enough period of time.

Slightly OT for a second: over any given bandwidth, the RMS value of noise, and to within certain probabilities the peak-to-peak value of that noise signal, can be computed by integrating the noise power spectral density over the signal bandwidth into the A/D; I mean the calculation will very closely match what you would get taking a reading with a true RMS meter.

Okay, if you have an inaccurate A/D with limited resolution, an input noise fluctuation will sometimes appear much brighter or darker than it should be; thus the noise fluctuations become more obvious. This is one thing that has bothered me when looking at noise. Now, if during the time the shutter is flying you have enough samples (i.e. time to form an average), a higher (fractional) resolution can be computed from a more accurate lower-resolution conversion (you switch to a much higher precision calculation and then average), and this should not affect the DR calculation. I "think" Canon is employing some form of internal noise reduction prior to writing the final RAW data into user memory, and this to me means some form of averaging and then rounding off to 14 bits (but something is wrong, because I'm seeing banding jumps that IMO should not be there; the 1DmkII and 1DmkIII behave identically and equally badly).
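The averaging idea Joe sketches is the classic dither-and-average effect, and it can be demonstrated in a few lines. The signal level and noise figure below are invented for the demo:

```python
import numpy as np

rng = np.random.default_rng(0)

true_level = 10.3   # true signal in LSB units; note the fractional part
noise_sigma = 1.0   # ~1 LSB of random noise acts as natural dither

# A single quantized reading can only land on an integer count...
one_reading = np.round(true_level + rng.normal(0, noise_sigma))

# ...but averaging many dithered readings recovers the fraction
readings = np.round(true_level + rng.normal(0, noise_sigma, 10000))
averaged = readings.mean()  # close to 10.3
```

Any given reading is a whole count, yet the average of many converges on 10.3: sub-LSB resolution bought with samples, which is the trade Joe speculates Canon might be making before the RAW write.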

Regards,

Joe Kurkjian
 
You know, I sure wish a Canon engineer was around to explain the ENTIRE process prior to writing the final RAW file into memory. Speculating can be fun but after a while it becomes a real drain.

Yes, although we can come quite a way by careful testing. Maybe we can even get further than a Canon engineer is willing or allowed to disclose. A final "we agree with the assessment" would be nice though, I agree.
--------------------------------------------------------------------------------------------------

Distinguishing a difference between low signal levels and read noise is one thing and calculating a ratio relative to the maximum signal is quite another. For a second here let us just talk about one more bit, say bit #13.

I'll leave that part for now, since Doug already responded. No need to create double branches of a thread.

---------------------------------------------------------------------------------------------------

Let us say (1) we have a light table set up vertically and our step wedge has an optical block around it such that the only light source seen by the camera is that of the wedge itself and (2) we use a 300mm or 400mm lens to shoot the wedge. Are you implying (1) the flare of the lens or (2) the stray light in the sensor cavity is going to be a limiting factor in a DR calculation?

Not exactly. What I meant is that when an image contains both the brightest and darkest parts of the dynamic range, the lens itself becomes a factor in what's sometimes assumed to be a test of the sensor array alone. We can accept that influence of the lens in the equation, which is perfectly valid since it is a fact of life; we can't avoid using a lens. However, we should be careful in attributing the observed result to the sensor array alone. It is as much a test of the lens as it is of the sensor array; in fact it is a system test. The Imatest software will show a non-linear 'toe' area of the density curve due to internal reflections in the lens, which affect the darkest parts of the image relatively the most. In fact, Imatest will show the difference two lenses can have on the same setup. Despite coatings, lenses with more groups/elements will in general show more flare than lenses with fewer.
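Bart's flare point can be simulated with a toy step wedge. A sketch assuming a constant veiling-glare level; the 0.2% figure is invented for the demo:

```python
import math

# Ideal step wedge: each step one stop darker, 11 stops end to end
steps = [2.0 ** -i for i in range(12)]

# Veiling glare: the lens spreads a small constant amount of light
# over the whole frame (0.2% of full scale, an assumed figure)
flare = 0.002
observed = [s + flare for s in steps]

true_dr_stops = math.log2(steps[0] / steps[-1])            # 11.0
observed_dr_stops = math.log2(observed[0] / observed[-1])  # well under 11
```

The darkest steps are lifted the most, producing exactly the non-linear 'toe' Bart mentions, so the measured range depends on the lens as much as on the sensor array.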

Also I don't understand the basis of your second sentence. Why is "1 stop" in line with expectations? What expectations are you talking about (are you referring to your expectations at some point in time)?

This thread showed Imatest results of, what turned out to be, a slightly underexposed step wedge shot. Even with the minor shortcomings, it showed an almost 1 stop higher DR than I have ever observed from a 12-bit ADC, in particular my 1DsMk2 which has the same sensel pitch.

Adding 2 bits of quantization doesn't necessarily equate to 2 stops of additional DR; there are other factors involved, such as analog gain. But we'll get to that with suitable Raw test images.

Bart
 

Doug Kerr

Well-known member
Hi, Joe,

You wrote:
"
So let us talk about a 4-bit D/A whose job is to produce a 0-15V signal in 1V increments with a +/- 1/2 volt accuracy at any count (a very strict requirement to ensure monotonic behavior). If you bear with me and assume the error for the LSB is +/- 1/4V, the next least significant bit is +/- 1/8V, the next least significant bit is 1/16V, and the MSB is 1/32V (now you can see that you are now converging to +/- 1/2V accuracy throughout the entire 15V analog range provided the error budget is binary weighted).

Ah, you're talking about an ADC where each bit is in effect an independent slicer working from attenuated copies of the input variable. Is that how ADCs are made these days? I really have no idea.

Thanks for clarifying your outlook.

Best regards,

Doug
 
Hi, Joe,



Ah, you're talking about an ADC where each bit is in effect an independent slicer working from attenuated copies of the input variable. Is that how ADCs are made these days? I really have no idea.

Thanks for clarifying your outlook.

Best regards,

Doug

No, not really "attenuated copies of the input variable"; "attenuated copies of a reference voltage" (for the D/A part of the A/D) would be a better way of making the statement. Using your terminology, attenuated (or amplified) slices of a reference voltage are combined to make the D/A output equal the sensor input to the A/D; then the digital input to the D/A is sampled, and this digital value becomes the A/D output.

Joe
 

Doug Kerr

Well-known member
Hi, Joe,

No, not really "attenuated copies of the input variable", "attenuated copies of a reference voltage" (for the D/A part of the A/D) would be a better way of making the statement. Using your terminology, attenuated (or amplified) slices of a reference voltage are combined . . .

If the reference voltage corresponded to one unit, then these would be enlarged copies of it (for the higher portions of the DAC).

Thanks for clearing that up.

Best regards,

Doug
 
Hi, Joe,



If the reference voltage corresponded to one unit, then these would be enlarged copies of it (for the higher portions of the DAC).

Thanks for clearing that up.

Best regards,

Doug

No sweat Doug, actually they could be attenuated copies of the reference (the usual case) and quite often the reference isn't really a voltage but a current (there are so many different ways to skin the cat). Also I'd like to point out, in fact emphasize that I've been talking more from the standpoint of basic concepts versus what Canon "really" does. For the Canon camera application I would guess the A/D conversion would employ a form of flash conversion (but let's not go there). You get the general idea and that is exactly where we want to be. Only engineers at Canon would know exactly what technology is (or isn't) part of their camera design. We have to keep in mind that some functional elements are actually integrated into the sensor design and that is about the extent of my very limited insight into a Canon camera.

Please be aware my basic knowledge of Canon gear is very limited; they make good equipment and I'm thrilled with everything I own. However, I do know full well that I've made some of my good luck by carefully testing all the gear right off the bat (rulers, cars, etc.). LOL, my simple "basic photographer" philosophy is to forget about all the unnecessary technical stuff, read the manual, find/locate all of Canon's white papers, read all of CW's technical reports, take the picture, if good keep it and heap praise on myself, and if bad blame Canon; this approach, although unfair from Canon's standpoint works well on just about every forum. :) Sometimes I think putting all the blame on Canon (and some of it is deserved) is like walking into a room full of people you have never met and loudly proclaiming you hate income taxes and the entire IRS system; this isn't exactly fair but you can quickly make a lot of friends. :)

Joe
 

Doug Kerr

Well-known member
The ISO definition of dynamic range

In a number of earlier notes in this thread, I have muttered about the ISO definition of the dynamic range of a digital camera.

For those who might like to know more (and it's not a pretty story!), the topic is covered at length in my tutorial article, "The ISO Definition of the Dynamic Range of a Digital Still Camera", available here:

http://doug.kerr.home.att.net/pumpkin/index.htm#ISO-DR

Best regards,

Doug
 

Tim Gray

New member
From a pure experience perspective, I've always tended to bracket HDR-type stuff a bit more than necessary. With the 1D3 I'm able to get excellent results from the "properly exposed" image of the bracketed set noticeably more often than with the 1D2.
 
Who was looking for 14 bit 1DMKIII/40D raw files?

Depending on the images, I wouldn't mind having access.

John Nevill has kindly made some test file pairs (Black noise frames) available. I'm not sure if he got around to doing the saturation-level shot sequence pairs before he had to return the camera to its owner. If not, it would be helpful to be able to analyze some of those.

If you are able to shoot a nice blue sky gradient with the Mark III at the various ISOs (Raw and sRaw), that would be helpful as well (I've seen reports of BIF shooters that rave about sky and OOF rendition). If you happen to be in a position to do the same with a 1DMkII then that would make for a wonderful comparison (it might also help Doug a bit for his exposure/sensitivity difference quest, but see the above referenced thread for what he really wants).

Bart
 
Bart, can you give me step by step instructions for both cameras for what you need.

Sure.
1. A blue sky gradient (e.g. at 90 degrees from the sun angle) with both cameras, at all ISOs, as Raw files.

From this post: numbers 2 and 3 from the Mark III are not yet available at this time.

So to recap:
2. Subject is a uniformly lit surface, shot slightly OOF to suppress any surface structure, at aperture f/5.6.
The exposures are made in pairs of identical exposure (wait for the card to finish its writing activity between shots), without any noise reduction. The pairs are shot in a sequence of 1/3rd-stop increasing exposure time, starting at the average metered exposure +2EV for each ISO used, up to the point of total clipping of all color channels (probably somewhere between +3EV and +4EV, maybe as high as +5EV).

ISO 800 is very useful (especially when shot as both Raw and sRaw). ISO "L" and ISO 100 (just Raw is sufficient) are very welcome to augment the ISO 800 usefulness even further, and determine maximum useful camera ISO and maximum dynamic range.

After evaluation, a "unity gain" ISO can be determined for verification of the hypothesis, and the higher ISOs can be compared against a post-processing-pushed version of the Unity Gain ISO.

3. It would also answer a few questions if e.g. the ISO 800 1/3rd stop exposure sequence is also executed with the highlight tone priority activated, but at exactly the same manual exposure as without that function.

That material would probably allow us to answer the most frequently asked questions about image quality, but each of the 2 remaining subjects mentioned will be quite useful by itself.

This is becoming an OPF group effort, very nice. I'll do the number crunching part, which may take some time, but the results will be independently verifiable by anyone using the same software, once we have a set of Raws.

Thanks to all participants,
Bart
 

Doug Kerr

Well-known member
Hi, Bart,

After evaluation, a "unity gain" ISO can be determined for verification of the hypothesis, and comparisons between the higher ISO's versus a post processing pushed version of the Unity Gain ISO can be compared.

What do you mean by a "unity gain" ISO? Would that be the hypothetical "ISO" setting at which saturation of the digital output fell at the same photometric exposure level as the actual ultimate saturation (well saturation) of the sensor proper? Or do you mean something else?

Best regards,

Doug
 
What do you mean a "unity gain" ISO?

Hi Doug,

At the lower ISOs, like ISO 100, it requires some 18 photons (or, after conversion, electrons) to change the 12-bit Digital or Data Number (DN, also known as an Analog-to-Digital Unit or ADU) by 1 unit in my 1DsMk2. That is due to the analog gain and associated amplifier-circuit setting of the ADC at that ISO setting, as far as I can derive from a series of 1/3rd-stop exposures near the sensel saturation level (full potential well in CCD terminology).

That analog gain setting has to do with the full well capacity, the Dynamic Range, and the ADC bit depth, at a given ISO setting. There are several ways of determining it, and it's described by several (usually astronomy-related) sources*. I personally use the method of linear regression through nearly saturated DNs (and thus with known photon shot noise).

The "unity gain" level is the level of gain/amplification where 1 single converted photon or electron (-eV) will cause a change of 1 DN or ADU (within the bit precision of the ADC). In my 1DsMk2 it is somewhere in the ISO 1000 region. Any further (in camera) amplification will not benefit us in the slightest, because for the lack of meaningful signal it will only amplify noise generated by the camera electronics itself. So from that amplification level setting (governed by ISO setting) onwards, it's beneficial to resort to pure software 'pushing' of the recorded real signal.

Hope that helps,
Bart


* A couple of links for more background on Analog Gain, and such:
http://www.mirametrics.com/tech_note_ccdgain.htm
http://www.clarkvision.com/imagedetail/evaluation-1d2/index.html (for analog gain in particular, see reference #1 at the bottom of the page).
http://astrosurf.com/buil/5d/test.htm (point 6, in French).
 

Doug Kerr

Well-known member
Hi, Bart,

The "unity gain" level is the level of gain/amplification where 1 single converted photon or electron (-eV) will cause a change of 1 DN or ADU (within the bit precision of the ADC).

Ah, so. That makes sense. Thanks.

In my 1DsMk2 it is somewhere in the ISO 1000 region.

Interesting. I had no idea.

Any further (in-camera) amplification will not benefit us in the slightest, because, lacking additional meaningful signal, it will only amplify noise generated by the camera electronics itself. So from that amplification level (governed by the ISO setting) onwards, it's beneficial to resort to pure software 'pushing' of the recorded real signal.

That all makes sense, too.

Thanks so much.

Best regards,

Doug
 

John Sheehy

New member
The "unity gain" level is the level of gain/amplification where 1 single converted photon or electron (-eV) will cause a change of 1 DN or ADU (within the bit precision of the ADC). In my 1DsMk2 it is somewhere in the ISO 1000 region. Any further (in camera) amplification will not benefit us in the slightest, because for the lack of meaningful signal it will only amplify noise generated by the camera electronics itself. So from that amplification level setting (governed by ISO setting) onwards, it's beneficial to resort to pure software 'pushing' of the recorded real signal.

I have a very hard time understanding why anyone would see any significance in "unity gain". The bit depth is totally arbitrary, and the analog noise is far greater than a single digital step. These cameras are *NOT* counting photons, not even close to doing so.

The "unity gain" of the 1Dmk3 is about ISO 300 - 400, yet the blackframe noise, in electrons, still drops beyond that.
 

Alan T. Price

New member
No real impact

There is practically zero noticeable impact from using 14-bit raw instead of 12-bit raw.

Shadow levels are still obscured by lack of sensor detail.
Highlight details are still pegged at about 3.8 stops above middle grey.

The 1D3 images are cleaner due mainly to improved processing in the camera by the Digic 3 processor, and this can easily lead one to think that the 14-bit data has had something to do with it. However, there is no more detail to be had - just less noise.

You are not getting two extra stops of dynamic range by having two extra bits of data.

The 14-bit data is used to describe the underlying physical sensor dynamic range with four times as many discrete levels for each colour channel, but at about 9 stops below maximum you run out of detail no matter how many bits are used. Also, by the time you convert the image to 8-bit for printing there is no advantage left.
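Alan's arithmetic can be spelled out. For a linear raw encoding, the stop N below saturation contains 2^(bits − N) distinct levels, so two extra bits quadruple the level count at every brightness - but deep in the shadows both encodings offer only a handful of levels, all swamped by sensor noise:

```python
# Levels available within each stop below saturation, for linear
# 12-bit vs 14-bit raw encoding. Stop N below saturation spans raw
# values [2**bits / 2**N, 2**bits / 2**(N-1)), i.e. 2**(bits - N)
# distinct levels.
for bits in (12, 14):
    for stop in (1, 5, 9):
        levels = 2 ** (bits - stop)
        print(f"{bits}-bit, stop {stop} below saturation: {levels} levels")
```

Nine stops down, 12-bit raw has 8 levels and 14-bit raw has 32 - a 4x difference in counting precision, but no extra stop of range, which matches Alan's point that more bits describe the same sensor dynamic range more finely rather than extending it.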

You could take a 1Ds2 image, clean it up, and produce the same or better print.

You are getting bigger raw files with 14-bit than with 12-bit, but that's all.

- Alan
 