D3 dynamic range

Emil Martinec

New member
I've made a preliminary analysis of the Nikon D3's dynamic range, using the Nikon D3 raw samples posted here:

http://aaronlinsdau.com/gear/articles/d3.html

The latest build of the raw converter dcraw can understand D3 raw files (though some of the conversion options seem buggy; I found one that gave me a linear 16-bit conversion I could analyze). Looking at the ISO 200 sample, I analyzed a few regions of the image, measured the noise in IRIS, and extrapolated the result to vanishing luminance. The dynamic range seems to be about 11.7 stops (read noise is about 4.9 in 14-bit raw levels, and the highlights clip at 16383 in those units).

I had to extrapolate down to zero raw level since Nikon clips the blackpoint, so the noise is not a Gaussian distribution but a half-Gaussian at the bottom end, and fluctuations are thereby clipped as well. This is a *very* rough measurement. A proper measurement would use a blackframe image, shot at a high shutter speed with a lens cap on the body. That will take a lot of the sloppiness out of the above analysis, but I would be surprised if the dynamic range is more than 12 stops. A similar analysis at ISO 1600 yields about 10 stops DR. Thus it would appear that the D3's dynamic range is about the same as Canon 1-series DSLRs.
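For reference, a quick back-of-envelope check of that 11.7-stop figure in Python (just the arithmetic from the numbers above, not the actual analysis):

    import math

    # Dynamic range in stops is the log2 ratio of the clipping point to the
    # read noise, both expressed in the same raw units.
    read_noise = 4.9        # extrapolated read noise, 14-bit ADU, ISO 200
    clip_level = 16383      # highlight clipping point, 14-bit ADU

    print(f"DR ~ {math.log2(clip_level / read_noise):.1f} stops")   # ~11.7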

Unfortunately the sample images are not appropriate for determining the quantum efficiency of the sensor, which to my thinking is one of the determining factors in high ISO performance.

The raw was shot in 12-bit mode rather than 14-bit mode (all raw levels were multiples of four in 14-bit units). It will be interesting to see whether an image shot in 14-bit mode has more dynamic range, since the 12-bit sample here has a quantization step that just about equals the noise. Theoretically that could get you a very tiny fraction of a stop more. Anyway, DR appears to roughly equal that of the Canon Mark 3 at ISO 200.

BTW, pattern noise (banding) seems rather well controlled (though visible in the deepest shadows if you boost them enough).
 

Jane Auburn

New member
I've quit worrying about the dynamic range of today's digital cameras. None of them can come close to what you can do with the HDR products currently on the market. Take a look at this tutorial on HDR.

http://photocamel.com/forum/hdr-high-dynamic-range-photography/28455-hdr-tutorial.html

You don't really have to end up with the "overprocessed" look that typifies HDR stuff.

I doubt even today's best digital cameras can come close to the dynamic range that you can get with software.
 

John Sheehy

New member
I doubt even today's best digital cameras can come close to the dynamic range that you can get with software.

Well, it's a given that a program can take two exposures and get more DR than one exposure has.

It's nice to get as much as possible in a single shot, however, as many subjects don't wait for you to take another shot while they freeze for you.
 

John Sheehy

New member
The latest build of the raw converter dcraw can understand D3 raw files (though some of the conversion options seem buggy; I found one that gave me a linear 16-bit conversion I could analyze). Looking at the ISO 200 sample, I analyzed a few regions of the image, measured the noise in IRIS, and extrapolated the result to vanishing luminance. The dynamic range seems to be about 11.7 stops (read noise is about 4.9 in 14-bit raw levels, and the highlights clip at 16383 in those units).

I had to extrapolate down to zero raw level since Nikon clips the blackpoint, so the noise is not a Gaussian distribution but a half-Gaussian at the bottom end, and fluctuations are thereby clipped as well. This is a *very* rough measurement. A proper measurement would use a blackframe image, shot at a high shutter speed with a lens cap on the body.

When you have a true blackframe, you need to multiply the sigma of the halved noise curve by the square root of 3 (~1.73) to get what it would be without the black clipping. The noise can't just be zeros and ones, of course, because the threshold is unknown, but when you have at least a few different values present, you can check to see if the noise is properly clipped. You need to subtract the non-zero population from zero population, and the result should be the same as the non-zero population. If it is not, then the data is not really clipped at black.

How did you extrapolate? One way of estimating blackframe noise from an actual exposure is to convert everything to units of photons or electrons and use the formula:

totalnoise^2 = readnoise^2 + averagephotons

Of course, you need a flat, out-of-focus area in only one CFA color channel to calculate this.

I prefer the actual blackframe method, but as I said, you must check that you truly have the proper blackpoint first.

Clipping at the blackpoint in RAW data is not a great idea, IMO. You should only clip at black just before going to a final RGB display, to keep blacks as black as possible, with a minimum of chromatic noise.
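For illustration, here is a minimal Python sketch of that noise-model estimate; the patch numbers are hypothetical placeholders, not measurements from this thread:

    import math

    # totalnoise^2 = readnoise^2 + averagephotons, everything in electrons.
    # Given a flat patch in one CFA channel (mean signal and std dev already
    # converted from ADU to electrons via the sensor gain), solve for the
    # read noise.

    def read_noise_from_patch(mean_electrons, total_noise_electrons):
        """Estimate read noise (electrons) from a flat, out-of-focus patch."""
        variance = total_noise_electrons**2 - mean_electrons  # remove shot noise
        if variance < 0:
            raise ValueError("shot noise alone exceeds measured noise; check gain")
        return math.sqrt(variance)

    # Hypothetical numbers, for illustration only:
    print(read_noise_from_patch(mean_electrons=400.0, total_noise_electrons=23.0))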
 

Emil Martinec

New member
When you have a true blackframe, you need to multiply the sigma of the halved noise curve by the square root of 3 (~1.73) to get what it would be without the black clipping.

I'm not sure where you're getting the sqrt(3) factor from. If you compute the std dev of a half-Gaussian, its width is sqrt(1 - 2/pi) ~ 0.603 of the full Gaussian's, i.e. about 1.66 times narrower. Of course that's within the error of what I did.
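A quick Monte Carlo check of that half-Gaussian factor in Python (a sketch added for illustration, not part of the original analysis):

    import numpy as np

    rng = np.random.default_rng(0)
    samples = rng.normal(0.0, 1.0, 1_000_000)
    half = samples[samples > 0]          # keep only the upper half of the Gaussian

    print(half.std())                    # ~0.603
    print(np.sqrt(1 - 2 / np.pi))        # 0.6028...
    print(1 / np.sqrt(1 - 2 / np.pi))    # ~1.66, the implied correction factor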

The noise can't just be zeros and ones, of course, because the threshold is unknown, but when you have at least a few different values present, you can check to see if the noise is properly clipped. You need to subtract the non-zero population from zero population, and the result should be the same as the non-zero population. If it is not, then the data is not really clipped at black.

I'm not following what you're trying to say here. Could you rephrase?

How did you extrapolate? One way of estimating blackframe noise from an actual exposure is to convert everything to units of photons or electrons and use the formula:

totalnoise^2 = readnoise^2 + averagephotons

Of course, you need a flat, out-of-focus area in only one CFA color channel to calculate this.

I prefer the actual blackframe method, but as I said, you must check that you truly have the proper blackpoint first.

Clipping at the blackpoint in RAW data is not a great idea, IMO. You should only clip at black just before going to a final RGB display, to keep blacks as black as possible, with a minimum of chromatic noise.

I took sample patches from the deepest shadows of the image and got the std dev in IRIS. I then plotted the results and did a straight-line fit. Since the scene in the file is of the display area of the camera demo site, it is not uniform; I used rather small out-of-focus patches to keep each selection as uniform in tonality as possible, so there is some scatter in the data because the patches were small. Here is the blue channel:

[Attached image: noiseplot.jpg -- patch std dev vs. raw level, blue channel]


The top plot is all the data points; the bottom plot threw out all the points with raw level below 20 in 15-bit units (this strange normalization comes about because the dcraw conversion I used, while linear, did a bit shift so that the raw levels ran from 0 to 32767). The horizontal axis is 15-bit raw level; the vertical axis is the std dev of the patch in the same units. You can see the lowest illuminated patches start to tail off in their std dev, since the width is approximately equal to the level and the clipping starts to become important. The straight-line fit of all the data is shown below the second plot; the fit of the data with the lowest few points excised is the bottom line. You can see that excising those few points revises the intercept upward by a little over one raw level in these units. The intercept without those lowest points is the blackframe noise, which is about 9.86, or 4.9 in 14-bit units.

The red channel gave a similar result. Note that since the max illumination level here is ~120 (60 in 14-bit units, 15 in 12-bit units), I think we're well into the regime where the read noise dominates, and the only effect of photon shot noise is the small drift upward of noise with illumination level, which is well approximated by a linear fit (if you like, I'm just expanding the square root of the two noises combined in quadrature). Actually I think this is probably a better way to determine the noise than blackframes -- you don't have to guess where the noise distribution was clipped (it might not have been clipped exactly at the midpoint of the Gaussian); you just have to throw out the data from the region where it is clipped. I'm sure that with images taken with this application in mind (e.g. an underexposed step wedge or ColorChecker chart) one could get a quite accurate measure of the read noise, better than this first attempt for the D3.
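To make the procedure concrete, a minimal Python sketch of the fit; the patch levels and std devs below are made-up placeholders, not the actual D3 measurements:

    import numpy as np

    # Fit std dev vs. raw level for deep-shadow patches and take the intercept
    # at zero level as the read noise.
    levels = np.array([25.0, 40.0, 60.0, 80.0, 100.0, 120.0])   # mean raw level, 15-bit units
    sigmas = np.array([10.2, 10.6, 11.2, 11.7, 12.3, 12.9])     # patch std dev, same units

    slope, intercept = np.polyfit(levels, sigmas, 1)            # straight-line fit
    print(f"read noise ~ {intercept:.2f} (15-bit units), "
          f"{intercept / 2:.2f} in 14-bit units")

    # Points with level below ~20 would be excluded first, since black clipping
    # suppresses their measured std dev and drags the intercept down.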

BTW, I was having problems with dcraw and the -D -4 -T options giving a nearly black-seeming output in IRIS. It turns out that the data is there, but IRIS seems to read only the highest 8 bits of the 16-bit TIFF for some reason. The workaround is to just use -D -4 to make a .pgm file, then use the pnmtofits conversion from the netpbm package (if you have access to a *nix machine, including Macs of course). Finally I can work on these files without destroying the CFA data. Dunno what happened; the TIFF output used to be fine for me, and now it doesn't even work with files I know I successfully did a linear conversion on previously. I don't think the problem is dcraw, since it also occurs when I convert the .pgm to TIFF and then feed that to IRIS.
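If you'd rather skip the netpbm step, here is a rough Python sketch for reading the 16-bit .pgm from dcraw -D -4 directly; the filename is hypothetical, and it assumes dcraw's plain three-line P5 header with no comment lines and big-endian 16-bit samples, as the PGM spec requires:

    import numpy as np

    def read_pgm16(path):
        """Read a binary (P5) 16-bit PGM such as the output of dcraw -D -4."""
        with open(path, "rb") as f:
            # Header: magic, "width height", maxval, each on its own line.
            magic = f.readline().split()[0]
            if magic != b"P5":
                raise ValueError("not a binary PGM file")
            width, height = map(int, f.readline().split())
            maxval = int(f.readline().split()[0])
            dtype = ">u2" if maxval > 255 else np.uint8   # >255 means 16-bit, big-endian
            data = np.fromfile(f, dtype=dtype, count=width * height)
        return data.reshape(height, width).astype(np.uint16)

    cfa = read_pgm16("d3_sample.pgm")    # hypothetical filename
    print(cfa.min(), cfa.max())          # full 16-bit range preserved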
 

John Sheehy

New member
BTW, I was having problems with dcraw and the -D -4 -T options giving a nearly black-seeming output in IRIS. It turns out that the data is there, but IRIS seems to read only the highest 8 bits of the 16-bit TIFF for some reason. The workaround is to just use -D -4 to make a .pgm file, then use the pnmtofits conversion from the netpbm package (if you have access to a *nix machine, including Macs of course).

Thanks for the tip. Everything got strange here recently, too. I always opened TIFFs and PGMs and PBMs OK in ImageJ, but the D300 and D3 conversions from DCRAW opened quickly on the screen, and then were replaced by a skewed, periodic pattern that seemed to have nothing to do with the image.

netpbm is available for Windows, BTW.

I've only looked at one D3 RAW image so far, ISO 1600. Looks like about 6700 photons full well at ISO 1600 (which would be 53500 at ISO 200), the highest quantum efficiency yet in a full-frame sensor; that's slightly more than double the Canon 5D, for a 1/2 stop lower shot noise. The read noise is about mid-way between the 5D and the 1Dmk3, at the pixel level. The camera should generally out-perform the 5D at high ISO. One little problem is the slight vertical banding near black in the image I looked at, but I don't know how the converters deal with that; I am just looking at the RAW.
 

Emil Martinec

New member
I've only looked at one D3 RAW image so far, ISO 1600. Looks like about 6700 photons full well at ISO 1600 (which would be 53500 at ISO 200), the highest quantum efficiency yet in a full-frame sensor; that's slightly more than double the Canon 5D, for a 1/2 stop lower shot noise. The read noise is about mid-way between the 5D and the 1Dmk3, at the pixel level. The camera should generally out-perform the 5D at high ISO. One little problem is the slight vertical banding near black in the image I looked at, but I don't know how the converters deal with that; I am just looking at the RAW.

Indeed, your result is interesting. I measured the QE of the 1Dmk3 to be about 5.1 or 5.2 electrons/14-bit ADU at ISO 100, which translates to about 39,500 electrons full well at ISO 200. Scaling up the 7.2-micron photosites of the 1Dmk3 to the 8.46-micron photosites of the D3 would give 54,600 electrons full well -- the same within experimental error. It seems, then, that the D3 and the 1D Mark 3 have about the same level of sensor technology, just different choices of sensor size and pixel pitch. The read noises are comparable too (with a very slight edge to Canon): I found 4.9 14-bit raw levels at ISO 200, 14.3 at ISO 1600, and 23.3 at ISO 3200 for the D3; and for the 1Dmk3, 4.9 at ISO 200, 12.1 at ISO 1600, and 23.8 at ISO 3200. So it looks as though the D3 will indeed be the high-ISO champion, by about 0.2 stop per pixel (and marginally better as a percentage of image area, due to having 20% more pixels).
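For reference, a quick Python sketch of the scaling arithmetic, using only the numbers quoted in these posts (the ~0.2-stop figure is interpreted here as the shot-noise advantage of the larger photon capacity, which is one plausible reading):

    import math

    fw_1dmk3 = 39_500                      # 1Dmk3 full well at ISO 200, electrons
    fw_d3 = 53_500                         # D3 full well at ISO 200 (John's figure)
    pitch_1dmk3, pitch_d3 = 7.2, 8.46      # photosite pitch, microns

    # Full well scales roughly with photosite area:
    scaled = fw_1dmk3 * (pitch_d3 / pitch_1dmk3) ** 2
    print(f"1Dmk3 full well scaled to D3 pitch ~ {scaled:,.0f} e-")   # ~54,500

    # Shot-noise advantage of the larger D3 photon capacity, in stops:
    print(f"~{0.5 * math.log2(fw_d3 / fw_1dmk3):.2f} stop")           # ~0.22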
 

Emil Martinec

New member
OK, I am finding roughly 13 electrons/12-bit ADU at ISO 200 for the D3 for the sample that I have been analyzing (using the ISO 1600 image in the set I referenced in the OP). That translates into a full-well capacity of about 53,400, the same as John Sheehy found. Impressive indeed.
 

Jane Auburn

New member
Well, it's a given that a program can take two exposures and get more DR than one exposure has.

It's nice to get as much as possible in a single shot, however, as many subjects don't wait for you to take another shot while they freeze for you.

I think the deal is that you want your photos to capture at least what your eyes can, and HDR comes closest to that currently.
 