
High F-Stops on DSLRs

Asher Kelman

OPF Owner/Editor-in-Chief
Excellent Asher! You've raised a very salient point -- that we don't have the brush-strokes of the masters to appreciate as we move in, and we do need something, so what we have at our disposal is more detail.

That said, do we thus 'move in' to critical (8"?) viewing distance for *every* image regardless of size? If so, then maximum print size becomes directly dependent on pixel count. Or are we more inclined to move in on larger prints in some non-linear fashion, and if so, at what size do we start keeping our distance, so to speak? In that case we have a non-linear need for increased sensor resolution as print size increases...
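(As an aside, Jack's trade-off can be roughed out numerically. The sketch below assumes ~30 cycles/degree for visual acuity and uses an arbitrary 16x24" print at two viewing distances; all of those numbers are illustrative assumptions, not anything stated in the thread.)

import math

def required_megapixels(print_w_in, print_h_in, viewing_distance_in, cycles_per_degree=30):
    """Pixels needed so that detail at the eye's assumed resolution limit
    (~30 cycles/degree) survives at a given viewing distance; Nyquist
    requires 2 pixels per cycle."""
    cycles_per_inch = cycles_per_degree / (viewing_distance_in * math.tan(math.radians(1)))
    ppi = 2 * cycles_per_inch
    return (print_w_in * ppi) * (print_h_in * ppi) / 1e6

# Close (8") inspection of a 16x24" print vs relaxed viewing at ~43":
print(f'8"  viewing: {required_megapixels(24, 16, 8):.0f} MP')
print(f'43" viewing: {required_megapixels(24, 16, 43):.0f} MP')

The close-inspection case demands roughly thirty times the pixel count of the relaxed-viewing case, which is exactly the non-linear dependence Jack is asking about.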
Hi Jack,

I think we are now dealing with something that might separate a lot of art from just pictures. I believe that art, at its best, lives independently of the artist. The massive photograph, however, is quickly diminished, seen as less than what it must be, if one breaks through the aura of the picture's "being". So when one is faced with the same grain or digital dots one sees on any other print, the uniqueness is lost and the spell broken.

Living things maintain their character on close presence. So detail in a print is our only defense.

Asher
 

Jack_Flesher

New member
Living things maintain their character on close presence. So detail in a print is our only defense.

Asher

<GRIN> Indeed it is!

So from this, I gather there are two schools of thought, though perhaps more importantly, we can choose the one we're from depending on the intended use of our print:

1) Print it as large as you want since viewing distance cancels out resolution concerns as print sizes increase;

2) Print it only as large as the fine detail holds up to close inspection.

Cheers,
 

John Sheehy

New member
FWIW, and we're getting OT here, but I want to mention that while I understand the theory behind this DR comment, in practice I am not seeing that to be the case.

That's because it isn't the case; not even in theory, if you use a theory that takes all known factors into account. There is no direct relationship between pixel size and DR; not even at the pixel level. For an individual pixel, DR is determined by the total amount of noise added during readout, compared to the maximum signal. For the entire image, the effective noise floor is divided by a factor proportional to the linear pixel resolution of the image, so the image-level DR rises by that same factor, assuming all images are viewed at the same final size and distance.
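A quick numeric sketch of that scaling in Python (the full-well, read-noise and megapixel figures below are made up for illustration, not measurements of any real sensor):

import math

def pixel_dr_stops(full_well_e, read_noise_e):
    """Engineering DR of a single pixel: max signal over read noise, in stops."""
    return math.log2(full_well_e / read_noise_e)

def image_dr_stops(full_well_e, read_noise_e, megapixels, ref_megapixels=8.0):
    """DR referred to a common output size: averaging N pixels down to the
    reference size reduces uncorrelated read noise by sqrt(N), i.e. by the
    ratio of linear resolutions."""
    linear_ratio = math.sqrt(megapixels / ref_megapixels)
    effective_noise = read_noise_e / linear_ratio
    return math.log2(full_well_e / effective_noise)

# Hypothetical big-pixel and small-pixel sensors of the same format:
for name, well, noise, mp in (("big-pixel", 60000, 15, 12), ("small-pixel", 6000, 3, 60)):
    print(f"{name}: {pixel_dr_stops(well, noise):.1f} stops/pixel, "
          f"{image_dr_stops(well, noise, mp):.1f} stops at the image level")

With these assumed numbers the small-pixel sensor loses about a stop per pixel, yet matches or exceeds the big-pixel sensor once both are viewed at the same final size.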

If you take an exposure with the cleanest P&S sensors available today, like that in the Panasonic LZ3, and use a medium format digital with a lens of the same real focal length, give the same real exposure of the same subject from the same distance, and take a crop from the MF the same size as the LZ3 sensor, the LZ3 will walk all over the crop of the MF in resolution, noise, and DR, when viewed at the same physical size and with the same RAW conversion. Having an MF sensor with that many pixels is still considered awkward, in terms of storage and speed; that's 9x as many pixels as your 6 µm MF sensor.

For sure, smaller sensels do place higher demands on lenses and capture technique.

I wouldn't look at it that way; I'd say they allow more potential detail, and it simply becomes more challenging to reach that potential. All other things being equal, higher pixel density does not ruin an image, optically. Yes, it might be clearer that there is motion blur (or diffraction), but only because there is less pixel blur: the total blur is smaller, so subject detail is easier to see despite the obvious motion blur (or diffraction). The motion blur (or diffraction) is always there, and the same, in the analog focal plane regardless of pixel density; bigger pixels simply blur everything more, other blurs *and* subject detail alike.
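A tiny sketch of that last point. Combining blur widths in quadrature is only a common approximation, and the 6 µm focal-plane blur is an arbitrary illustrative value:

import math

def total_blur_um(optical_blur_um, pitch_um):
    """The blur already in the analog focal plane (motion, diffraction, defocus)
    is fixed; the pixel aperture adds its own blur on top. Widths are combined
    in quadrature as a rough approximation."""
    return math.sqrt(optical_blur_um ** 2 + pitch_um ** 2)

# The same 6 um of motion/diffraction blur sampled by small vs large pixels:
for pitch in (2.0, 6.4):
    print(f"{pitch} um pixels: total blur ~{total_blur_um(6.0, pitch):.1f} um")

The focal-plane blur is identical in both cases; only the larger pixel adds noticeably to it.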
 

Asher Kelman

OPF Owner/Editor-in-Chief
In simple terms, tiny pixels better define detail, including diffraction which was always there but not visible because huge pixels cover too much detail.
 

Ben Rubinstein

pro member
<GRIN> Indeed it is!

So from this, I gather there are two schools of thought, though perhaps more importantly, we can choose the one we're from depending on the intended use of our print:

1) Print it as large as you want since viewing distance cancels out resolution concerns as print sizes increase;

2) Print it only as large as the fine detail holds up to close inspection.

Cheers,


I would add to point 2: 'Print it only as large as the fine detail holds up to close inspection RELATIVE to viewer-based expectations of a given photo.'

In other words, point #2 may give very different print sizes even with the same camera and lens, depending on subject matter. An iconic PJ image may not need to follow point #2 at all, while still showing that point #1 is not true as a general rule. A portrait might show plenty of fine detail even under close inspection, in a way that a landscape or architectural photo will not.
 
That's because it isn't the case; not even in theory, if you use a theory that takes all known factors into account. There is no direct relationship between pixel size and DR; not even at the pixel level.

I disagree. When we define DR as the ratio of maximum signal to noise floor (AKA engineering definition), then with a given noise floor the capacity of an individual sensel to store maximum charge is also meaningful. When we assume that sensels (currently) store some 1500 electrons per square micron, it becomes obvious that size matters to the maximum charge that can be stored, per pixel.
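Bart's arithmetic can be sketched like this (the ~1500 e-/µm² figure is his; holding read noise fixed at 4 e- is a simplifying assumption made here for illustration, and it is exactly the assumption John questions further down):

import math

def engineering_dr_stops(pitch_um, read_noise_e, well_per_um2=1500):
    """Engineering DR of one sensel: full-well capacity over the read-noise
    floor, assuming full well scales with area at ~1500 e- per square micron."""
    full_well = well_per_um2 * pitch_um ** 2
    return math.log2(full_well / read_noise_e)

# Same assumed read noise, different sensel pitches:
for pitch in (2.0, 6.4):
    print(f"{pitch} um pitch: {engineering_dr_stops(pitch, read_noise_e=4):.1f} stops")

With the noise floor held constant, per-sensel DR grows directly with sensel area, which is the point being made here.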

Bart
 

Doug Kerr

Well-known member
Hi, Bart,

I disagree. When we define DR as the ratio of maximum signal to noise floor (AKA engineering definition), then with a given noise floor the capacity of an individual sensel to store maximum charge is also meaningful. When we assume that sensels (currently) store some 1500 electrons per square micron, it becomes obvious that size matters to the maximum charge that can be stored, per pixel.
I note that the ISO definition of the dynamic range of a digital still camera is essentially based on the concept you mention.

To give a little more detail:

Rather than speaking of a "noise floor" (a slightly vague, though conceptually useful, quantity), it uses as the "denominator" of the dynamic range ratio the luminance of the image at which the delivered signal-to-noise ratio would be 1:1.

Then, because in reality at that luminance level the observed noise would be "clipped" (since the instantaneous observed value of the composite "signal plus noise" cannot be negative), the definition declares that the "denominator" luminance is taken to be numerically equal to the RMS noise observed on the signal at a somewhat larger luminance (defined as a certain fraction of the "numerator" luminance).
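Once that denominator has been pinned down, the ratio itself is simple; a minimal sketch with purely illustrative numbers:

import math

def iso_style_dr(saturation_signal, rms_noise_at_reference):
    # The "denominator" luminance is taken as numerically equal to the RMS
    # noise measured at a somewhat larger (unclipped) luminance, rather than
    # measured right at SNR = 1:1, per the description above.
    ratio = saturation_signal / rms_noise_at_reference
    return math.log2(ratio), 20 * math.log10(ratio)

# Illustrative only: saturation at 40000 (arbitrary units), reference RMS noise of 5.
stops, db = iso_style_dr(40000, 5)
print(f"~{stops:.1f} stops, ~{db:.0f} dB")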

Those interested in the details can find an extensive discussion in my article, "The ISO Definition of the Dynamic Range of a Digital Still Camera", available here:

http://doug.kerr.home.att.net/pumpkin/#ISO-DR

Best regards,

Doug
 
Hi Doug,

Rather than speaking of a "noise floor" (a slightly vague, though conceptually useful, quantity), it uses as the "denominator" of the dynamic range ratio the luminance of the image at which the delivered signal-to-noise ratio would be 1:1.

Thanks. I'm aware of the ISO definition, but I didn't want to scare folks. I'll leave that to you ;-).

In fact, the noise floor in the engineering definition isn't a solid floor; it's more like an average ground fog which obscures low-luminance detail. The definition I usually refer to is e.g. the one used by companies like Apogee Instruments (Dynamic Range). That explanation is quite clear, and it provides calculation examples and a useful reference to meaningful quantization levels.

The well depth they mention is determined in part by the surface area of a sensel. When the well depth is reduced, we need to reduce exposure to avoid clipping. Reducing exposure (*) will also push the lower luminances into the read-out noise levels, which constitute the other factor in dynamic range.

(*) What John is hinting at is that by reducing the surface area of a detector in a given photon flux, the number of photons captured per sensel is automatically reduced proportionally. However, so is the low-luminance photon flux, which disappears into the noise floor. One only needs to look at the shadow performance of a tiny-sensel-pitch camera like the Canon G10 (despite technological improvements). The downsampled image looks nice enough, but at the pixel level it's lacking a bit.
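A small sketch of that footnote. The quantum efficiency, photon flux and read-noise values are all invented for illustration, and the pitches are only roughly G10-like and DSLR-like:

import math

def shadow_snr(photons_per_um2, pitch_um, read_noise_e, qe=0.4):
    """Per-sensel SNR of a deep-shadow patch: signal scales with sensel area,
    shot noise is sqrt(signal), and read noise adds in quadrature."""
    signal = qe * photons_per_um2 * pitch_um ** 2
    noise = math.sqrt(signal + read_noise_e ** 2)
    return signal / noise

# The same faint photon flux on a small and a large sensel (assumed 4 e- read noise):
for pitch in (1.7, 6.4):
    print(f"{pitch} um: shadow SNR = {shadow_snr(25, pitch, 4):.1f}")

The smaller sensel collects proportionally fewer photons from the same flux, so its shadow signal sits much closer to the noise floor at the pixel level.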

Those interested in the details can find an extensive discussion in my article, "The ISO Definition of the Dynamic Range of a Digital Still Camera", available here:

http://doug.kerr.home.att.net/pumpkin/#ISO-DR

Yes, that's a good explanation of the ISO standard. Thanks.

Bart
 

Asher Kelman

OPF Owner/Editor-in-Chief
Bart and John,

For photography, how do both concepts of the floor for dynamic range get translated into selling cameras, or into DXO declaring dynamic range sweepstakes winners?

For film, we could work this out in the darkroom!

Asher
 
For photography, how do both concepts of the floor for dynamic range get translated into selling cameras, or into DXO declaring dynamic range sweepstakes winners?

Hi Asher,

The difficulty with a direct comparison between camera models is that there are usually several things that change at the same time. One may use better-quality components (which usually reduces noise and increases DR), and at the same time use smaller sensels with larger microlenses and a different ASIC for data processing, possibly at a finer quantization level (10->12->14->16 bits). Too many things change at the same time to attribute visible differences to a single factor. On the other hand, the laws of physics are relatively constant, so when we factor out variables the real contributors to quality can emerge.

One could e.g. compare the DR of 2 modern camera sensors from the same manufacturer as quantified by the DXO Mark site. Try the dynamic range tab of a comparison between a 5D Mark II and a Powershot G10.

In everyday use, it shows as the possibility to retain highlight detail and shadow detail at the same time.

Bart
 

John Sheehy

New member
I disagree. When we define DR as the ratio of maximum signal to noise floor (AKA engineering definition), then with a given noise floor the capacity of an individual sensel to store maximum charge is also meaningful. When we assume that sensels (currently) store some 1500 electrons per square micron, it becomes obvious that size matters to the maximum charge that can be stored, per pixel.

The larger the maximum charge, the greater the added noise in electrons for you. There's no holding noise still as you raise the capacity!

The fact is, the best ~2 micron pixels have only about 2x as much read noise relative to maximum signal at base ISO as the best big-pixel DSLRs and MF sensors, but at far less than 1/2 the pixel pitch, so they have more DR per equal sensor area.
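John's "per equal sensor area" comparison can be sketched like so. The full-well and read-noise values are invented, but chosen to match his premise of roughly 2x the read noise relative to maximum signal at far less than half the pitch:

import math

def dr_per_area_stops(pitch_um, full_well_e, read_noise_e, patch_um=100.0):
    """Sum the full wells of all sensels in a fixed patch of sensor; the read
    noise of N sensels adds in quadrature, so it grows only as sqrt(N)."""
    n = (patch_um / pitch_um) ** 2          # sensels in the patch
    max_signal = n * full_well_e
    noise = math.sqrt(n) * read_noise_e
    return math.log2(max_signal / noise)

# Illustrative numbers: a 2 um pixel with half the per-pixel DR ratio of a
# 6.4 um DSLR-like pixel.
print(f"2.0 um: {dr_per_area_stops(2.0, 6000, 3):.1f} stops per patch")
print(f"6.4 um: {dr_per_area_stops(6.4, 60000, 15):.1f} stops per patch")

With these assumptions the small pixels come out roughly two-thirds of a stop ahead over the same area, despite losing a full stop per pixel.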
 

Asher Kelman

OPF Owner/Editor-in-Chief
One could e.g. compare the DR of 2 modern camera sensors from the same manufacturer as quantified by the DXO Mark site. Try the dynamic range tab of a comparison between a 5D Mark II and a Powershot G10.

In everyday use, it shows as the possibility to retain highlight detail and shadow detail at the same time.

Bart
The DXO charts are indeed fascinating and impressive. Yes, that's immediately obvious, but how does DXO define the base for dynamic range? Is it your idea, or John's, or something different?

Asher
 
The DXO charts are indeed fascinating and impressive. Yes, that's immediately obvious, but how does DXO define the base for dynamic range? Is it your idea, or John's, or something different?

Hi Asher,

Well they suggest that what they measure is: "Dynamic range or light sensitivity range of a sensor indicates the ratio of light exposure between the highest brightness a camera can capture (saturation) and the lowest brightness it can effectively capture (typically when noise becomes more important than signal, i.e., SNR < 0 dB). This range indicates the maximum contrast that can be effectively captured by the sensor. The larger the dynamic range, the better, as the sensor is able to capture higher-contrast scenes".

They mention the "highest brightness", which presumably is the saturation point, and the "lowest brightness" and "0 dB". That would suggest the engineering definition as described by my Apogee link. Whatever their exact criteria, they were undoubtedly applied consistently, and as such indicate a lower DR for a certain small-sensel-pitch camera based on their measurements.

Bart
 

Doug Kerr

Well-known member
Hi, Bart,
Hi Asher,

Well they suggest that what they measure is: "Dynamic range or light sensitivity range of a sensor indicates the ratio of light exposure between the highest brightness a camera can capture (saturation) and the lowest brightness it can effectively capture (typically when noise becomes more important than signal, i.e., SNR < 0 dB). This range indicates the maximum contrast that can be effectively captured by the sensor. The larger the dynamic range, the better, as the sensor is able to capture higher-contrast scenes".

That again is conceptually consistent with the ISO definition, give-or-take the niggle that at an actual SNR of 0 dB the apparent noise is understated by zero clipping.

Best regards,

Doug
 

John Sheehy

New member
Hi, Bart,


That again is conceptually consistent with the ISO definition, give-or-take the niggle that at an actual SNR of 0 dB the apparent noise is understated by zero clipping.

If there is clipping. Canons, for example, have no black clipping in the RAW data.

All these definitions are still antiquated, though. Monolithic measurements of DR assume 1:1 pixel view at large pixel magnifications. A truly useful measurement of DR would graph DR at various image frequencies. A sensor lacking low-frequency or one-dimensional noises and gradients can have detail at lower frequencies well below the so-called "noise floor". The noise floor only necessarily introduces a minimum of local, pixel-level obfuscation, and need not have much power at low frequencies.
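A sketch of what such a graph could look like, assuming a sensor whose read noise is pure white noise with no banding or gradients (the case John describes); all of the numbers are illustrative:

import numpy as np

def dr_vs_frequency(read_noise_e, full_well_e, bin_factors=(1, 2, 4, 8, 16)):
    """Simulate a flat read-noise field and measure how much noise is left
    after binning to coarser and coarser scales, i.e. at lower frequencies."""
    rng = np.random.default_rng(0)
    noise_field = rng.normal(0.0, read_noise_e, size=(1024, 1024))
    for b in bin_factors:
        # Average b x b blocks: keeps only detail below that spatial frequency.
        binned = noise_field.reshape(1024 // b, b, 1024 // b, b).mean(axis=(1, 3))
        dr = np.log2(full_well_e / binned.std())
        print(f"bin {b:2d}x{b:<2d}: ~{dr:.1f} stops")

dr_vs_frequency(read_noise_e=4.0, full_well_e=60000.0)

With white read noise, every halving of the sampling frequency buys back about a stop of DR; a sensor with strong banding or gradients would flatten out instead, which is why a single pixel-level number hides so much.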

A sensor like the 5Dmk2's, with banding noise and amplifier noise (or any other gradients) subtracted, can be used at ISOs into the tens of millions, if you want a tiny image (IOW, you are only interested in low frequency detail).

As we go from 320×240 digital captures 15 years ago to nearly 100 MP in today's MF sensors, speaking of a monolithic DR at the pixel level to represent the DR of the image gets a bit ridiculous.

I put no stock in any website's DR evaluation. There are so many variables, and so many blind spots in methodology, that there is nothing remotely accurate or meaningful about them.
 

Doug Kerr

Well-known member
Hi, John,

If there is clipping. Canons, for example, have no black clipping in the RAW data.
Indeed. But the ISO definition is nevertheless predicated on averting the effect of such.

In particular, we note that the ISO definition is predicated on observing the response of the system in the developed image, not on the raw data.

Of course, that certainly weakens the applicability of that definition to the realities of modern workflow. But that is what it says.

Your observations on image frequency are very useful. They really remind us of the fundamental truisms of information theory, from which there is no escape! Shannon and Nyquist, like Puccini, really knew what they were doing!

Best regards,

Doug
 

Cem_Usakligil

Well-known member
off-topic, laid-back reaction

...Your observations on image frequency are very useful. They really remind us of the fundamental truisms of information theory, from which there is no escape! Shannon and Nyquist, like Puccini, really knew what they were doing!
....
What's the frequency, Kenneth, err, Harry?

Doug, this is so good, my sides are still hurting <grin>

Cheers,
 