
View Full Version : Why is reported resolution anisotropic?


Doug Kerr
January 14th, 2008, 07:08 PM
Why is it that the "absolute" resolution reported for various EOS dSLRs (and many competitive cameras as well) in Digital Photography Review is higher (in terms of lines per picture height) in the horizontal direction than in the vertical direction?

Doug Kerr
January 15th, 2008, 12:03 PM
A colleague on DPR points out that, in the column of the DPR resolution reports labeled "absolute resolution", the values, although apparently in terms of LPH (lines per picture height), may in fact be in terms of lines per picture dimension (width for the horizontal resolution, height for the vertical).

(The DPR glossary defines "absolute resolution" as being resolution in line count, or pixels, or sensels, or whatever, across the relevant dimension of the image, which makes sense.)

Thus we have two possible curious scenarios. The examples are from the DPR review of the EOS 40D.

Scenario 1 - the resolutions are actually in LPH:

H: 2100 (means 2100 LPH)
V: 1800 (means 1800 LPH)

Why different?


Scenario 2 - the resolutions are actually in lines per picture dimension:

H: 2100 (means 1400 LPH)
V: 1800 (means 1800 LPH)

Why different?
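The Scenario 2 conversion follows directly from the 40D's 3:2 image dimensions (3888 x 2592 px): a figure quoted per picture width is scaled by height/width to express it per picture height. A minimal sketch of that arithmetic:

```python
# Convert a horizontal resolution quoted in lines per picture *width*
# to lines per picture *height*, using the EOS 40D image dimensions.
width_px, height_px = 3888, 2592  # EOS 40D full-resolution image

def lpw_to_lph(lines_per_width: float) -> float:
    """Scale a per-picture-width line count to per-picture-height."""
    return lines_per_width * height_px / width_px

print(lpw_to_lph(2100))  # 1400.0, the Scenario 2 reading of "H: 2100"
```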

Best regards,

Doug

Ray West
January 15th, 2008, 02:40 PM
Hi Doug,

I've no idea; probably it's due to ill-defined, or misunderstood, definitions. I thought the sensor sites were not arranged in lines, and generally there are more green ones, so what do they mean by lines? A diagram may help. And is each site circular, oval, rectangular, square, or what?

Best wishes,

Ray (in awkward mode)

Doug Kerr
January 15th, 2008, 02:53 PM
Hi, Ray,

Hi Doug,

I've no idea; probably it's due to ill-defined, or misunderstood, definitions. I thought the sensor sites were not arranged in lines, and generally there are more green ones, so what do they mean by lines? A diagram may help. And is each site circular, oval, rectangular, square, or what?

"Lines of resolution" of course has nothing to do with the layout of the sensor (although that of course will affect the resolution that is attained).

"Lines" are regions of alternate white and black color on a test chart that are "successfully" resolved in the delivered image.

We measure that with no knowledge of (nor concern with) the layout of the sensor.

But of course the issue is how does the working of the sensor (and/or something else in the imaging process) cause a resolution that is different in the two directions.

Best regards,

Doug

Bart_van_der_Wolf
January 15th, 2008, 03:45 PM
I've no idea; probably it's due to ill-defined, or misunderstood, definitions.

Yes, I've always been likewise puzzled by the sometimes significant differences between Hor/Ver resolution. However, the formal metric of "line widths per picture height" is supposed to be applied to (horizontal lines in) the vertical dimension of landscape-oriented images only. This is to take out the variable of aspect ratio (which then only changes the width and not the resulting number of lines), and to avoid confusion.

I thought the sensor sites were not arranged in lines, and generally there are more green ones, so what do they mean by lines.

The lines mentioned are each of the individual white and black lines in the test target, and the metric is supposed to report the number of target line widths per picture height that can be 'resolved'.

These hyperbolic 9-line elements in the ISO chart that's used are, by the way, not intended to be used for quantification; they are supposed to be just visual comparison cues! However, the popularity of the DPR reviews made the LWPH metric useful for comparison. Bi-tonal line patterns are unsuitable for proper resolution measurements because of their sharp edges, which cannot be reliably rendered by discrete sampling systems such as sensor arrays (aliasing will distort the precision). That's why I've been an advocate of sinusoidal line modulations (e.g. in a Siemens star arrangement) for some time, and they will be adopted by the ISO in a new standard.
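The trouble with bi-tonal patterns can be illustrated numerically: a square wave at fundamental frequency f carries odd harmonics (3f, 5f, ...) that alias once they exceed the Nyquist frequency, while a sinusoid at f has no content above f at all. A minimal sketch (the frequencies are illustrative, not taken from any chart):

```python
NYQUIST = 0.5  # cycles per sample (sensel pitch)

def square_wave_harmonics(f: float, n: int = 5) -> list[float]:
    """First n odd-harmonic frequencies of a bi-tonal (square-wave) pattern."""
    return [f * (2 * k + 1) for k in range(n)]

f = 0.2  # fundamental well below Nyquist
harmonics = square_wave_harmonics(f)           # 0.2, 0.6, 1.0, 1.4, 1.8 ...
aliased = [h for h in harmonics if h > NYQUIST]

# Even with the fundamental safely below Nyquist, most of the square
# wave's harmonic content lands above it and will alias; a sinusoid at
# the same fundamental would contribute nothing above 0.2.
print(len(aliased))  # 4 of the first 5 harmonics alias
```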

The actual elements that are supposed to be used for resolution measurements are the slanted edges that can be found in the same chart. Unfortunately they were too cumbersome for Phil Askey to interpret at the time, due to lack of software support. 'Imatest' picked up on the concept and does a commendable job of taking the tedium out of evaluations.

Bart

Bart_van_der_Wolf
January 15th, 2008, 03:56 PM
But of course the issue is how does the working of the sensor (and/or something else in the imaging process) cause a resolution that is different in the two directions.

I agree, and it's puzzling to say the least. There is a small difference between horizontal/vertical resolution that can be found in the EXIF data (which would indicate a slightly asymmetric aspect ratio, e.g. for imaged circles), but it doesn't fully explain the numbers as reported in the DPR evaluations, unless they are violating the 'per picture height' criterion (as applied to landscape orientation).

Bart

Doug Kerr
January 15th, 2008, 04:23 PM
Hi, Bart,

I agree, and it's puzzling to say the least. There is a small difference between horizontal/vertical resolution that can be found in the EXIF data (which would indicate a slightly asymmetric aspect ratio, e.g. for imaged circles), but it doesn't fully explain the numbers as reported in the DPR evaluations, unless they are violating the 'per picture height' criterion (as applied to landscape orientation).

If we conjecture that they are violating the "lines per picture height" basis, then what basis might they use? If we conjecture that, perhaps, they mean "lines per picture dimension", then the values are even more inconsistent.

I have just looked at the 40D resolution chart image (the downloaded full-resolution version, 3888 x 2592 px).

I understand your point about the "trumpet" patterns not being intended for formal quantification. Nevertheless, I took equal-sized crops of the vertically-oriented pattern and the horizontally-oriented pattern, rotated one until they were in the same orientation on-screen, kept them at the same scale, and placed them side-by-side. I can see only a minuscule difference between the two.

I dodged the question of "at what point do I read the number"? I just looked at several of the scale markings and determined if the appearance of the rendition of the pattern was the same for the two patterns at that point.

What I see certainly doesn't comport with the 21:18 ratio between the quantified "absolute resolutions" reported in the DPR table.

Interestingly enough, the table reports equal vertical and horizontal values for the "extinction" resolution (which is judged to be about 88% of the "geometric resolution" as would be implied by the sensel spacing or the pixel spacing on the delivered image).
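As a quick sanity check on those numbers: at the geometric (Nyquist) limit, one line width per pixel row, the LPH figure equals the image height in pixels, and the 88% factor mentioned above then gives the extinction value. A back-of-the-envelope sketch (the 0.88 factor is taken from this discussion, not independently measured):

```python
# "Geometric resolution" in line widths per picture height equals the
# image height in pixels (one line width per pixel row at Nyquist).
height_px = 2592          # EOS 40D image height
extinction_factor = 0.88  # approximate ratio cited above

geometric_lph = height_px
extinction_lph = geometric_lph * extinction_factor
print(round(extinction_lph))  # 2281 LPH
```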

Curious indeed.

Thanks for your input. (By the way, the discussion in your other recent message was very apt.)

Best regards,

Doug

Bart_van_der_Wolf
January 15th, 2008, 05:21 PM
I understand your point about the "trumpet" patterns not being intended for formal quantification. Nevertheless, I took equal-sized crops of the vertically-oriented pattern and the horizontally-oriented pattern, rotated one until they were in the same orientation on-screen, kept them at the same scale, and placed them side-by-side. I can see only a minuscule difference between the two.

Exactly, visual comparison (for which these 9-line structures are originally intended) seems consistent with a more balanced Hor/Ver quantification than given.

I dodged the question of "at what point do I read the number"? I just looked at several of the scale markings and determined if the appearance of the rendition of the pattern was the same for the two patterns at that point.

I wouldn't have called it dodging, but rather an inability to pinpoint unambiguously (which is caused by the 'unsuitable for quantification' bi-tonal patterns).

What I see certainly doesn't comport with the 21:18 ratio between the quantified "absolute resolutions" reported in the DPR table.

The issue with the "absolute resolutions" is also influenced by the human visual system (HVS). At average levels of brightness, our HVS works in a more logarithmic (base 10) or sometimes a power-function type of fashion. A 10% difference is often at the threshold of visual significance.

Interestingly enough, the table reports equal vertical and horizontal values for the "extinction" resolution (which is judged to be about 88% of the "geometric resolution" as would be implied by the sensel spacing or the pixel spacing on the delivered image).

Which for me always suggested an adherence to the intended 'Lines Widths per Picture Height' (in landscape orientation), but thanks for raising the doubt/question.

Bart

Ray West
January 15th, 2008, 07:44 PM
Hi Doug and Bart,

But of course the issue is how does the working of the sensor (and/or something else in the imaging process) cause a resolution that is different in the two directions.

If that is what we are talking about, then I will have to do some drawings ;-(

It seems pretty obvious to me that there could be a difference 'twixt the resolutions.

Now, does the raw file report the value of each individual sensor voltage (scaled), or do they combine them in some way? Then, when you show it on screen, what anti-aliasing is taking place? I'm guessing Photoshop tends to 'make it look pretty'.

Best wishes,

Ray

Doug Kerr
January 16th, 2008, 11:18 AM
Steen Bay, on DPR, has suggested a possible basis for the anomaly. He asked whether, if the actual "effective acceptance dimensions" of the sensels were not the same in the two directions (even though the pitch was for all practical purposes identical), this could cause a difference in perceived resolution along the two axes.

I hark to my experience in digital telephony. There, when we think of sampling an audio waveform at a certain interval and then reconstructing it at the distant end, the "ideal" model has us taking an "instantaneous" sample - noting the voltage of the waveform only at the sample instants.

If we do that, then straightforward reconstruction restores the original waveform precisely (with no disturbance in its frequency content). (This assumes that the original waveform had all its frequency components below the Nyquist limit.)

In actual practice (especially in early practice), we measure the average voltage of the waveform over a modest finite interval centered on the desired sampling instant(s). One reason is that, were the time width of the "sampling gate" infinitesimal, the energy extracted from the waveform would be infinitesimal as well, and could not be read (or digitized)!

When we use a finite-width sampling window, the reconstituted waveform is not identical to the original waveform, but rather is that waveform as if passed through a filter with a certain frequency response.

In fact, we then compensate for that by passing the reconstructed waveform through a filter with the complementary frequency response, thus restoring the frequency content of the original waveform - restoring the waveform.

In our situation, the "width" of the effective receptive area of the sensel is closely parallel to the width of the sampling window I mention above. Changes in that, then, will affect the frequency content of the image - the spatial frequency response of the system. If we are thinking in terms of the MTF of the system, (and I mean where the MTF is plotted as a function of spatial frequency), that curve would differ depending on the width of the sampling window.

Our visual perception of the resolution limit comes down essentially to our deciding at what spatial frequency the MTF of the image falls below some non-quantified threshold - essentially, where the contrast is so reduced that we say "we can't see the individual bars".

Thus, if the MTF curve differed for the two directions as a result of the different "sample window widths", we would judge the resolution to be different.
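This sampling-window argument can be sketched quantitatively: a uniform (box) aperture of width w multiplies the system MTF by |sinc(w·f)|, so a wider effective receptive area lowers the contrast at any given spatial frequency. A toy illustration (the aperture widths here are hypothetical, not measured sensel geometry):

```python
import math

def aperture_mtf(freq: float, aperture_frac: float) -> float:
    """MTF of a uniform (box) sampling aperture: |sin(pi*w*f) / (pi*w*f)|.

    freq is in cycles per sensel pitch; aperture_frac is the effective
    aperture width as a fraction of the pitch.
    """
    x = math.pi * aperture_frac * freq
    return 1.0 if x == 0 else abs(math.sin(x) / x)

# Contrast at the Nyquist frequency (0.5 cycles/pitch) for two
# hypothetical effective aperture widths: the wider aperture
# depresses contrast, so its limiting resolution is judged lower.
for w in (0.7, 1.0):
    print(w, round(aperture_mtf(0.5, w), 3))
# w = 1.0 gives 2/pi, about 0.637; w = 0.7 retains more contrast
```

If the effective aperture were, say, 0.7 of the pitch in one direction and 1.0 in the other, the two directions would show different contrast at the same spatial frequency, which is exactly the mechanism conjectured above.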

Now, do we know how the "effective receptive dimensions" of the sensel (taking into account the working of the microlenses and such) compares in the two directions?

Just a thought.

Thanks to Steen Bay (and others) for introducing this possible consideration.

Best regards,

Doug

Bart_van_der_Wolf
January 16th, 2008, 06:38 PM
In our situation, the "width" of the effective receptive area of the sensel is closely parallel to the width of the sampling window I mention above. Changes in that, then, will affect the frequency content of the image - the spatial frequency response of the system. If we are thinking in terms of the MTF of the system, (and I mean where the MTF is plotted as a function of spatial frequency), that curve would differ depending on the width of the sampling window.

Our visual perception of the resolution limit comes down essentially to our deciding at what spatial frequency the MTF of the image falls below some non-quantified threshold - essentially, where the contrast is so reduced that we say "we can't see the individual bars".

While the MTF changes, it has mostly to do with the amplitude, the modulation. It makes quite a difference whether one takes a point sample or an area sample, but mostly on the modulation of frequencies beyond Nyquist. The fundamental resolution limit, at Nyquist, will still be determined by the sampling density. Of course an asymmetrical area/window will also affect the visually significant 10% limiting-resolution modulation, but I don't think the effect will be significant enough to explain the total resolution difference, given among other factors the microlenses and the AA-filter.

In fact I suspect the AA-filter contributes to the observed differences. After all, the two crossed layers of birefringent material sit at different distances from the sensels because of the layered construction. Part of that can be compensated for by different thicknesses, but there ultimately will be an accumulation of all factors involved.

Bart

Doug Kerr
January 16th, 2008, 07:07 PM
In fact I suspect the AA-filter contributes to the observed differences. After all, the two crossed layers of birefringent material sit at different distances from the sensels because of the layered construction. Part of that can be compensated for by different thicknesses, but there ultimately will be an accumulation of all factors involved.

Indeed, I could imagine that as a cause.

Best regards,

Doug