News: Sinar Added to Leica's Portfolio!

Doug Kerr

Well-known member
Hi, Bart,

As I recall, the sampling aperture MTF (for a "uniform sensitivity" aperture), for an aperture width equal to the sampling pitch, falls to 0.5 at the Nyquist frequency.

For an aperture width twice the sampling pitch, it falls to zero at the Nyquist frequency.

I think.

Best regards,

Doug
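Those two values are easy to check. For a uniform ("box") sampling aperture of width a, the aperture MTF is |sinc(af)|; a minimal sketch in Python, assuming a box aperture and normalized units:

import numpy as np

# MTF of a uniform ("box") sampling aperture of width a is |sinc(a*f)|,
# using numpy's normalized sinc: sinc(x) = sin(pi*x)/(pi*x).
p = 1.0                 # sampling pitch, arbitrary units
f_nyq = 1 / (2 * p)     # Nyquist frequency for that pitch

for a in (p, 2 * p):    # aperture equal to, then twice, the pitch
    print(f"aperture = {a / p:.0f}x pitch: MTF at Nyquist = {abs(np.sinc(a * f_nyq)):.3f}")
# aperture = 1x pitch: MTF at Nyquist = 0.637
# aperture = 2x pitch: MTF at Nyquist = 0.000

So the second recollection is exact, while the first figure works out to 2/π ≈ 0.64 rather than 0.5; the shape of the argument is unchanged.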
 
There are some examples on the Hasselblad web site if people are interested. Here for product photography (a pair of shoes and a chronometer) and here for "architecture" photography (the inside of a library).

You'll need to download the following files for a meaningful comparison:
- the H4D-50, as a single-shot reference.
- the H4D-50MS, the same scene, but with color multishot.
- the H4D-200MS, which improves the resolution by 4x with multishot (and also uses the improved color of the file above).

The samples are TIFF files. They need to be, since JPEG subsamples the color information and thus removes much of the advantage of multishot systems. Obviously, the file sizes are huge.

Still, Jerome, although this helps, the 200MS uses a new technique that isn't as effective as the old 22MP Kodak-sensor backs (Imacon 528c, Hasselblad 22MS, Sinar 54H). It does MS first, like the 50MS does, and then it in effect "applies" extra resolution… it doesn't shoot 200MP in "true colour". The result is great but (believe me) a little inferior to the old backs… I think the Sinar eXact does "real" 192MP MS, but I've only been told so; I haven't been able to check whether that is correct… My comment on the 200MS also comes from private discussion with another photographer who does art reproduction in Italy and who also has an Imacon 528c, which he finds visibly better… (not that the 200MS is not superb), so it's not "own experience" either.
 
You would be correct if it were a "one-shot" or "four-shot" capture, Doug… In this case it is 4x "four-shot" captures on four image areas that are then combined into a quadrupled one...
 

Doug Kerr

Well-known member
Hi, Theodoros.

Doug, the pixel size IS STILL 9 µm… not 4.5, it's 9! (I use the actual size of the pixel that I use - I understand you used 10 and 5 as an example)

Nothing in my presentation deals with the "size" of a "pixel".

Two things sort of like that are pertinent:

• The pitch (on the focal plane) of the samples in the complete set of samples.

This affects the Nyquist frequency that pertains to the capture of the whole optical image by sampling.​

• The width of the sampling aperture (and of course, to be rigorous, the distribution of sensitivity across it, but we can get useful insights by thinking of it as of uniform sensitivity).

This affects the MTF of the sampling process, one of several MTFs that are cascaded to give the overall MTF of the digital capture process.

…It's the imaging area that is multiplied by four in the result; the pixel is not shrunk, each capture is still on a full pixel and over the same image area… When I said earlier that "the pixel is split into four true-colour ones", it means that you get four new pixels with the same characteristics as the one… each of them has the same size (9 µm) and is at the same position (differing only by a half-pixel movement) on the same image area measuring 37x49mm... It's not one area of 37x49 shot at 88MP; it's FOUR areas of 37x49 = an area of 74x98mm as a result, but not added together as we do with "merge". Instead, it's like one area of 74x98 shot with a lens that shows the same AOV as it does at 37x49 but has four times the resolution to serve an area of 74x98 at the same AOV… I hope this helps.
Not at all. I do not follow your premise.

Now, the fact that we (for example) shoot four times gives us more photometric energy overall than if we only shot once (assume a fixed exposure time), which would seem to be an advantage in noise considerations (not part of the issue here).

But of course that takes four times as long, and we could get that same total photometric energy input with just four times the exposure time in conventional technique.

Best regards,

Doug
 
It's not the same photometric energy, Doug… it's four times the photometric energy… It's like shooting four times the area with a lens that has four times the resolution… Look, there is really nothing more I can say to convince you; look at the Hasselblad samples that Jerome posted. Although they're not the "real" thing, they may help. Theory doesn't help here, Doug, because it's a theory that applies to a different situation; unless you understand the genius that is mechanically applied, you won't be able to understand what happens. It doesn't obey the theory you present… Most important, the theory behind it is of no worth to a photographer… It's the result that is important, and that, nobody who experiences it can deny…
 

Doug Kerr

Well-known member
Hi, Theodoros,

What is important is that you get the benefit of this situation in your own photographic work.

Best regards,

Doug
 

Doug Kerr

Well-known member
Hi, Jerome,

Thanks so much.

I'll look at those. They should give a nice idea of what we can get with the multishot and microshift multishot techniques.

Theodoros has commended review of these to me in support of his position, which is the subject of such bafflement to me.

But they do not read on that.

That issue is:

If we create a high-pixel-count image with microshift technique and a lower-pixel-count sensor, and wish to attain resolution commensurate with the potential of the pixel count, can we do that with a lower-performance lens than if we had created the high-pixel-count image in the regular way with a sensor having the pixel count we want for the image?​

Thanks again for the nice reference images.

Best regards,

Doug
 
Is the response we see here (if we were to take out the effect of the lens MTF) primarily due to aperture rolloff?

Hi Doug, the responses are mostly sampling aperture induced besides the half sensel pitch offset. I do not know the exact MTF of the lens alone, but empirical tests do suggest that for various sensors (4.88 - 6.4 micron pitch) the combined system MTF (lens + IR/OLPF + sensel-aperture) results in a similar Gaussian blur PSF at an optimal combination of residual lens aberrations plus diffraction and sensel aperture.

Could you remind me of the meaning of describing lens blur in terms of a sigma multiplier?

While lens aberration blur undoubtedly has a different PSF shape than diffraction (and defocus), the combination of several PSF shapes (Gaussian + Airy + box and disc) will quickly approximate a Gaussian intensity distribution. It's apparently similar to the central limit theorem in probability. Gaussian (Normal) distributions are fortunately relatively simple to calculate with, also in Fourier space.

It seems that in the green case the pitch of the sensor is 3 µm, thus the sample pitch of the complete sample ensemble (under 16-step operation) is 0.75 µm.

I am surprised that the use of the 16 step technique with that sensor results in a worse overall MTF (even though 4x the pixel count).

What am I missing?

The 3 micron pitch 1-step sensor (Red) uses 3 micron apertures, the 3 micron pitch 16-step sensor (Green) uses 6 micron apertures that overlap because the aperture is larger than the pitch, and the 6 micron pitch 1-step sensor (Blue) uses 6 micron apertures.

Cheers,
Bart
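Bart's central-limit observation is easy to reproduce numerically; a minimal 1-D sketch (kernel widths invented for illustration, not taken from any real sensor):

import numpy as np

# Convolving a few dissimilar blur kernels (box sensel aperture, defocus-like
# box, small Gaussian for residual aberrations) quickly yields a combined PSF
# that is close to a Gaussian of matching variance.
x = np.arange(-50, 51)

def box(width):
    k = (np.abs(x) <= width / 2).astype(float)
    return k / k.sum()

def gauss(sigma):
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

psf = box(6)                                     # sensel aperture
psf = np.convolve(psf, box(4), mode="same")      # crude defocus term
psf = np.convolve(psf, gauss(1.5), mode="same")  # residual aberrations

sigma_eff = np.sqrt(np.sum(psf * x**2))          # std of the combined PSF
print("effective sigma:", round(sigma_eff, 2))
print("max |PSF - Gaussian of same sigma|:", round(np.abs(psf - gauss(sigma_eff)).max(), 4))

The residual difference from a true Gaussian should already come out small relative to the PSF peak after only three convolutions.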
 

Doug Kerr

Well-known member
Hi, Bart,

Hi Doug, the responses are mostly sampling aperture induced besides the half sensel pitch offset. I do not know the exact MTF of the lens alone, but empirical tests do suggest that for various sensors (4.88 - 6.4 micron pitch) the combined system MTF (lens + IR/OLPF + sensel-aperture) results in a similar Gaussian blur PSF at an optimal combination of residual lens aberrations plus diffraction and sensel aperture.

Makes sense.

While lens aberration blur undoubtedly has a different PSF shape than diffraction (and defocus), the combination of several PSF shapes (Gaussian + Airy + box and disc) will quickly approximate a Gaussian intensity distribution. It's apparently similar to the central limit theorem in probability. Gaussian (Normal) distributions are fortunately relatively simple to calculate with, also in Fourier space.
Sure. But what does it mean when you say that the lens blur is 0.7 sigma? Does that suggest a size of the Airy disk? Do you perhaps mean that its sigma is 0.7 µm? That would make sense.

The 3 micron pitch 1-step sensor (Red) uses 3 micron apertures, the 3 micron pitch 16-step sensor (Green) uses 6 micron apertures that overlap because the aperture is larger than the pitch, and the 6 micron pitch 1-step sensor (Blue) uses 6 micron apertures.

I'm confused. What is the actual photodetector spacing on the green sensor?

Based on what has recently been discussed, I would suspect the green sensor has a 6 µm inter-photodetector pitch, and the resulting sampling pitch in this mode (from consolidation of the four sets of reads) is then 3 µm.

I would call that a 6 µm pitch sensor.​

Then the curves (green vs. red) suggest that getting a 3 µm image sampling pitch with four shifted reads of a 6 µm pitch sensor (microshift multishot technique) is slightly worse (in terms of overall resulting MTF) than getting it with an actual 3 µm pitch sensor (ordinary technique).

Which is exactly what I would suspect. For one thing, the aperture MTF matter would suggest that.

We are like the Three Stooges making beer. "Moe, did you put in the yeast?"

Thanks.

Best regards,

Doug
 
That issue is:

If we create a high-pixel-count image with microshift technique and a lower-pixel-count sensor, and wish to attain resolution commensurate with the potential of the pixel count, can we do that with a lower-performance lens than if we had created the high-pixel-count image in the regular way with a sensor having the pixel count we want for the image?​


Doug
True Doug… that's exactly what the "real" question is! However, we also have to take into account factors other than sensor pixel count: sensor size, lens uniformity across its image circle, image-area-to-lens-mount distance, and all the rest… The truth is that, as with all sensor sizes (APS-C, FF etc.), with MF sensors the more the pixel count increases, the less visible the gain in resolution becomes… (i.e. a P45+ (39MP) comes nowhere near presenting almost double the resolution of a P25+), while if MS (4x) is used, the difference appears to be huge although there is no increase of pixel count at all… For example, a Hasselblad H3D-39MS used in 4x MS not only appears to have much more resolving power than when used single-shot, but also easily outperforms a Hassy H4D-60… Now, if one compares an 80MP back with a 22MP back of the past, with the best possible lens used on both at its best aperture (say Leaf 12 vs Leaf 22 with the new 120 macro at f11, fitted on a Mamiya or P1 6x4.5 camera), there is a clearly visible resolution difference; but on a 111x148cm print, what you see is maybe 35% more resolution, not the 350% difference that the pixel count suggests.

Now, until a couple of years ago 16x MS was only available on 22MP backs; all the higher-resolution ones were only able to do 4x MS… This was because the piezoelectric crystal used to make the sensor movements wasn't mechanically able (it was, and probably still is, technologically restricted) to accurately control movements of less than 4.5 µm… However, if you do the above test (i.e. print 111x148cm) to compare 22MP 4x MS vs. 88MP 16x MS using the same sensor, and you use an absolutely superb tripod like my 40-year-old FATIF (I was given that masterpiece by a very old, now retired photographer), you won't believe your eyes when comparing the two prints… There is clearly four times the resolution present! OTOH, if you use my Manfrotto 058 and re-run the same test, the difference drops to something like 50-80% depending on the shutter speed used!

Which leads us to the following results:
1. Even the best of lenses don't "like" small pixels…
2. MS is far superior in resolving power alone (if the rest of the qualities (colour, DR etc.) come into account it is simply incomparable) to same-resolution single shot, and visibly better even at more than double the pixel count.
3. MS is greatly dependent on other variables (movement accuracy, constant lighting, vibration etc.); it is not a situation where it either fails or is done right. One may think it was done well (because no "error" or "movement detection" was flagged by the software) and still be under the wrong impression!
4. 16x gives four times the resolution of 4x MS, but it is far more sensitive to the MS variables (obviously because the magnitude of the sensor movement is so small, and thus much more vulnerable).

I hope I've helped your thoughts... One more thing: before I got the FATIF tripod (a couple of years ago) I thought I was doing 16x correctly; in fact I thought I was mastering it… It was only after I got the FATIF that I realised why so many people think of MS as a pain to do… If it's done correctly, it's exactly as easy as doing a long exposure… Pay some more attention to that saying of mine, which I will repeat: I have 100% success doing 16x with the Fuji GX680… most will tell you that even 4x is impossible with the Fuji!
 

Doug Kerr

Well-known member
Hi, Theodoros,

Thank you for your valuable observations.

This is indeed a very complicated area, in which I have no practical experience.

You have obviously given considerable attention to the details of practice and in perceptively observing the results.

One aspect that I am unable to really take into account, even theoretically, is that of the implications of CFA interpolation and what performance improvements we get when we "finesse it out of the picture" with 4X MS operation (by itself or as a "layer" of 16X MS operation). From what you say, there is enormous leverage to that ploy.

Best regards,

Doug
 
I thought your impression was that 4x and 16x are related, Doug… They are not! 16x is a completely different result as far as resolution is concerned. In 4x you shoot on a certain sensor size/pixel size four times; in 16x you do exactly the same (i.e. shoot on the same sensor size/pixel size) but with the sensor in four different positions. So there isn't any "layer" of 4x involved; it is much more relevant to think of it as multiplying the image area by four, yet keeping the lens/sensor properties on each of the four areas. The result is that you get 88MP out of a 22MP sensor, but each pixel has exactly the same information (if the process is done correctly/accurately) as if it were at the same position on the sensor, with the same pixel size, and acting as 4x… There isn't any interpolation involved; both 4x and 16x are "pure" true-colour methods, where the result is produced by a special dedicated software program which doesn't do "interpolated processing" but rather "mechanical translation". The difference in 16x is one of mechanical accuracy required, not an alteration of the optical characteristics of the 4x technique…

P.S. On the Hasselblad 200MS it seems that "microstep" (it is done using 6x, not 16x, there) is related to 4x, and there is interpolation of the 4x result involved, with two more mechanical movements to assist in interpolating the result… I'm not sure exactly how it works though, but the technique is surely not the same as doing 16x.
 

Doug Kerr

Well-known member
Hi, Theodoros,

I thought your impression was that 4x and 16x are related
What I meant is that, as I understand it, in both 4x and 16x MS operation we avoid the need for CFA interpolation, providing an actual "tricolor" analysis of each "sample location" in the optical image that will be represented by a pixel of the digital image.

But, in 16X operation, we have the additional feature of doubling the V and H sampling rates compared to the sensel pitch of the sensor, thus providing an actual tricolor analysis of four times as many sample locations in the optical image as we do in 4X operation, leading to four times as many pixels in the digital image as with 4X operation.

I used the term "layer" as a metaphor to refer to these two benefits of 16X operation.

Perhaps I misunderstand the principles involved.

Best regards,

Doug
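Doug's first point (full-sensel offsets finessing CFA interpolation away) can be illustrated in a few lines. A toy sketch, assuming an RGGB layout and ignoring optics and noise; all names and sizes here are invented, not any vendor's pipeline:

import numpy as np

# Four exposures at full-sensel offsets let every pixel site be measured
# through R, G and B, so full color per site needs no interpolation at all.
rng = np.random.default_rng(0)
scene = rng.random((8, 8, 3))          # "optical image": true RGB at each site

bayer = np.array([[0, 1],              # R G   (channel index seen per CFA cell)
                  [1, 2]])             # G B

def expose(scene, dy, dx):
    """One exposure with the sensor shifted by a full sensel (dy, dx)."""
    h, w, _ = scene.shape
    ch = bayer[(np.arange(h)[:, None] + dy) % 2, (np.arange(w) + dx) % 2]
    return np.take_along_axis(scene, ch[..., None], axis=2)[..., 0], ch

rgb = np.zeros_like(scene)
for dy, dx in [(0, 0), (0, 1), (1, 0), (1, 1)]:   # the four full-sensel offsets
    sample, ch = expose(scene, dy, dx)
    np.put_along_axis(rgb, ch[..., None], sample[..., None], axis=2)

print("true tricolor recovered at every site:",
      all(np.allclose(rgb[..., c], scene[..., c]) for c in range(3)))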
 
Hi Doug… you do not misunderstand the principle, you misunderstand the result mechanically. This is because you try to think of it as an area of given size, while in effect the area is quadrupled. In 16x the sensor moves by half a pixel side, which results in another (completely new) image that is shot in between the pixels. What this means is that, given that the lens's resolving ability is superior to the sensor's pixel density, you get a new image with new effective information (still of the same characteristics as the sensor's true pixel density, but completely different as far as the information recorded is concerned), which is then translated into a quadrupled image area as if it were shot with the original pixel density, but with the AOV & DOF characteristics of the original image area. This means the final result is four times the original resolution, without any extra resolution being demanded from the lens. Hence, the resolution advance is real, and the result's success depends only on mechanical accuracy, or the magnitude of error.
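The mechanical scheme Theodoros describes can be modelled in a few lines. A toy sketch with illustrative sizes (not Hasselblad's or Sinar's actual processing): the same full-sensel aperture samples the optical image at four half-sensel offsets, and the four captures interleave onto a grid of half the pitch.

import numpy as np

# Four captures with the same large aperture, offset by half a sensel,
# interleaved onto a half-pitch grid: pixel count quadruples, aperture unchanged.
rng = np.random.default_rng(1)
optical = rng.random((64, 64))     # finely rendered optical image
A = 8                              # sensel aperture width, in fine-grid units
half = A // 2                      # half-sensel shift

def capture(img, oy, ox):
    """Box-average over A x A apertures on an A-pitch grid, shifted by (oy, ox)."""
    img = np.roll(img, (-oy, -ox), axis=(0, 1))
    h, w = img.shape
    return img.reshape(h // A, A, w // A, A).mean(axis=(1, 3))

shots = {(oy, ox): capture(optical, oy, ox) for oy in (0, half) for ox in (0, half)}
n = shots[(0, 0)].shape[0]
combined = np.zeros((2 * n, 2 * n))
for (oy, ox), s in shots.items():  # interleave: pitch halves, aperture stays A
    combined[oy // half::2, ox // half::2] = s

print("single capture:", shots[(0, 0)].shape, "-> combined:", combined.shape)

The sampling pitch halves (so the Nyquist limit doubles) while the aperture stays a full sensel wide, which is exactly the overlapping-aperture configuration Bart describes for his green curve.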
 

Doug Kerr

Well-known member
Hi, Bart,

with regard to this very valuable figure:

SamplingDensity3.png

I am still eager to be clear as to the situation for the green curve. Does it pertain to:

• a CFA sensor with a sensel pitch of 3 µm (and a sampling aperture we can think of as 3 µm square), operated in 16X "microstep" multishot technique to give an overall (true trichromatic) sampling of the image at a pitch of 1.5 µm, or

• a CFA sensor with a sensel pitch of 6 µm (and a sampling aperture we can think of as 6 µm square), operated in 16X "microstep" multishot technique to give an overall (true trichromatic) sampling of the image at a pitch of 3 µm, or

• something else?

My bet is on the blue.

Thanks.

Best regards,

Doug
 

Asher Kelman

OPF Owner/Editor-in-Chief

As I wrote, from what you describe here, you seem to be merely sampling the photons arriving at the focal plane more finely, not creating more detail than the lens was already delivering.

Asher
 
Hi, Bart,

with regard to this very valuable figure:

SamplingDensity3.png

I am still eager to be clear as to the situation for the green curve.

Hi Doug,

Sorry for the delay, I have several other duties that claim my time, but I'm trying to prepare something to explain it a bit better. The main issue is the distinction between sampling pitch and sensel pitch, and I may have to add to/change the chart a bit to make it clearer that the green line represents a simulation of a 3 micron sampling pitch, with a 6 micron aperture and 6 micron sensel pitch. The simulation is not exact enough, due to the complex interaction of lens + AA-filter (if any) and sampling aperture.

It would clarify a lot to first see and quantify the effect based on an actual recording, e.g. of a slanted-edge and star target.

Does it pertain to:

• a CFA sensor with a sensel pitch of 3 µm (and a sampling aperture we can think of as 3 µm square), operated in 16X "microstep" multishot technique to give an overall (true trichromatic) sampling of the image at a pitch of 1.5 µm, or

• a CFA sensor with a sensel pitch of 6 µm (and a sampling aperture we can think of as 6 µm square), operated in 16X "microstep" multishot technique to give an overall (true trichromatic) sampling of the image at a pitch of 3 µm, or

• something else?

My bet is on the blue.

Correct, it's more like the second scenario, with some complications. More accurate color is achieved with the 4 step positions (full sensel offsets, to allow each of the R/G/G/B filters to sample once at each sensel position), and Luminance resolution is mostly enhanced by the additional half-sensel offset steps.

Cheers,
Bart
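To make the distinction concrete, here is the sensel-aperture part of those three curves in isolation (lens and AA-filter terms deliberately omitted, so the numbers only sketch the aperture contribution Bart describes):

import numpy as np

# The sensel-aperture MTF of the three configurations:
#   red:   3 um sampling pitch, 3 um aperture (true 3 um sensor)
#   green: 3 um sampling pitch, 6 um aperture (6 um sensor, half-sensel steps)
#   blue:  6 um sampling pitch, 6 um aperture (true 6 um sensor)
configs = {"red": (3.0, 3.0), "green": (3.0, 6.0), "blue": (6.0, 6.0)}

for name, (pitch_um, aperture_um) in configs.items():
    f_nyq = 1000.0 / (2 * pitch_um)            # Nyquist in cycles/mm
    mtf = abs(np.sinc(aperture_um / 1000 * f_nyq))
    print(f"{name:5s}: Nyquist = {f_nyq:5.1f} cy/mm, aperture MTF there = {mtf:.3f}")
# red  : Nyquist = 166.7 cy/mm, aperture MTF there = 0.637
# green: Nyquist = 166.7 cy/mm, aperture MTF there = 0.000
# blue : Nyquist =  83.3 cy/mm, aperture MTF there = 0.637

This matches Doug's earlier point: the green configuration's doubled aperture drives its aperture MTF to zero right at its (doubled) Nyquist frequency.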
 

Doug Kerr

Well-known member
Hi, Bart,

Sorry for the delay, I have several other duties that claim my time, . . .

Well, I would hope!

... but I'm trying to prepare something to explain it a bit better. The main issue is the distinction between sampling pitch and sensel pitch, and I may have to add to/change the chart a bit to make it clearer that the green line represents a simulation of a 3 micron sampling pitch, with a 6 micron aperture and 6 micron sensel pitch.

Well, that is what I thought.

The simulation is not exact enough, due to the complex interaction of lens+AA-filter (if any) and sampling aperture.

Sure. But it gives great insight into the basic issues involved.

Thanks so much.

Best regards,

Doug
 
As I wrote, from what you describe here, you seem to be merely sampling the photons arriving at the focal plane more finely, not creating more detail than the lens was already delivering.

Asher
I really don't see how you've come to this conclusion, Asher; if a pixel receives a different stream of photons, it records different information, and a pixel in a different position does receive a different stream.
 

Asher Kelman

OPF Owner/Editor-in-Chief

Theodoros,

As long as the lens outclasses the sensor, you can indeed sample more and collect more with increasingly tiny sensels, and also improve the color by correcting with shifts.

However, if the lens in use does not deliver more frequency information than a sensor with small enough sensels can capture, all you do by shifting is improve the color. You cannot acquire more information than is delivered to the focal plane. Whenever you have smaller pixels, you sample more accurately, until finally the limit reached is that of the distribution and overlap of the blurred points of focus, made worse by the progressive diffraction circles around each focus point that are caused by the light waves "scraping" against the aperture of the lens.

Right now, you seem to have lenses which do not limit your sampling capability. With a better back, you would reach that limit before you did any shift, and then all you could do is improve color, decrease noise, and deconvolve the effects of the aperture to get improvements.

So here in summary is my assertion for a superb sensor with the tiniest sensels (i.e. not one with a lens that outclasses it), moving in a flat plane:


Work done by the lens can only be sampled more densely; no additional detail from the scene can be resolved, no matter how the sensor is shifted. Color, of course, will be truer with appropriate pixel shifting and image reconstruction.



Asher
 

Doug Kerr

Well-known member
Consider two cases. In case B, we sample the optical image formed by the lens in four times as many places as in case A. Then:

• The Nyquist frequency for case B is twice that for case A.

• The upper bound on the attainable resolution in case B is twice that for case A.

• In neither case can the resulting digital image represent frequencies in the scene (remember the scene?) that are not present (with sufficient amplitude) in the optical image (owing to the MTF of the lens).

If the MTF of the lens is sufficient to produce a resolution in the optical image that is comparable to the higher resolution limit of case B, then we will enjoy that higher resolution in the digital image generated in case B. If not, then we will not.

This is not affected at all whether we sample in case B:

B1. With the "case A" sensor, but using microstep multishot technique.

B2. Using (in the ordinary way) a sensor with four times as many sensels as the "case A" sensor.

Now there are differences between cases B1 and B2:

a. Assuming that in either case the sampling aperture of the sensels is of a size comparable to the sensel pitch, the overall system MTF (to the digital image) for case B1 will be worse than for case B2. From this consideration alone, then, we might expect lower resolution under case B1 than under case B2.

b. For the systems being discussed, the use of microstep technique comes along (in the "16X" technique) with the original basic premise of multishot technique: to provide a bona fide tricolor capture of the color of each pixel. This is known to have many benefits compared to the usual interpolated CFA process, including in some cases an improvement in perceived resolution. Thus, assuming that case B1 is actually done with the "16X" technique, from this factor alone we might expect an improvement in perceived resolution in case B1 compared to case B2.

An assertion has been made that the use of case B1 allows a desired resolution (say, approaching the case B resolution limit) to be attained with a lens of poorer MTF than would be needed under case B2.

I have yet to hear a believable story about how that might come about. Fanciful discussions about an effective image area of four times the actual image size do not lead me to such a conclusion.

Best regards,

Doug
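Doug's case A / case B argument can be put in numbers. A sketch with illustrative figures, using a Gaussian stand-in for the lens MTF: denser sampling raises the Nyquist ceiling, but the amplitude recovered at a fine-detail frequency is whatever the lens MTF left there.

import numpy as np

# A 200 cycles/picture-width detail, seen through two lenses, sampled two ways.
n_fine, f_detail = 4096, 200
x = np.arange(n_fine) / n_fine
scene = np.sin(2 * np.pi * f_detail * x)

for label, sigma in [("sharp lens", 0.0005), ("soft lens", 0.0025)]:
    lens_mtf = np.exp(-2 * (np.pi * sigma * f_detail) ** 2)  # Gaussian-PSF MTF
    optical = lens_mtf * scene               # detail attenuated by the lens
    for n in (256, 1024):                    # case A (Nyquist 128) vs case B (512)
        s = optical[:: n_fine // n]
        spec = 2 * np.abs(np.fft.rfft(s)) / n
        seen = spec[f_detail] if f_detail <= n // 2 else 0.0  # above Nyquist: absent
        print(f"{label}, {n:4d} samples: amplitude at {f_detail} cy = {seen:.3f}")

With the sharp lens, case B's denser sampling captures the 200-cycle detail (amplitude ≈ 0.82); with the soft lens it captures almost nothing (≈ 0.007), no matter how densely we sample.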
 
Asher, it seems you've read nothing of the posts above… let me ask you a few questions about 16x multishot:

1. Do you shoot four different images, each of which has different pixel information from the others? The obvious answer is YES!
2. Do you shoot each and every one of those images on a sensor that has the same pixel density, and with the same lens? The obvious answer is YES!
3. Do you have FOUR results of different information, all of them of the same quality, but still totally different from each other? The obvious answer is YES!
4. Is each and every one of these shots made on a sensor of a certain pixel count, with a certain pixel size, and with photons coming through the same lens? The obvious answer is YES!

Then the result is as if the imaging area were quadrupled, with each pixel of the same quality as if it were shot on the original sensor!

…I don't see what one cannot understand here! I've compared a P65+ against my back's resolution when used in 16x mode with the same lens, and the comparison was laughable for the P65+… I was offered a "test" by "Yaya" (Yair Sahar of Leaf) on a trip of his to my country with a Leaf 80MP back, same as yours, which I accepted on condition that it be tested against my back in 16x mode… (Yair was going to bring a Contax-fit back from Israel with him)… guess what? Yair refused the comparison!

You think of the 16x result on the wrong basis, Asher… there is no way that a single-shot back of 88MP will resolve as much as a 22MP back shot in 16x mode, no matter what lens is used on that (imaginary) 88MP single-shot back… The result of correctly done 16x multishot is FOUR times the resolution of what the original sensor would provide; the result from any imaging area of a certain size, compared to another of the same size but with four times the pixel count, is nowhere near that… it's not even close to half the resolution increase!
 

Doug Kerr

Well-known member
Hi, Theodoros,

1. Do you shoot four different images, each of which has different pixel information from the others? The obvious answer is YES!
2. Do you shoot each and every one of those images on a sensor that has the same pixel density, and with the same lens? The obvious answer is YES!
3. Do you have FOUR results of different information, all of them of the same quality, but still totally different from each other? The obvious answer is YES!
4. Is each and every one of these shots made on a sensor of a certain pixel count, with a certain pixel size, and with photons coming through the same lens? The obvious answer is YES!

All true.

Each of those results indeed almost certainly has a different set of sample values.

However, if the optical image contains no frequencies at or above the Nyquist limit for that sensor's sensel pitch, then each of the four (indeed different) sets of sample values completely describes the very same overall image.

This is a fundamental concept of the theory of representing a continuous variable by sampling.​
If we were to take those four sets of sample values, and reconstruct an actual image from each of them, those four images would be identical.

And for each, that image can contain no frequencies above that Nyquist limit, and can contain no frequencies not present in the optical image.

Now if we take those four (quite different) sets of sample values and consolidate them into a single data set, its Nyquist limit will be twice the Nyquist limit we had for any single set by itself (since the consolidated data set represents sampling at twice the rate of any individual data set).

Thus the consolidated sample set could carry frequencies up to twice the frequency that could have been carried by any one of them alone.

But in no case can it carry any frequencies not present in the optical image.

The matter of four times the image area

The model of the four "copies" of the optical image is a red herring.

They are of course identical.

We sample each of them at a certain pitch, with the sampling grids offset. Thus we get four different sets of sample values.

If there are no frequencies in the image (and thus in any of its four copies) at or above the Nyquist limit determined by the sampling pitch, then each of those four sets of sample values completely describes the identical image.

We cannot take four identical copies of a photograph at a certain resolution (in terms of cycles per picture height) and claim that collectively they constitute a photograph twice as high and wide, which therefore has twice that resolution in terms of cycles per picture height.

Well, that is, I can't.

Best regards,

Doug
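The consolidation step is easy to verify numerically. A minimal 1-D sketch with toy numbers: a 40-cycle tone is above the Nyquist limit of either offset sample set alone (64 samples, so 32 cycles), so each set aliases it; the interleaved set, with half the pitch, carries it at its true frequency.

import numpy as np

# Two sample sets offset by half a pitch, then consolidated by interleaving.
n, f = 64, 40
t = np.arange(2 * n) / (2 * n)            # the dense half-pitch grid
signal = np.sin(2 * np.pi * f * t)

set_a, set_b = signal[0::2], signal[1::2] # each set: 64 samples, Nyquist = 32

def peak_bin(s):
    return int(np.argmax(np.abs(np.fft.rfft(s))))

print("set A alone shows the tone at bin", peak_bin(set_a))   # 24 = 64 - 40, aliased
print("set B alone shows the tone at bin", peak_bin(set_b))   # 24, aliased as well
print("consolidated set shows it at bin", peak_bin(signal))   # 40, the true frequency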
 
And what is the factor that will restrict the frequencies at or below the Nyquist limit of the sensor's sensel pitch, Doug? …the 120mm f4 APO Zeiss micro used at f11 or f8 to provide photons to a 9x9 µm pixel?
Now you should get it, Doug… in one case (16x MS) you have a superb lens and 88 million 9x9 µm areas that receive the different photon rays for a total of 88 million pixels, while OTOH you have the same superb lens and 88 million 4.5x4.5 µm areas that receive photon rays for the same number of pixels… and you expect the resolution to be anywhere near?
 

Doug Kerr

Well-known member
Hi, Theodoros.

And what is the factor that will restrict the frequencies at or below the Nyquist limit of the sensor's sensel pitch, Doug? …the 120mm f4 APO Zeiss micro used at f11 or f8 to provide photons to a 9x9 µm pixel?

I'm not sure I understand your question.

Best regards,

Doug
 
Read all of it, Doug; I was editing it to make it more understandable. Can't you see that the Nyquist limit of the sensor's sensel pitch is also quadrupled? Each new pixel out of the 88 million, Doug, is a combination of FOUR different pixels of 9x9 µm each! …you see, as one pixel moves halfway into its neighbour, and then the one from the side also moves into the area, you have FOUR pixels in one position at the end… more than that, although the area that all four occupy is of 4.5x4.5 µm size, they all carry the same information as if the pixel were of 9x9 µm size!
 

Doug Kerr

Well-known member
Hi, Theodoros,

And what is the factor that will restrict the frequencies at or below the Nyquist limit of the sensor's sensel pitch, Doug? …the 120mm f4 APO Zeiss micro used at f11 or f8 to provide photons to a 9x9 µm pixel?
Now you should get it, Doug… in one case (the MS 16x) you have a superb lens and a 9x9 µm area that receives the different photon rays for a total of 88 million pixels, while OTOH you have the same superb lens and a 4.5 µm area that receives photon rays for the same number of pixels… and you expect the resolution to be anywhere near?

Ah, I get it.

You seem to think that a larger sampling aperture increases the resolution of the digital image (let's say, assuming the same overall effective sample pitch).

Not so.

In fact, the theoretical concept of representation by sampling works best with an infinitesimal sampling aperture.

But of course in our situation, sampling an optical image, a tiny aperture is disadvantageous, since the small number of photons captured for any given photometric exposure leads to poor signal-to-noise ratio at low photometric exposures.

So we enlarge the sample aperture, perhaps to one that has dimensions almost equal to the sensel pitch.

Now what does that do:

a. Gives us more photons into the photodetector for any given photometric exposure, improving the signal-to-noise ratio at low photometric exposures.

b. Does not change the Nyquist limit (which depends only on the sampling pitch); thus it does not change that limit on the range of frequencies from the image that can possibly be represented by the set of samples.

c. Gives the sensor a worse MTF (thus potentially degrading the overall MTF for any given lens MTF).

Now, when we arrange for a desired "higher" pixel count by microstep technique, we enlarge the sampling aperture (compared to the sample pitch) by perhaps a factor of 2:1. What does that do, contrasted to just using a sensor with that larger pixel count?

1. Saves money (as such a sensor would be even more expensive than the one we are planning to use).

2. Makes no change in the Nyquist limit; thus it does not change that limit on the range of frequencies from the image that can possibly be represented by the set of samples.

3. Gives the sensor a yet worse MTF (thus potentially further degrading the overall MTF for any given lens MTF).

Best regards,

Doug
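Points (b)/(c) and (2)/(3) can be checked with a toy measurement (illustrative numbers; the FFT-based circular convolution keeps the toy exact for a periodic tone): widening the aperture at a fixed sampling pitch leaves the representable band alone but attenuates what lands in it.

import numpy as np

# Box-average a test tone with aperture width a, then sample at pitch p.
fine = 64                                 # dense sub-steps per sampling pitch
n = 128                                   # output samples (Nyquist = 64 cycles)
t = np.arange(n * fine) / (n * fine)      # one picture-width on a dense grid
f = 50                                    # test tone, below the Nyquist limit
tone = np.sin(2 * np.pi * f * t)

for a_over_p in (1, 2):                   # aperture = pitch, then 2 x pitch
    kernel = np.ones(a_over_p * fine) / (a_over_p * fine)
    blurred = np.fft.irfft(np.fft.rfft(tone) * np.fft.rfft(kernel, tone.size), tone.size)
    samples = blurred[::fine]             # identical sampling pitch both times
    amp = 2 * np.abs(np.fft.rfft(samples))[f] / n
    print(f"aperture = {a_over_p} x pitch: measured amplitude = {amp:.3f},"
          f" theory |sinc| = {abs(np.sinc(a_over_p * f / n)):.3f}")

Both runs can represent exactly the same band (up to 64 cycles); only the contrast inside it drops, tracking the |sinc| aperture MTF.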
 
Now I was editing the previous one, Doug… let me repeat it to bring things back into order, so that other people can follow the conversation.

"Can't you see that the Nyquist limit of the sensor's sensel pitch is also quadrupled? Each new pixel out of the 88million one Doug is a combination of FOUR different pixels of 9x9μM each! …you see, as the pixel enters into its next by half and then the other from the side also enters the area, you have FOUR pixels in one position at the end… more than that, although the area that all four that coincide occupy, is of 4.5x4.5μM size, they all carry the same information as if the pixel was of 9x9μM size!"
 

Doug Kerr

Well-known member
Hi, Theodoros,
Read all of it, Doug; I was editing it to make it more understandable.

Sorry, I'm bad at shooting at moving targets.

Can't you see that the Nyquist limit of the sensor's sensel pitch is quadrupled too?

You have a persistently condescending style.

Quadrupled between which situation and which other situation?

But let me move forward anyway, guessing what you mean.

If we start with a sensor with a certain sensel pitch, used in the "normal" way, its Nyquist limit is based on the sample pitch, which is the same as the sensel pitch.

If we use it to take four "exposures", with the sensor moved 1/2 the sensel pitch between each, then with regard to each one in isolation, the Nyquist limit is based on the sample pitch, which is the same as the sensel pitch - just the same as before.

However, if we consolidate all four sets of sample values, to be used for an image with four times the number of pixels as in the first case, then the Nyquist limit that controls the result is based on the resulting sample pitch, which is 1/2 the sensor sensel pitch. That Nyquist limit is twice the one for the first case.

Thus we now have an image with four times the number of pixels as in the first case, and with a Nyquist limit that is twice that of the first case.

None of this says anything about the overall MTF of the system (which we may simplistically describe in terms of digital image "resolution"), other than that we must have it drop to zero at or before the applicable Nyquist limit.

None of this is affected by the sampling aperture size.

Best regards,

Doug
 
But Doug… you don't have that:
"the Nyquist limit is based on the sample pitch, which is the same as the sensel pitch - just the same as before" …which you stated above. You also have FOUR different pixels that coincide at each and every one of the 88 million positions! In total you have 88 million positions of 9x9 µm size that coincide with another three sets of 88 million pixel areas! The 88MP you get has the properties of four times the sensor's area! Exactly as if the original sensor were 74x98mm in area with 9x9 µm pixels, and ALL its properties were "squeezed" into a 37x49mm area…
 