On color filter array (CFA) sensing

Doug Kerr

Well-known member
The preponderance of still cameras we discuss here use color filter array (CFA) sensing.

In this technique, the sensor does not "measure" the color (and of course by color I mean the combination of luminance and chromaticity) of the optical image at any pixel position.

And, notwithstanding the incredible ingenuity of "CFA interpolation" algorithms, we do not in general have, for any pixel of the delivered digital image, a "precise" value of the color of the corresponding point in the optical image. The compromise affects not only the accuracy of pixel chromaticity but the accuracy of pixel luminance as well.

One impact of this inaccuracy can be a degradation of perceived resolution of the image (more on this later).

That notwithstanding, amazingly we attain what is for most of our uses a "really good" rendition of the color distribution of the optical image. But not "perfect".

I have recently been introduced to what is called "multishot" technique, used prominently in high-performance "medium format" digital camera backs.

Thanks to Bart and Theodoros, among others, for illuminating this for me.

Here, the optical image is sampled four times with a CFA sensor, the sensor being shifted between "exposures" by the sensor pitch, vertically or horizontally. The result is that each pixel location in the optical image is examined in sequence by photodetectors with "R", "B", and (twice) "G" responses (not necessarily in that order). Thus we have created a bona fide "tricolor" measurement of the color of each pixel location of the image (just as we would have, for example, in a "three-chip" studio video camera, or a "Foveon sensor" still camera).
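
To make the bookkeeping concrete, here is a minimal sketch (Python with numpy; the layout, helper name, and inputs are all hypothetical) of how the four shifted exposures could be combined, assuming an RGGB Bayer cell and frames already registered into common scene coordinates:

Code:
import numpy as np

BAYER = np.array([["R", "G"],
                  ["G", "B"]])  # assumed RGGB cell, not any specific camera

SHIFTS = [(0, 0), (0, 1), (1, 0), (1, 1)]  # sensor offsets, in sensel units

def combine_multishot(frames):
    """frames: four HxW raw arrays, one per sensor position (hypothetical)."""
    h, w = frames[0].shape
    rgb = np.zeros((h, w, 3))
    count = np.zeros((h, w, 3))
    chan = {"R": 0, "G": 1, "B": 2}
    yy, xx = np.mgrid[0:h, 0:w]
    for frame, (dy, dx) in zip(frames, SHIFTS):
        # which filter covered each scene point during this exposure
        filt = BAYER[(yy + dy) % 2, (xx + dx) % 2]
        for name, c in chan.items():
            mask = (filt == name)
            rgb[..., c][mask] += frame[mask]
            count[..., c][mask] += 1
    # per pixel: R once, B once, G twice -- a direct tricolor
    # measurement, with no interpolation anywhere
    return rgb / count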

We get some insight into the benefit of this in this article:

http://www.luminous-landscape.com/reviews/cameras/h3d50ii.shtml

Of particular interest is the discussion of Figure #3. Here, we see how (for a subject that is really "gray scale") there is a noticeable improvement in perceived resolution when we move (for the same pixel pitch) from the interpolated CFA mode to the true tricolor sensing mode (done via multishot technique).

Very interesting.

Best regards,

Doug
 

Asher Kelman

OPF Owner/Editor-in-Chief
Doug,

I'm pleased to see this subject rediscussed and calibrated back to the excellent Dubovoy LL article. Now it's 2013 and we have Leaf Credo and Phase One systems at 80MP, but with 2010 prices!

Now what would be the math of shifting the camera on a micrometer stage to gather more information from the subject directly? I'd imagine that one would make tinier pixels in Photoshop and then align the stack of pictures and blend the entire stack. But how and with what results?

Asher
 

Doug Kerr

Well-known member
Hi, Asher,

Asher Kelman said:

I'm pleased to see this subject rediscussed and calibrated back to the excellent Dubovoy LL article. Now it's 2013 and we have Leaf Credo and Phase One systems at 80MP, but with 2010 prices!

Now what would be the math of shifting the camera on a micrometer stage to gather more information from the subject directly? I'd imagine that one would make tinier pixels in Photoshop and then align the stack of pictures and blend the entire stack. But how and with what results?

There is no more "information" to be gathered than by sampling the image at a pitch that corresponds to the highest (usable) frequency contained in the optical image (as deposited by the lens). Finer sampling generates no more actual information (that is, does not lead to a digital image, regardless of its bit count, of "higher resolution" than that of the optical image).

If micro manipulation of a certain sensor via a micrometer stage or similar apparatus allows us to sample the optical image at the optimal pitch for the useful frequencies in it (as deposited by the lens in use), then this could lead us to the highest-attainable resolution digital image (but one not having an "excess" bit count - the smallest possible bottle for the wine actually at hand).
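
As a toy demonstration of that point (Python; all the numbers below are invented and the units arbitrary): sample a band-limited signal at the pitch corresponding to its highest frequency, and sinc interpolation recovers the values between the samples, so a finer pitch would have recorded nothing new.

Code:
import numpy as np

f_max = 10.0                    # highest usable frequency (arbitrary units)
pitch = 1.0 / (2.0 * f_max)     # the corresponding sampling pitch

def signal(x):
    # band-limited test signal: components at 3 and 9, both below f_max
    return np.sin(2 * np.pi * 3.0 * x) + 0.5 * np.cos(2 * np.pi * 9.0 * x)

n = np.arange(-400, 401)
samples = signal(n * pitch)     # sample at that pitch only

def reconstruct(x):
    # Whittaker-Shannon interpolation from those samples alone
    return np.sum(samples * np.sinc((x - n * pitch) / pitch))

probes = np.linspace(-1.0, 1.0, 9) + pitch / 2   # points *between* samples
err = max(abs(reconstruct(x) - signal(x)) for x in probes)
print(f"worst-case reconstruction error: {err:.1e}")
# the error shrinks toward zero as the sample window grows: the "extra"
# values a finer pitch would have taken are already determined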

It is an interesting prospect.

Best regards,

Doug
 

Asher Kelman

OPF Owner/Editor-in-Chief
Well, Doug, isn't this how PhotoAcute works? It relies on the microscopic shifts between successive shots with a DSLR, even with mirror lockup!

They describe a gain of 2x resolution!

Asher
 

Doug Kerr

Well-known member
Asher Kelman said:

Well, Doug, isn't this how PhotoAcute works? It relies on the microscopic shifts between successive shots with a DSLR, even with mirror lockup!

They describe a gain of 2x resolution!

This is not sampling a single optical image at a finer pitch than the sensor sensel pitch (by moving the sensor across the image). That cannot extract frequencies in the scene detail not present in the image (at a useful amplitude) owing to rolloff of the camera MTF.

The Photoacute scheme is conceptually sampling the scene itself at a finer pitch (by moving the camera across the scene). Thus higher frequencies in the actual scene detail can be captured (which is of course our real interest).

Because the various sampling "passes" have a random offset (wherever you had the camera aimed when that exposure was taken), the process is not "straightforward". But it should be effective. And it is very clever.
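
For the curious, the classic "shift-and-add" scheme captures the idea in outline. This sketch (Python with numpy) is not PhotoAcute's actual algorithm, and it assumes a registration step has already estimated each frame's sub-pixel offset:

Code:
import numpy as np

def shift_and_add(frames, offsets, factor=2):
    """Accumulate registered low-res frames onto a grid `factor` x finer.

    offsets: per-frame (dy, dx) in low-res pixel units, assumed known."""
    h, w = frames[0].shape
    acc = np.zeros((h * factor, w * factor))
    cnt = np.zeros_like(acc)
    yy, xx = np.mgrid[0:h, 0:w]
    for frame, (dy, dx) in zip(frames, offsets):
        # place each low-res sample into its nearest fine-grid cell
        hy = np.clip(np.round((yy + dy) * factor).astype(int), 0, h * factor - 1)
        hx = np.clip(np.round((xx + dx) * factor).astype(int), 0, w * factor - 1)
        np.add.at(acc, (hy, hx), frame)
        np.add.at(cnt, (hy, hx), 1)
    # average where we have samples; a real implementation would also
    # fill the remaining holes and deconvolve the pixel-aperture blur
    return np.where(cnt > 0, acc / np.maximum(cnt, 1), 0.0)

With random offsets, some fine-grid cells collect several samples and some none at all, which is why the registration and fill-in steps carry most of the cleverness.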

Best regards,

Doug
 

Asher Kelman

OPF Owner/Editor-in-Chief
Thanks, Doug,

That's exactly what I was proposing above in post #2 where one would move the camera itself by known increments. PhotoAcute, as you note, takes advantage of random movements.

Asher
 

Doug Kerr

Well-known member
Hi Asher,

Asher Kelman said:

Thanks, Doug,

That's exactly what I was proposing above in post #2 where one would move the camera itself by known increments.

I had missed that, and (through careless reading) thought you were speaking of moving the sensor.

So my various observations were mis-aimed.

Sorry!

Best regards,

Doug
 

Asher Kelman

OPF Owner/Editor-in-Chief
So, Doug, (my good friend with tons of patience for my prodding technical questions), if PhotoAcute can gain so much with random movements about a fixed point, then what would predictable micro increments in prescribed directions be able to deliver? What would be the path to discovering the potential improvement limit?

I think one might be able to set the camera to, say, ISO 1600, deal with the noise in software, and then take a sequence of 3-5 pictures at tiny increments of lateral or vertical shift.

Asher
 

Doug Kerr

Well-known member
Hi, Asher,

Asher Kelman said:

So, Doug, (my good friend with tons of patience for my prodding technical questions), if PhotoAcute can gain so much with random movements about a fixed point, then what would predictable micro increments in prescribed directions be able to deliver? What would be the path to discovering the potential improvement limit?

Well, let me think about that.

Asher Kelman said:

I think one might be able to set the camera to, say, ISO 1600 . . .

Not sure there is any need for that for "brassboard" trials, as there is no prospect for a rapid execution of the sequence anyway.

Can one of the nifty automatic panoramic multishot mounts be adapted to this form of combat?

Best regards,

Doug
 

Asher Kelman

OPF Owner/Editor-in-Chief
Doug Kerr said:

Well, let me think about that.

Not sure there is any need for that for "brassboard" trials, as there is no prospect for a rapid execution of the sequence anyway.

Can one of the nifty automatic panoramic multishot mounts be adapted to this form of combat?


Well, I think the sequence could be rapidly executed with a stepping motor. Why would one need a multishot mount too? I guess one could combine the two, but the advantage of doing this would be to bipass the need to spend so much. One could simply use any modern DSLR and mount it on a suitable stage!

Asher
 

Arthur Haselden

New member
The logical limit of the sensor, per shot, should be what it would capture if it could sample every color at every pixel. With a CFA we get a good guess of what the right color is at the pixel level. It would be better with shift-based sampling as done by Hasselblad.

As you mention, PhotoAcute adds sub-pixel shifts to create super-resolution. I guess that most of the gain comes where the pixels did not line up with the image detail.

The limit is set by the contrast within the diffraction spot for the wavelengths involved. You can sample at the diffraction spot size, or you can supersample the very small contrasts within the central diffraction peak. But wouldn't you then get bad color data from the diffraction rings hitting nearby pixels of the same color?
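
For a sense of that scale, the usual rule of thumb puts the Airy-disk diameter at about 2.44 x wavelength x f-number (Python, illustrative numbers only):

Code:
# Airy-disk diameter (to the first dark ring) ~ 2.44 * wavelength * f-number
wavelength_um = 0.55              # green light, ~550 nm
for N in (2.8, 5.6, 8, 11):
    print(f"f/{N}: Airy disk ~ {2.44 * wavelength_um * N:.1f} um")
# f/2.8 ~ 3.8 um, f/8 ~ 10.7 um: sensels much finer than this mostly
# resample the lens's own diffraction blur rather than new scene detail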

I have some of the best lenses/cameras as measured on DxO. When I look at my images, the color realism is far higher at 50% zoom than at 100% zoom. The resolution data looks good at the pixel level; the color data seems weaker.
 