New color-filter doubles the light sensitivity in digital cameras

Asher Kelman

OPF Owner/Editor-in-Chief
Hello Everyone,
I read this on CNN... I wonder if Canon is going to adopt this. I checked to see if this was posted before; since I didn't see it, here it is. ;)
New filter promises crisper photos

William,

This is what's important.

"When the shutter opens on a digital camera, an image is projected onto the sensor, which converts light into an electric charge. Most sensors use the Bayer mask: Half of the millions of cells on a checkerboard grid are filtered to collect green light and a quarter each are filtered to let through red and blue light. A computer chip then reconstructs a full color signal for each pixel in the final image.

The new method, which has been under development for more than five years, adds "panchromatic" cells that are sensitive to all wavelengths of visible light and collect a larger amount of light striking the sensor.

Tailoring software algorithms to this unique new pattern enables faster shutter speeds, which reduces blurring when capturing a moving subject, McNiffe said."
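
For a rough feel of the trade-off quoted above, here is a quick sketch comparing how much light a classic Bayer tile passes with a tile containing panchromatic cells. The pan tile and the throughput figures are illustrative guesses of mine, not Kodak's published layout:

```python
# A minimal sketch, assuming a color filter passes ~1/3 of visible light
# and a panchromatic ("W") cell passes nearly all of it.
import numpy as np

bayer_tile = np.array([["G", "R"],
                       ["B", "G"]])   # classic Bayer: half green, 1/4 red, 1/4 blue

pan_tile = np.array([["W", "G"],
                     ["R", "W"]])     # hypothetical panchromatic variant

def relative_throughput(tile):
    """Average fraction of visible light reaching the sensels of a tile."""
    passes = {"R": 1/3, "G": 1/3, "B": 1/3, "W": 1.0}
    return np.mean([passes[c] for c in tile.ravel()])

print(relative_throughput(bayer_tile))  # ~0.33
print(relative_throughput(pan_tile))    # ~0.67, roughly one stop more light
```

Swapping half the tile for panchromatic cells roughly doubles the collected light, which is where a claimed one-to-two-stop gain would come from.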

So they are talking about the equivalent of a Foveon sensor. AFAIK, we have at least the following in this race:

  1. Foveon as in the Sigma Cameras
  2. Canon as in their patents
  3. The groups in San Francisco: a consortium supporting a professor's work in an independent company. Canon is one of the companies, along with HP and others, as far as I remember. Development kits were available to bona fide manufacturers as of at least two years ago.
  4. Now Kodak: is this part of 3 above or, like Canon, is it independent?
Hopefully, someone else can add the links.

From the pattern of application by HP and Kodak of the Texas Instruments image-optimization chip (their "DIGIC", so to speak) to consumer cameras, this could actually appear in mass-produced digicams first. This fits in with Kodak's profit center: Kodak is a major, if not the major, digicam player in the USA.

Asher
 

Chuck Fry

New member
So one trades mostly color resolution for sensitivity. Color resolution in a Bayer CFA is already somewhat compromised, so this is not going in the right direction.

But the human eye has much poorer resolution for color than for brightness. Here's a piece from an interview with the Kodak researchers:

Q: This new technology seems to work more like human vision - a combination of color pixels and panchromatic pixels, just like rods and cones in the eye.

JC: Actually the human retina has the best color perception in the fovea, the little tight area in the center of the retina. That is where most of the cones are. Elsewhere in the eye, you find mostly rods. And they let you see black & white. Your vision relies more on black and white. So these sensor patterns are really different in that the color and the panchromatic pixels are distributed uniformly across the sensor.

JH: But in terms of luminance, the human vision system has better resolution capability than color perception acuity. We take advantage of that in JPEG and we take advantage of that in NTSC. And now we're taking advantage of that in the design of image sensors as well. We see more shades of gray than we see different colors. We're very adaptive to light. It's a logarithmic system, and that's why we can go through so many different ranges of brightness.
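
A toy sketch of the luma/chroma asymmetry JH describes, in the spirit of JPEG's 4:2:0 subsampling (the function and the block-averaging scheme below are illustrative, not Kodak's or the JPEG committee's actual code):

```python
# Keep luminance (Y) at full resolution, block-average the chroma planes
# (Cb, Cr), and the result still looks largely intact to the human eye.
import numpy as np

def subsample_chroma(ycbcr, factor=2):
    """ycbcr: float array of shape (H, W, 3). Returns a copy with Cb/Cr
    averaged over factor x factor blocks, as JPEG-style subsampling does."""
    h, w, _ = ycbcr.shape
    hh, ww = h // factor * factor, w // factor * factor  # crop to whole blocks
    out = ycbcr.copy()
    for c in (1, 2):  # chroma channels only; channel 0 (Y) stays untouched
        blocks = ycbcr[:hh, :ww, c].reshape(hh // factor, factor,
                                            ww // factor, factor)
        small = blocks.mean(axis=(1, 3))
        out[:hh, :ww, c] = np.repeat(np.repeat(small, factor, axis=0),
                                     factor, axis=1)
    return out

# Example: a random "image" just to show shapes are preserved.
img = np.random.rand(8, 8, 3)
print(subsample_chroma(img).shape)  # (8, 8, 3): Y full-res, chroma smoothed
```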
 
Do you mean color resolution or color accuracy is "compromised"?

This new array could exaggerate the artifacts from interpolation, no?

Also, if you think of using this "pattern" on a 10 MP sensor to produce a 6 MP image, there might not be as much "loss" as you would expect from the pattern?

All just speculation - I'm not even sure if I understand this stuff correctly, but this is where my thoughts have led me...
 
But the human eye has much poorer resolution for color than for brightness.

That's correct, and that's why the Bayer Color Filter Array (CFA) works so well for human vision. One has only to look at e.g. a Lab or HSV version of most scenes to recognize the relative(!) unimportance of the chromatic components.

However, the proposed new filter layout(s) compromise color resolution even further. One could refer to this article to see that we were already pushing the limits back in 2004, enough to call for a solution (although I don't agree with part of the author's analysis of the Foveon concept). The proposed new layout is going to be too much of a compromise, IMHO, especially when, e.g., Canon has demonstrated great ISO 3200 performance in its 1D Mark III with a 'conventional/traditional' layout.

Bart
 
Do you mean color resolution or color accuracy is "compromised"?

Resolution, because the chroma sampling is sparser than in the current filter layout.

This new array could exaggerate the artifacts from interpolation, no?

Well, it'll make the artifacts harder for the raw converter to hide, that's for sure. How successful such an attempt will be remains to be seen (I'm not holding my breath).

Also, if you think of using this "pattern" on a 10 MP sensor to produce a 6 MP image, there might not be as much "loss" as you would expect from the pattern?

All just speculation - I'm not even sure if I understand this stuff correctly, but this is where my thoughts have led me...

It all depends on the intended use of the image, of course. However, for many professional uses 10 MP is not good enough, due to output-size requirements. Restricting the output resolution even more than the current 'compromise' does seems to be heading in the wrong direction, IMHO.
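
For a feel of the numbers behind the output-size point, a quick sketch (the 300 ppi print standard and 3:2 aspect ratio are assumed parameters, purely illustrative):

```python
# Quick arithmetic: the largest print a given pixel count supports at a
# target print resolution.
def max_print_inches(megapixels, aspect=3/2, ppi=300):
    """Returns (width, height) in inches for a sensor of the given aspect."""
    pixels = megapixels * 1e6
    height_px = (pixels / aspect) ** 0.5
    return (height_px * aspect / ppi, height_px / ppi)

print(max_print_inches(10))  # ~ (12.9, 8.6) inches at 300 ppi
```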

Bart
 

John Sheehy

New member
"The result is an array that is one to two stops more sensitive to light. It will presumably yield less real color information that the old Bayer algorithms,..."

So one trades mostly color resolution for sensitivity. Color resolution in a Bayer CFA is already somewhat compromised, so this is not going in the right direction.

The pattern that has come along with the news actually gives 1/2 the color resolution of the one I have envisioned for a while now. The one in the Kodak news is on the left; the one I envision is on the right:

[Image: the Kodak pattern (left) vs. the pattern John envisions (right)]


The Kodak pattern is clearly optimized for hardware binning; each exclusive 2x2 tile has two filters of the same color and two panchromatics, so it is easy to bin this data with no real color shift. At the full resolution of the sensor, however, it is lossy for color resolution compared to my pattern.
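
A rough sketch of the binning described above, assuming a tile with two like-colored cells and two panchromatic cells (the layout and values are illustrative, not Kodak's published specification):

```python
# Bin a 2x2 tile with two like-colored cells and two panchromatic ("W")
# cells: summing each kind separately halves linear resolution without
# mixing colors.
import numpy as np

def bin_tile(values, kinds):
    """values: 2x2 raw counts; kinds: 2x2 labels such as 'G' or 'W'."""
    values, kinds = np.asarray(values), np.asarray(kinds)
    color = values[kinds != "W"].sum()  # one binned color sample per tile
    pan = values[kinds == "W"].sum()    # one binned panchromatic sample
    return color, pan

# Example green/panchromatic tile:
print(bin_tile([[10, 40], [42, 12]],
               [["G", "W"], ["W", "G"]]))  # -> (22, 82)
```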
 

Steve Saunders

New member
If you read through the hype, this isn't really a breakthrough for all digital cameras, and it definitely isn't a Foveon-type sensor. They have effectively compromised colour information for less noise at higher ISO. This won't be good enough for all applications, but for the likes of astro shooting and low-light shooting, where colour accuracy isn't needed, it will be a good idea. We might see cameras dedicated specifically to low-light and astro use with this type of sensor, similar to the way Fuji brought out an IR-type DSLR by playing with the S3.
 
If you read through the hype, this isn't really a breakthrough for all digital cameras, and it definitely isn't a Foveon-type sensor.
This I agree with. The given sensor does not have color data for every channel at every sensel.
They have effectively compromised colour information for less noise at higher ISO.
They may actually have increased the resolution and quality of the color output by cleanly separating luminosity data from chroma/color data at capture. Due to the questionable use of a standard formula for reconstructing luminosity from Bayer array data, the actual resolution of the luminosity data is higher, while the color data is compromised to half the resolution (1/4 the data).

But, by having the luminosity sensels capture more light, one could theoretically halve the sensel size (1/4 the area) and retain the same color resolution while gaining significantly more luminosity data and resolution (the luminosity resolution is twice that of the color data in terms of the Nyquist frequency).
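
As a back-of-envelope check of that Nyquist claim, assuming panchromatic cells on a checkerboard and a 5-micron pitch (both assumptions of mine; the exact factor depends on the actual cell arrangement):

```python
# Nyquist limit = 1 / (2 * sample pitch). On a checkerboard, panchromatic
# cells sit sqrt(2) * p apart along their densest axes; a single color on
# every second sensel of every second row has a pitch of 2 * p.
pitch = 5e-6  # assumed 5-micron sensel pitch

def nyquist(sample_pitch):
    return 1.0 / (2.0 * sample_pitch)

luma_pitch = 2 ** 0.5 * pitch
chroma_pitch = 2 * pitch

print(nyquist(luma_pitch) / nyquist(chroma_pitch))  # ~1.41x finer luma detail
```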

And since that level of color inaccuracy is only visible in a print at nose-print distance (10 cm or less), it is of negligible value to anyone but print makers and photographers.

This won't be good enough for all applications, but for the likes of astro shooting and low-light shooting, where colour accuracy isn't needed, it will be a good idea.

But color is vital to astro shooters! An accurately captured color can help identify a star. Astronomers are very interested in the spectral decomposition of light (think of hydrogen line spectra, IIRC). That said, faster shutter speeds and the ability to take more properly exposed captures in the same time frame mean more viable data with less blur and noise.


The noted discussion of the distribution of rods and cones in the retina is of little value in the article. The distribution of rods and cones in the human eye has far more impact on photographic composition (this is what the foolish rule of thirds attempts to wash over: it replaces the barely complex reality of the retina with lies-to-children that grossly oversimplify reality and hurt people's ability to learn to compose images).

enjoy your day,

Sean
 

Asher Kelman

OPF Owner/Editor-in-Chief
[...]
The noted discussion of the distribution of rods and cones in the retina is of little value in the article. The distribution of rods and cones in the human eye has far more impact on photographic composition (this is what the foolish rule of thirds attempts to wash over: it replaces the barely complex reality of the retina with lies-to-children that grossly oversimplify reality and hurt people's ability to learn to compose images).
Hi Sean,

What a fascinating Discworld you have introduced me to! It provides magical stories that act as parables to introduce, explain, and simplify our concepts of science, showing humor towards them and sometimes distance from them.

This world has pages of definitions and seems as detailed as the laws for being a good Catholic, Muslim, Hindu, Jew, communist, or other follower of a creed that explains everything.

If one follows that (and one needs to invest time), one perhaps falls in with the Rosie O'Donnells who assert that the USA blew up its own buildings and that 9/11 was an inside job, since it was supposedly impossible for the Twin Towers to collapse from the airplane hits or for the Pentagon to be damaged as shown by a civil aircraft!

Now I could venture into this new wizard's Discworld, or else go back to the influence of rods and cones on the rule of thirds. What is the science, not fable, that we need to know?

Asher
 
Now I could venture into this new wizard's Discworld, or else go back to the influence of rods and cones on the rule of thirds. What is the science, not fable, that we need to know?

Hi Asher,

The big one here is that the density of cones (color perception) is very high near the center of our eyes (where we focus), while the rods sit primarily in our peripheral vision. Rods are extremely excitable with very little discrimination (light or dark), which makes them acutely good at detecting motion (a survival skill) but very poor at resolving detail. Hence, focusing on a straight line with your peripheral vision is very difficult, as the resulting image data is rather blurry.

An interesting way to test this is to go out on a dark night (or a moonlit night in a dark place) and note how, as you shift from light to dark, the quality of focus fades away. I first really saw this while tracking a flying heron in the dark, where I could hear it honking but could only see a slightly lighter blurry shape fly by at 30 feet (10 m). But simply shifting from direct moonlight (bright enough for the cones) to being shaded from the moonlight will show a strong transition in the acuteness of one's visual discrimination.

Meanwhile, the cones have their highest density in the center of the eye (I cannot remember whether the arc of sharp vision in the human eye is 2 degrees or 5 degrees), with things gradually blurring as we move away from the center.

What is important here is that when we focus on the thirds points of an image at a reasonable viewing distance, the edge of the picture frame falls in our peripheral vision. This allows one to focus on the subject of the image without the frame interfering with our suspension of disbelief, and allows us to see the subject of an image as real rather than as a flat two-dimensional representation of reality.

Beyond that, one simply balances the body language of the subjects in an image to achieve a balanced-feeling composition. Hence, the distribution of rods and cones in the eye tells us that we want to move the subject of an image away from the edges if we wish the subject to be viewed as the subject, rather than as an element of the image. An example of the latter could be a triptych or diptych with a single subject spread across multiple prints. There is neither a right nor a wrong answer here; instead, this simply takes a rule and transforms it into effects which can be used additively or subtractively to work towards the photographer's narrative intent.

The big thing we get from looking at the distribution of rods and cones is that we can step away from the numbers and start looking at how a composition feels, constructively, rather than following some blind numerical mantra that prescribes a result that may not work.

This is the whole reason I call the rule of thirds idiotic: it prescribes things for no good reason and without any understanding.

Another powerful subject area that greatly influences composition is the understanding of the 3-D to 2-D transformation and how it influences perceived depth in a flat 2-D image. Consider the classic example of a fence post or telephone pole sticking out above the head of a human subject, and how squatting can simplify the image.

some thoughts,

Sean
 
They may actually have increased the resolution and quality of the color output by cleanly separating luminosity data from chroma/color data at capture.

Hardly. The genius of Dr. Bryce E. Bayer's patented layout is still underestimated by many (not necessarily by you ;-)).

Due to the questionable use of a standard formula for reconstructing luminosity from Bayer array data, the actual resolution of the luminosity data is higher, while the color data is compromised to half the resolution (1/4 the data).

In fact, having investigated this myself, the Bayer CFA layout allows very good luminance reconstruction, at least since 2004 (and it has improved since), so I take the "questionable" qualification as "being questioned by some".
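
For intuition only: even a crude interpolation recovers a dense luminance estimate from a Bayer mosaic. A naive sketch (real raw converters use far more sophisticated algorithms than this):

```python
import numpy as np

def naive_luma_from_bayer(raw):
    """raw: (H, W) Bayer mosaic (RGGB assumed). Returns a rough
    full-resolution luminance estimate via 3x3 local averaging, which
    mixes R, G, and B contributions at every output sensel."""
    h, w = raw.shape
    padded = np.pad(raw.astype(float), 1, mode="edge")
    out = np.zeros((h, w))
    for dy in range(3):          # sum the 3x3 neighborhood of each sensel
        for dx in range(3):
            out += padded[dy:dy + h, dx:dx + w]
    return out / 9.0

# Example: a synthetic 4x4 mosaic just to show it runs.
print(naive_luma_from_bayer(np.arange(16.0).reshape(4, 4)).shape)  # (4, 4)
```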

But, by having the luminosity sensels capture more light, one could theoretically halve the sensel size (1/4 the area) and retain the same color resolution while gaining significantly more luminosity data and resolution (the luminosity resolution is twice that of the color data in terms of the Nyquist frequency).

And since that level of color inaccuracy is only visible in a print at nose-print distance (10 cm or less), it is of negligible value to anyone but print makers and photographers.

Thank goodness you mentioned the importance to us Photographers, since I for one wouldn't like being confused with an average snapshot cell-phone camera shooter.

But color is vital to astro shooters! An accurately captured color can help identify a star. Astronomers are very interested in the spectral decomposition of light (think of hydrogen line spectra, IIRC). That said, faster shutter speeds and the ability to take more properly exposed captures in the same time frame mean more viable data with less blur and noise.

For those astro shooters into spectral analysis, I'd agree. However, much to my surprise, many of them are out for pretty pictures (which doesn't mean they don't labor over the laws of physics to get them).

Bart
 