pixels or PIXELS??

First of all, hello for my first post here. I am a long-time RG member, but it seems extremely slow over there these days, and I see a lot of familiar names who used to frequent RG.

I have been curious about the answer to this question for some time now.
1st - General statement - Larger pixels are better (less noisy) as in pro bodies.
When a lower-than-maximum resolution has been selected on the 1DmkIIN, what is going on with the pixels on the chip? Let's use 4 megapixels as an easy example (as opposed to 8 MP). Does the camera somehow combine pixels on the chip to actually create "larger pixels", does it use every other pixel, or does it do something else? I am guessing that Chuck probably knows the real answer to this question. There is nothing in any of the manuals (that I can locate) that addresses this question.
Thank you in advance,
Chris
 

John Sheehy

New member
First of all, hello for my first post here. I am a long-time RG member, but it seems extremely slow over there these days, and I see a lot of familiar names who used to frequent RG.

I have been curious about the answer to this question for some time now.
1st - General statement - Larger pixels are better (less noisy) as in pro bodies.
When a lower-than-maximum resolution has been selected on the 1DmkIIN, what is going on with the pixels on the chip? Let's use 4 megapixels as an easy example (as opposed to 8 MP). Does the camera somehow combine pixels on the chip to actually create "larger pixels", does it use every other pixel, or does it do something else? I am guessing that Chuck probably knows the real answer to this question. There is nothing in any of the manuals (that I can locate) that addresses this question.

The images look like they are converted from RAW to 8-bit (similar to an 8-bit TIFF), and then downsampled before writing to JPEG. If you shoot RAW+JPEG, the RAW is always at the full resolution of the camera with Canons.

No combining of sensor pixels is happening. Skipping pixels is not an option on a professional camera; it gives horrible results, and how would you skip pixels anyway, when the camera has a color array like this:

RGRGRG
GBGBGB
RGRGRG
GBGBGB

? You'd have all one color.

You could skip every other pixel after conversion, but that can lead to aliasing.
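
To make the color-array point concrete, here is a toy Python sketch (purely illustrative; this is not Canon's actual pipeline) showing that taking every other pixel in both directions of a Bayer mosaic lands on the same color site every time:

import numpy as np

# Build a tiny Bayer color-filter map: RGRGRG / GBGBGB repeating.
cfa = np.empty((6, 6), dtype='<U1')
cfa[0::2, 0::2] = 'R'   # red sites on even rows/columns
cfa[0::2, 1::2] = 'G'
cfa[1::2, 0::2] = 'G'
cfa[1::2, 1::2] = 'B'   # blue sites on odd rows/columns

# "Skip every other pixel" in both directions:
print(cfa[::2, ::2])    # all 'R' -- you'd have all one color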

Nothing that Canon does in-camera to reduce image size really improves anything about the image, other than the noise at a 100% view. But you don't take shots for the 100% view; you take them for the image, and printed at the same size from a smaller resolution, the image is blockier and more pixelated. Pixelation is a type of noise, too, and it exaggerates the smaller amount of noise a downsample has. Reduction in image size is best saved for the last step: display.
 
1st - General statement - Larger pixels are better (less noisy) as in pro bodies.

In general (everything else being equal), that's correct. However, digital cameras also use dedicated image-processing chips (ASICs); Canon's are called DIGIC (I, II, III). Newer versions of this supporting circuitry also improve image quality, as do advances in manufacturing processes.

When a lower-than-maximum resolution has been selected on the 1DmkIIN, what is going on with the pixels on the chip?

The new pixels are each the result of 'some' weighted average of the more numerous ones in the Raw data file. The 'some' part is handled by the fast/dedicated/proprietary hardware integrated circuit. Whether the Raw (Bayer CFA) data is first converted to RGB before an average is calculated is proprietary. However, I can imagine that some shortcuts are taken for speed's sake.

Postprocessing/resampling in software will often allow higher quality results, but then speed is less of an issue than in a camera that needs to clear buffers and write to a card to be ready for the next shot.
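
For the flavor of a weighted-average downsample, here is a sketch in Python of the simplest case, an equal-weight box filter over a single channel (the weights a camera actually uses are proprietary, as noted above):

import numpy as np

def box_downsample(img, factor):
    """Average non-overlapping factor x factor blocks (equal weights)."""
    h, w = img.shape
    h, w = h - h % factor, w - w % factor          # crop to a multiple
    blocks = img[:h, :w].reshape(h // factor, factor, w // factor, factor)
    return blocks.mean(axis=(1, 3))

channel = np.random.rand(8, 8)        # stand-in for a demosaiced channel
small = box_downsample(channel, 2)    # 8x8 -> 4x4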

Bart
 

Asher Kelman

OPF Owner/Editor-in-Chief
My real hope is that when Canon finally introduces its Foveon-like sensor, it will have the ability to BIN, i.e., combine adjacent pixel sites to make super-sized pixels with much greater light sensitivity and lower noise.

The advent of the new CMOS sensors, if they are introduced this next time around (which is perhaps too hopeful), would indeed revolutionize photography. Each pixel will be separately addressable in an individual camera. This will mean algorithms specific to many particular setups, to optimize everything you might imagine.

At present, we are still finishing the megapixel races. We now have tiny sensors with pixel widths down to 1-2 microns, and 10 million of them packed onto a digicam's tiny chip for marketing prowess, at the expense of picture quality.

This race is so damn destructive and a waste of resources!

Asher
 
My real hope is that when Canon finally introduces its Foveon-like sensor, it will have the ability to BIN, i.e., combine adjacent pixel sites to make super-sized pixels with much greater light sensitivity and lower noise.

Actually, there's no free lunch with binning either. One either gets more sensitivity by adding multiple sensel responses into a new (lower-resolution) output pixel, or one averages the response of multiple sensels into a lower-resolution but also lower-photon-shot-noise (by a factor of 1/SQRT(#sensels)) output pixel. Canon's read noise is already close to as low as physics allows, without using 'deeper' potential wells that can receive longer exposures and thus have lower photon shot noise.
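
A quick Poisson simulation of this shot-noise arithmetic (illustrative numbers only, nothing sensor-specific):

import numpy as np

rng = np.random.default_rng(0)
mean_photons = 100                                 # photons per sensel
groups = rng.poisson(mean_photons, (100_000, 4))   # 2x2 sensel groups

summed = groups.sum(axis=1)      # bin by addition: more "sensitivity"
averaged = groups.mean(axis=1)   # bin by averaging: lower noise level

print(mean_photons / np.sqrt(mean_photons))   # single-sensel SNR ~ 10
print(summed.mean() / summed.std())           # ~ 20: SNR up by SQRT(4)
print(averaged.mean() / averaged.std())       # ~ 20: same SNR, lower signal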

The advent of the new CMOS sensors, if they are introduced this next time around (which is perhaps too hopeful), would indeed revolutionize photography.

I'm not holding my breath, because I think we'll need to wait longer for such sensor arrays. It would be interesting to have an RGB-per-pixel capture capability, but the patents I saw suggest issues with fast-moving subjects.

On the other hand, if Canon were to introduce something like a double-exposure mode with a half-pixel shift between exposures ... that would cause medium-format manufacturers to worry about their market potential. Double resolution with very interesting anti-aliasing characteristics; I could see several fields of use for that.
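
A 1-D toy sketch of the sampling idea (the hardware is speculative, of course; this only illustrates the interleaving):

import numpy as np

def scene(t):
    return np.sin(2 * np.pi * t / 5.0)   # stand-in for detail on the sensor

x = np.arange(16.0)            # sensel centers, pitch = 1
shot_a = scene(x)              # first exposure
shot_b = scene(x + 0.5)        # second exposure, shifted half a pixel

combined = np.empty(2 * x.size)
combined[0::2] = shot_a        # interleave the two exposures:
combined[1::2] = shot_b        # effective pitch 0.5, i.e. double resolution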

At present, we are still finishing the megapixel races. We now have tiny sensors with pixel widths down to 1-2 microns, and 10 million of them packed onto a digicam's tiny chip for marketing prowess, at the expense of picture quality.

This race is so damn destructive and a waste of resources!

Indeed, for those seeking image quality (with low noise and high dynamic range), small sensels are a dead-end proposition.

Bart
 

John Sheehy

New member
Actually, there's no free lunch with binning either. One either gets more sensitivity by adding multiple sensel responses into a new (lower-resolution) output pixel, or one averages the response of multiple sensels into a lower-resolution but also lower-photon-shot-noise (by a factor of 1/SQRT(#sensels)) output pixel. Canon's read noise is already close to as low as physics allows, without using 'deeper' potential wells that can receive longer exposures and thus have lower photon shot noise.

Binning 2x2 on the sensor allegedly gives half the readout noise of binning in software; that's what Dalsa claims for their forthcoming 28/7 MP sensor. That's a real gain, but for most photographs, I'd rather have the resolution, personally. Unfortunately, the decision for on-chip binning must be made at readout time, so you can't exercise the option at a future point, or make a luminance-masked blend of the low-res/low-noise binned image and the higher-res/higher-noise image (unless you use separate exposures).
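
The claimed advantage is easy to see in a simulation (numbers illustrative, not Dalsa's): on-chip binning sums the charge first and reads it out once, while software binning pays the read noise on each of the four sensel reads:

import numpy as np

rng = np.random.default_rng(1)
read_noise = 5.0    # electrons RMS per read (made-up figure)
n = 100_000

soft = rng.normal(0, read_noise, (n, 4)).sum(axis=1)  # four reads, then sum
hard = rng.normal(0, read_noise, n)                   # charge summed, one read

print(soft.std())   # ~ 10 e-: SQRT(4) * 5
print(hard.std())   # ~  5 e-: half the software-binned read noise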

Indeed, for those seeking image quality (with low noise and high dynamic range), small sensels are a dead-end proposition.

Why? Other than the obstacles of maintaining a high fill factor, and having to read pixels faster to maintain current shot-to-shot times, what is the problem with smaller sensels?

Smaller sensels don't increase *image* noise; they increase pixel noise. The more pixels in the image, the smaller each pixel's voice is in the chorus of noise. Image shot noise only increases when the photon capture per unit of area decreases (for the same size/format). Nine 9,000-electron sensels in the same area that one 81,000-electron sensel would occupy should not increase image shot noise at all. Taken to an extreme, having a *list* of the analog location of every photon that hits the focal plane would be ideal, as it would greatly increase resolution, with its infinite number of infinitesimally small pixels. Binning such information would do nothing to increase potential IQ; it's only a requirement of providing a low-res bitmap of the capture.
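
The 9 x 9,000 vs. 1 x 81,000 arithmetic checks out under Poisson statistics (pure simulation):

import numpy as np

rng = np.random.default_rng(2)
n = 100_000

big = rng.poisson(81_000, n)                      # one large sensel
small = rng.poisson(9_000, (n, 9)).sum(axis=1)    # nine small sensels, summed

print(big.mean() / big.std())      # ~ SQRT(81000) ~ 285
print(small.mean() / small.std())  # same SNR: total photons are what matter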

The very high noise we see in 7+ MP P&S cameras is probably due to factors other than smaller sensels: readout time/quality compromises, poor microlenses, 100% pixel views and aggressive noise reduction geared toward them, sloppy blackpointing, etc.
 
..., for those seeking image quality (with low noise and high dynamic range), small sensels are a dead-end proposition.

Why? Other than the obstacles of maintaining a high fill factor, and having to read pixels faster to maintain current shot-to-shot times, what is the problem with smaller sensels?

If you look at the sensels as apertures, it becomes clear that smaller sensels require longer exposure times to collect the same number of photons. They are therefore 'less sensitive' and photon shot noise is worse. Even if the well depth could be enhanced to improve photon statistics, lower sensitivity will limit its usefulness.

Smaller sensels don't increase *image* noise; they increase pixel noise. The more pixels in the image, the smaller each pixel's voice is in the chorus of noise. Image shot noise only increases when the photon capture per unit of area decreases (for the same size/format). Nine 9,000-electron sensels in the same area that one 81,000-electron sensel would occupy should not increase image shot noise at all.

In a perfect world it wouldn't, but as sensels get smaller, the fill factor gets exponentially worse.

I agree with Roger Clark that quantum efficiency needs to be improved, and that keeping sensels as large as possible in the size/resolution trade-off is the way forward.

Bart
 

Erik DeBill

New member
In a perfect world it wouldn't, but as sensels get smaller, the fill factor gets exponentially worse.

I agree with Roger Clark that quantum efficiency needs to be improved, and that keeping sensels as large as possible in the size/resolution trade-off is the way forward.

I have faith that they'll find solutions to the fill factor problems. I firmly believe that one day the really high end cameras will have effectively infinite resolution at the sensor level (ignoring optical effects at the lens/aperture) and we'll downsample/combine things to get down to the resolution we actually need.

All the rules about fill factors, noise levels, etc. are only really true within a given process used to design and produce the two sensors being compared. That's why sensors from different manufacturers can have different performance with the same-size sensels. They all just keep getting better. I know it's slowing down, but they're still making these things with 15-20-year-old IC tech (or so says my brother-in-law, who's in the IC fabrication industry). I'm sure there's a lot more performance to squeeze out.

Every time they try to say that hard drive or processor technology is about to hit a technical barrier and stop progressing, something comes along and knocks that barrier down. I'm not going to bet against that happening with sensors.
 

John Sheehy

New member
If you look at the sensels as apertures, it becomes clear that smaller sensels require longer exposure times to collect the same number of photons. They are therefore 'less sensitive' and photon shot noise is worse.

Per pixel. That does not necessarily harm the image. There is no direct correlation between standard deviation and image noise. Standard deviation does not take noise frequency (and noise frequency vs image frequency) within the image into account.

Take an under-exposed, noisy image from a DSLR and bin it 2x2, 3x3, 4x4, etc. The noise keeps decreasing as a statistic, but it does not decrease much visually when the image is viewed at the same magnification as the original. The image just gets worse and worse the more you bin. Its pixelation is a type of noise, and the pixelation exaggerates the lower statistical noise that remains.
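
You can watch the statistic fall while the image gets no better (a sketch, with synthetic white noise standing in for the underexposed frame):

import numpy as np

def bin_avg(img, n):
    h = (img.shape[0] // n) * n
    w = (img.shape[1] // n) * n
    return img[:h, :w].reshape(h // n, n, w // n, n).mean(axis=(1, 3))

noise = np.random.default_rng(3).normal(0, 10.0, (512, 512))
for n in (2, 3, 4):
    print(n, round(bin_avg(noise, n).std(), 2))   # ~ 10/n each time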

The same subject from the same distance, with the same ISO setting, Tv and Av values, and focal length on my 10D and my Sony F707 makes the 10D look *very* sad. I bet that the 5D would look sad compared to most of the current small-sensor 7 and 8MP cameras, with the same real world crop, and everything else equal.

Even if the well depth could be enhanced to improve photon statistics, lower sensitivity will limit its usefulness.

There are plenty of situations where the extra well depth would be useful. ISO 100 is overkill with a wide lens on a sunny day.

In a perfect world it wouldn't, but as sensels get smaller, the fill factor gets exponentially worse.

I agree with Roger Clark that quantum efficiency needs to be improved, and that keeping sensels as large as possible in the size/resolution trade-off is the way forward.

I don't agree with a lot of Roger's conclusions.

Roger is comparing a low-end P&S to DSLRs, at the pixel level. The 5D and 1DmkII, in any event, do nothing better than the 20D with their bigger pixel pitches, as far as shot noise is concerned, and read noise is only significantly lower on the 1DmkII at ISOs 100 and 200. The only reason they outperform the 20D is because the coarser pixel pitch allows lower lens MTF for the same results. The 5D has more pixels, too, which miniaturizes the noise and increases maximum image resolution.
 

John Sheehy

New member
The same subject from the same distance, with the same ISO setting, Tv and Av values, and focal length on my 10D and my Sony F707 makes the 10D look *very* sad. I bet that the 5D would look sad compared to most of the current small-sensor 7 and 8MP cameras, with the same real world crop, and everything else equal.

I have a finer-pixel-pitch camera than my F707 now: the Panasonic FZ50.

Here's a rough simulation of what it might be like if a 1.6x-crop DSLR had the pixel spacing of the Panasonic FZ50 (and equal electronics):

http://www.pbase.com/jps_photo/image/74020772/original

Now, I understand the Canon 10D has a fairly strong AA filter (it takes almost 3 pixel lengths to go from 90% white to 10% black across an optically sharp edge), but this is the closest I can simulate: my largest-pixel-pitch DSLR with a sharp lens (Tamron 90mm Macro) vs. my tiny-pixel-pitch FZ50.
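
For reference, that edge-spread figure comes from counting pixels along an edge profile, something like this (made-up profile values, purely to show the measurement):

import numpy as np

# Normalized intensity across an edge crop (hypothetical values).
profile = np.array([0.05, 0.10, 0.40, 0.70, 0.92, 0.98])

lo = np.argmax(profile >= 0.10)   # first sample at/above 10%
hi = np.argmax(profile >= 0.90)   # first sample at/above 90%
print(hi - lo, "pixel lengths from 10% to 90%")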

I'd rather have the images from the tiny pixels.

Pixelation is a kind of "noise", IMO, and it actually exaggerates the noise that exists at the pixel level.

Obviously, not every lens made for the 35mm format or for APS DSLRs has the resolution to benefit from the smaller pixels, but many do.
 