
Trade-offs between high speed and image quality versus high resolution

John Sheehy

New member
"even if they have been indoctrinated into a pixel-centric view."

Good Lord, what kind of mumbo-jumbo is that? I'm always amused when someone who engages in mega-pixel-peeping or spec-reading tries to convince the rest of us of something we know to be BS. Maybe my kids' Coolpix 2000s do something that my Canon equipment finds difficult, like DOF. So what? Keep your Panasonic Zwhatever. If it makes your life happier that's wonderful. I'll still use my inferior Canon bodies!

How do I comment on someone's lack of reading comprehension without coming across as rude? I need lessons.

You clearly did not understand anything I wrote. That sounds rude, but it is the truth.
 

John_Nevill

New member
There's a lot in this debate and hopefully more to come, so let's reinforce the OPF philosophy of enhancing the discussion and exchange of ideas on photography by keeping it on a level!
 

Asher Kelman

OPF Owner/Editor-in-Chief
I have no issue with Jack's caution about our looking at what to some may seem a fanciful journey into the absurd.

This is an important part of any board meeting, BTW. When people discuss the minutiae of putting in a new highway, someone else might interject, "Stop this nonsense! There are other, more pressing issues to devote our energies to." He may or may not be listened to.

This has to be recognized and is part of OPF.

However, I will say that it's also important to question certain "rules" we go by in composition, esthetics, and choice of equipment.

One of these has been that small pixels become useless because of noise and that the increased resolution per unit area is not delivered because of noise and well depth.

We need to understand the trade-offs here.

It is something like talking about vignetting: I personally like it for artistic effect and for pulling attention to the center of the image. However, for architecture and documentation, it is not acceptable.

Some trade-off debates seem silly, and perhaps they mostly are. However, for some of us, particular topics are important.

Asher
 
One of these has been that small pixels become useless because of noise and that the increased resolution per unit area is not delivered because of noise and well depth.

We need to understand the trade-offs here.

While still involved in finalizing my tax papers, I managed to do a small experiment that might help to keep the thought process going (until I manage to do some shots for a poll on output realism).

I used a Canon Powershot G3 at ISO 100 to shoot some Raws of a featureless grey cloudy sky while covering the lens with a piece of opaline glass. It'll be hard to produce a more boring picture than that, won't it? However, it may help to avoid as much visual/mental distraction as possible, as a basis for seeing and understanding the phenomena at play.

The first example shows a center crop of a single image, and the same for an averaged stack of 17 different Raws, converted to 16-bit/channel Adobe RGB TIFF images without sharpening or noise reduction and with default ACR parameters, but with a linear tone curve. The reported 'Std' is the original conversion's Standard deviation, before the sRGB + JPEG conversions shown here.

Original-size.jpg


The rationale behind it is to mimic the relative difference between the Raw converted shot noise of a sensor array with small sensels, and one with 16x larger sensels (e.g. between a 2 micron vs 8 micron sensel pitch). The 17th shot was an attempt to approximate the slightly better fill factor of larger sensels (where relatively less real estate is needed for transfer gates and such).

The first interesting observation is that while noise is reduced with the 'larger' sensels, it does not follow the 1/Sqrt(17) expectation, presumably because the Raw converter (and Bayer CFA demosaicing) produce noise patterns larger than single pixels. We are also looking at gamma adjusted images which will have some effect as well.
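That 1/Sqrt(17) expectation is easy to verify for uncorrelated noise with a small simulation (a sketch using synthetic Gaussian noise, not the actual G3 Raws; once demosaicing correlates neighbouring pixels, the real reduction falls short of this ideal):

```python
import numpy as np

rng = np.random.default_rng(42)

# 17 synthetic "frames" of a flat grey level plus Gaussian noise,
# standing in for the featureless-sky Raws (the levels are arbitrary).
frames = 128.0 + 4.0 * rng.standard_normal((17, 256, 256))

single_std = frames[0].std()
stack_std = frames.mean(axis=0).std()

print(f"single frame std     : {single_std:.2f}")
print(f"17-frame average std : {stack_std:.2f}")
print(f"1/Sqrt(17) prediction: {single_std / 17**0.5:.2f}")
```

With uncorrelated per-pixel noise the prediction holds closely; demosaiced data, where noise patterns span several pixels, reduces less.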

The second example uses the same images as above, but after first binning them to 1 output pixel for each 4x4=16 input pixels.

Images-binned-4x.jpg


Another interesting conclusion is that simple binning seems less effective (Std 1.51 vs 0.93) than larger sensels for noise reduction. Again, part of the reason is noise structures exceeding single pixel boundaries. Another reason, and I rarely hear people explaining it in the context of binning, is that simple binning itself can cause some aliasing artifacts unless properly AA-filtered.
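For readers who want to try it, the 4x4 binning step can be sketched as follows (again on synthetic noise rather than the actual converted Raws; note that this plain block average applies no anti-alias filter, which is exactly the aliasing risk mentioned above):

```python
import numpy as np

rng = np.random.default_rng(0)
img = 128.0 + 4.0 * rng.standard_normal((512, 512))  # synthetic noisy frame

def bin_image(a, factor=4):
    """Average each non-overlapping factor x factor block into one output pixel."""
    h, w = a.shape
    return a.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

binned = bin_image(img)
print(f"input std : {img.std():.2f}")
print(f"binned std: {binned.std():.2f}")  # ~input/4 for uncorrelated noise
```

For uncorrelated noise, averaging 16 pixels divides the standard deviation by 4; the smaller reduction Bart measured (1.51 vs the single-pixel case) reflects noise structure larger than one pixel.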

Yet another conclusion, therefore, is that the smaller sensels will run into ISO-related shooting restrictions sooner. The averaging/stacking procedure also shows (but that's hardly a surprise) that even a noisy sensor array can give low-noise results by shooting multiple images of a (stationary) subject and stacking/averaging those images.

All this doesn't yet answer the question of how output realism is affected, but that's something for a later evaluation.

Bart
 

Jack Joseph Jr

New member
Hey John, we got our soft deletes back!

How do I comment on someone's lack of reading comprehension without coming across as rude? I need lessons.

You clearly did not understand anything I wrote. That sounds rude, but it is the truth.

To continue your bashing of me following my bashing of you;

Don't worry about coming across as rude. Just state your case.

Indeed, I clearly don't understand what the point of your original post was. I'm not sure if you're touting a camera as being better than the recognized leaders in the SLR field or whether it was some kind of technical appraisal whose point was lost on me. Please state again what you're really after.
 

John Sheehy

New member
The first example shows a center crop of a single image, and the same for an averaged stack of 17 different Raws, converted to 16-bit/channel Adobe RGB TIFF images without sharpening or noise reduction and with default ACR parameters, but with a linear tone curve. The reported 'Std' is the original conversion's Standard deviation, before the sRGB + JPEG conversions shown here.

That's not a test of the pixels, then. The RAWs should be stacked or averaged in the mosaiced state.

The rationale behind it is to mimic the relative difference between the Raw converted shot noise of a sensor array with small sensels, and one with 16x larger sensels (e.g. between a 2 micron vs 8 micron sensel pitch). The 17th shot was an attempt to approximate the slightly better fill factor of larger sensels (where relatively less real estate is needed for transfer gates and such).

The first interesting observation is that while noise is reduced with the 'larger' sensels, it does not follow the 1/Sqrt(17) expectation, presumably because the Raw converter (and Bayer CFA demosaicing) produce noise patterns larger than single pixels. We are also looking at gamma adjusted images which will have some effect as well.

Obviously. Manipulating data that is no longer RAW does not emulate pixel effects.

The second example uses the same images as above, but after first binning them to 1 output pixel for each 4x4=16 input pixels.

Images-binned-4x.jpg

Did you notice the large gradient in the image on the right in the second pair of crops? Where did that come from?

Perhaps that's why standard deviation isn't reducing.

Do a histogram equalize on each square, and you'll see what I mean.
 

John Sheehy

New member
To continue your bashing of me following my bashing of you;

Don't worry about coming across as rude. Just state your case.

Indeed, I clearly don't understand what the point of your original post was. I'm not sure if you're touting a camera as being better than the recognized leaders in the SLR field or whether it was some kind of technical appraisal whose point was lost on me. Please state again what you're really after.

Big pixels are a temporary stopgap; that's my point. With the same true focal length lens, same ISO, same shutter speed, same f-stop, a camera like the FZ50 outperforms any crop from a DSLR with the same physical size as the FZ50 sensor, meaning that the popular notion that pixels should be big for maximum image quality is nonsense. The idea that using smaller pixels means loss of total photons is also false; the FZ50 collects as many photons per unit of area as the 1DmkII, full well, and does it at a higher ISO (is more sensitive). Again, per unit of area.

The images I included are actually a *WORST CASE SCENARIO* for the FZ50; the images are at ISO 1600, and ISO 1600 has no optimization on the FZ50, in fact, ISO 1600 is anti-optimized. When I get a chance, I will make similar comparisons at ISO 100, and DSLR at 1600 against FZ50 at ISO 100 pushed to 1600.
 

Jack Joseph Jr

New member
But John, I'm a photographer. I don't really give a rat's behind whether my camera's imaging pixels are 8 microns wide or just 0.5 microns. Nor do I care how many photons it takes to fill the well. It's what the pictures look like that counts.

I replaced a 1D2 with a 1D2N because I shoot tons of JPGs for work and the 1D2N JPGs just look better right out of the camera. Does it matter to me whether it's because of Picture Styles or maybe just improved processing technology? It's what the pictures look like that counts.

Given the limitations of the weak focusing system my 5D shoots better looking pictures than does my 1D2N. Is it because of improvements in the sensor, better demosaicing or just because there are so many more pixels that the spots of color noise just get relatively smaller on my monitor? Don't care, it's what the pictures look like that counts.

I'll leave the details and all of that camera engineering stuff to the actual engineers with MAs and PhDs who actively work in the digital photography R&D and manufacturing business. I like to talk tech occasionally, but enough is enough already. Photographers know what they want in terms of IQ, DR and all that stuff. Let the engineers satisfy our needs by whatever technology they choose to use.
 
Big pixels are a temporary stopgap; that's my point. With the same true focal length lens, same ISO, same shutter speed, same f-stop, a camera like the FZ50 outperforms any crop from a DSLR with the same physical size as the FZ50 sensor, meaning that the popular notion that pixels should be big for maximum image quality is nonsense.

But then why would one want to compare with a crop of 7.18x5.32 mm from a 36x24 mm sensor? That there is a resolution difference between a 2 micron pitch sensor array with dedicated optics (by the way, visually diffraction limited at approx. f/3 because of the small sensels) and e.g. a 7.2 micron pitch sensor array is rather obvious: the lens-projected image is sampled at a higher density. How one factors resolution and noise characteristics into quality is another matter and (to me anyway) it's the real topic of this thread.

The idea that using smaller pixels means loss of total photons is also false; the FZ50 collects as many photons per unit of area as the 1DmkII, full well, and does it at a higher ISO (is more sensitive). Again, per unit of area.

But it would take e.g. the sum of 16 or 17 smaller sensels to come close to capturing the same amount of photons from the same image projection. TANSTAAFL. Which is why the DPreview review of the FZ50 states:
"Our biggest complaint about the FZ30 was noise (and noise reduction), and there's no getting round the fact that it is also the FZ50's achilles' heel too. The Venus III noise reduction is undoubtedly effective at reducing measurable noise, but it does so in a way that produces a soft, smeared appearance even at ISO 100 if you view actual pixels on-screen. The excessive chroma noise reduction is particularly evident with low contrast detail such as distant foliage or hair."

Bart
 

John Sheehy

New member
But then why would one want to compare with a crop of 7.18x5.32 mm from a 36x24 mm sensor? That there is a resolution difference between a 2 micron pitch sensor array with dedicated optics (by the way, visually diffraction limited at approx. f/3 because of the small sensels) and e.g. a 7.2 micron pitch sensor array is rather obvious: the lens-projected image is sampled at a higher density. How one factors resolution and noise characteristics into quality is another matter and (to me anyway) it's the real topic of this thread.

Yes, it is. I'm still waiting to see anything but statements of faith from those who think that the pixel-level noise of the smaller pixels is an issue in the actual image. Where are all the tests like mine? The ones that really matter? I go to Roger's website, and he compares an 8MP P&S with an 8MP large-sensor DSLR, full-frame each, and concludes that "big pixels are better" when the real conclusion should be "big sensors are better". I meet many people who argue against my viewpoint who quote Roger as scripture, and it is getting very annoying. He may have good intentions, but he is popularizing many falsehoods, IMO. "Unity gain" is another one. DSLRs limited by bit depth at low ISOs is another (they are actually limited by analog noise in and/or before the ADC).

But it would take e.g. the sum of 16 or 17 smaller sensels to come close to capturing the same amount of photons from the same image projection. TANSTAAFL.

At ISO 100, you need about 10.9 FZ50 pixels to make one 1DmkII pixel, photon-wise. It takes 16.5 FZ50 pixels to cover the same area as one 1DmkII pixel. The FZ50 collects more photons per unit of area at ISO 100 by a factor of about 1.5x. At full well (requires ISO 50 on the 1DmkII), they are about the same.
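The arithmetic behind that ~1.5x figure, restated from the numbers quoted above (these are John's figures, taken as given, not independent measurements):

```python
# Figures quoted in the post above:
fz50_px_per_1d2_photons = 10.9  # FZ50 pixels that match one 1DmkII pixel, photon-wise
fz50_px_per_1d2_area = 16.5     # FZ50 pixels covering one 1DmkII pixel's area

advantage = fz50_px_per_1d2_area / fz50_px_per_1d2_photons
print(f"FZ50 photons per unit area vs 1DmkII at ISO 100: {advantage:.2f}x")
```

16.5 / 10.9 ≈ 1.51, i.e. the "about 1.5x" in the text.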

Which is why the DPreview review of the FZ50 states:
"Our biggest complaint about the FZ30 was noise (and noise reduction), and there's no getting round the fact that it is also the FZ50's achilles' heel too. The Venus III noise reduction is undoubtedly effective at reducing measurable noise, but it does so in a way that produces a soft, smeared appearance even at ISO 100 if you view actual pixels on-screen. The excessive chroma noise reduction is particularly evident with low contrast detail such as distant foliage or hair."

What does the Venus III noise reduction engine have to do with anything we are discussing? I thought we were talking about pixels; not JPEG rendering.

The RAW data from the FZ50 is sharp enough that it actually aliases to a small degree. The crops I demonstrated are very raw interpolations of the RAW data. Look at the whiskers in the FZ50 100% crop. No sharpening there at all. Each color plane is simply interpolated to fill in the CFA "holes".
 

Jack Joseph Jr

New member
I'm still waiting to see anything but statements of faith from those who think that the pixel-level noise of the smaller pixels is an issue in the actual image.

---

Sorry, I just can't stop chiming in. That "pixel-level noise of the smaller pixels is an issue"? Again, who cares? Are you trying to engineer a new line of cameras or promote a company's product line? Technological advance might someday make .25 micron imaging pixels have better DR than the current large ones. If so a future camera might take better pictures than current ones.

It's still the picture that counts. I have yet to buy a camera because of any particular technology. It's only the results that really matter.
 

John Sheehy

New member
Sorry, I just can't stop chiming in. That "pixel-level noise of the smaller pixels is an issue"? Again, who cares? Are you trying to engineer a new line of cameras or promote a company's product line? Technological advance might someday make .25 micron imaging pixels have better DR than the current large ones. If so a future camera might take better pictures than current ones.

It's still the picture that counts. I have yet to buy a camera because of any particular technology. It's only the results that really matter.

You're not making any sense. If you like the results, then you like the technology. You just don't know the details of it. Why do you post to this thread if you are not interested in its contents?
 
That's not a test of the pixels, then. The RAWs should be stacked or averaged in the mosaiced state.

That's correct. Unfortunately the (ImagesPlus) software for processing truly Raw (still mosaiced) data doesn't process the G3's CRW files yet. That functionality will probably be added by the end of the year.

I could have done the experiment with a Canon DSLR, but their noise is already quite low so the improvement would be hard to actually see (unless perhaps pushed to ISO 1600, but then that couldn't be compared with the G3 or even smaller sensel pitch cameras). I would have had to resort to data only, and show the results after full calibration with blackframes, offset/bias frames, etc. I'm afraid that would have lost the entire readership, because they can't relate that to an image anymore.
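For reference, the calibration mentioned (blackframes, offset/bias frames) is in essence simple frame arithmetic; a minimal synthetic sketch (the levels and noise here are illustrative, not measured from any camera):

```python
import numpy as np

rng = np.random.default_rng(1)
shape = (64, 64)

bias = 512.0 + rng.standard_normal(shape)            # offset/bias frame
dark = bias + 20.0 + rng.standard_normal(shape)      # bias plus thermal dark current
light = dark + 1000.0 + rng.standard_normal(shape)   # everything plus the actual signal

calibrated = light - dark  # removes offset and dark current in one subtraction
print(f"mean calibrated signal: {calibrated.mean():.1f}")  # ~1000
```

In practice the dark frame is a separate exposure (often an average of many), so its own noise adds in quadrature; this sketch subtracts the very same frame and is idealized.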

Did you notice the large gradient in the image on the right in the second pair of crops? Where did that come from?

Perhaps that's why standard deviation isn't reducing.

Yes, good catch, thanks for pointing it out. Apparently the sensor signal was not uniform over a very large area. When binning, a much larger number of input pixels is needed to reach the same output pixel dimensions. That makes the same non-uniformity more visible than in the first example, which was cropped and thus represents a smaller physical area than the binned version.

Bart
 

Asher Kelman

OPF Owner/Editor-in-Chief
Guys,

Let me give you my take.

This discussion is not for people who need to go out and shoot a wedding or model today, next week or in the next several years. The scope of this thread is highly limited and the subject mostly pointless to us all.

Post if you care to look at and consider data concerning tiny pixels.

Of particular value are tests or discussions that add or evaluate concrete information on tiny pixels, and then what might show potential for practical use.

Trade-offs of aperture, sensor size, and total light-collecting surface area (large chip or scanning back), the need for multiple exposures and summation, etc. could be of value.

Under these restrictions this discussion is interesting, at least to me.



Asher
 
"John, please stick to "your" subject."

You made a statement regarding the FZ50 versus the 1DmkII. What examples do you have to substantiate your claim?

Regards,

Joe Kurkjian
 

John Sheehy

New member
You made a statement regarding the FZ50 versus the 1DmkII. What examples do you have to substantiate your claim?

I assume you're replying to me (a little quote would help people who don't read in threaded view; I find threaded view useless when it doesn't permanently mark the posts you've read, and when some are hidden because they're too deep).

I made a statement regarding the same physical area from both sensors viewed at the same size. There is an example in this thread, using a Canon 10D (the biggest-pixel camera I own); an embedded image of 4 crops. The example is a worst-case scenario for the FZ50's sensor; an unoptimized ISO 1600.

Bart was talking about subtle texture, so I'll do another suite when I get a chance; ISO 100 vs ISO 100, and ISO 1600 on a DSLR compared to ISO 200 (which has the lowest read noise in electrons on the FZ50) pushed to 1600. I'll use the 20D next time, as its ISO 1600 has about the same level of noise at the pixel level as the 1DmkII.

I'm not going to buy a 1DmkII to do the test, and no one else has one and a 2 micron P&S to compare to, and has offered to do so. The 1DmkII, however, has known characteristics, as measured by Roger Clark and others, and it clearly does not collect as many photons per square mm as other cameras, at the same ISO. The 1DmkIII apparently will "catch up" some in this area.
 
... ISO 200 (which has the lowest read noise in electrons on the FZ50)

Do you mean it's lower at ISO 200 than at ISO 100? I'd find that suspicious when referring to electrons, but I could understand it when referring to Digital Numbers (and assume some type of noise reduction processing going on).

Bart
 

John Sheehy

New member
Do you mean it's lower at ISO 200 than at ISO 100? I'd find that suspicious when referring to electrons, but I could understand it when referring to Digital Numbers (and assume some type of noise reduction processing going on).

IIRC, the sensitivity of the camera (of the RAW data) at ISO 100 might be slightly different than at the other ISOs, adjusted for the ISO value, so that may be the reason, in which case ISO 100 isn't ISO 100 exactly (relative to the others). It is also possible that an amplifier is used after the initial amplification (very likely, as the blackframe noise in electrons is almost the same at all ISOs), and it is optimized more for the ISO 200 gain level. The difference is small, however; assuming exactly 1/2 the sensitivity at ISO 100 as 200 (4800 and 2400 photons, respectively), the blackframe noise is 2.7 electrons at ISO 100, and 2.55 electrons at ISO 200. If it were noise reduction, I would expect more at higher ISOs, but it isn't there. ISO 200 is the lowest, ISO 800 is the highest (with ISO 1600 just a hair less noise, in electrons).

In any event, the camera doesn't make the best use of the sensor at high ISOs for its RAW files.

ISO 1600 could just be an 8-bit linear RAW format using ISO 100 amplification, without significant increase in noise due to posterization. It is stored in an uncompressed format with 16-bits of precision (0000 for the LSBs), and the RAW files are all more than 20 MB, making the camera less than a joy to use in RAW mode. You have to wait about 6 seconds between shots.
 

Doug Kerr

Guest
And the answer is?

Wow! It's fascinating to have the great complexities of this whole field laid out (dead or alive). I just ran into this thread, and I haven't yet had the energy to follow the whole thing.

John, I was intrigued by Asher's simple sounding question, which doesn't at all address the bottom line of the whole area, but does get to the bottom line of one of your specific points. It is:

"However, could you give some practical examples where the FZ50 could replace the 1DII?"

You could help me by pointing to where in this long thread that question was answered.

Thanks
 

John Sheehy

New member
Wow! It's fascinating to have the great complexities of this whole field laid out (dead or alive). I just ran into this thread, and I haven't yet had the energy to follow the whole thing.

John, I was intrigued by Asher's simple sounding question, which doesn't at all address the bottom line of the whole area, but does get to the bottom line of one of your specific points. It is:

"However, could you give some practical examples where the FZ50 could replace the 1DII?"

You could help me by pointing to where in this long thread that question was answered.

Thanks

It was never my intention to present the FZ50 camera as an alternative to the 1DmkII camera. My point in comparing them is to show how much room there is to miniaturize pixels, before the horror stories that we hear about small pixels start to come true. My basic premise is that a crop from a DSLR, the same size as the 7.x * 5.x mm frame of an FZ50, with its 1.97 micron pixels, is vastly inferior to it in almost every way, and the ways in which it isn't are addressable with better electronics and a different RAW conversion approach (the FZ50 has no optimizations for high ISO; total read noise in electrons is higher at higher ISOs; when high-frequency details are not needed, the small-pixel RAW data can be downsampled or binned for smooth areas like blue sky).

The most interesting fact, to me, is that the FZ50 has 16.5 photosites in the space of a single 1DmkII photosite, and yet it captures 55% more photons in the same area, in the same exposure. It stands to reason that the FZ50 pixel pitch could be even finer, with better or equal efficiency of photon capture, compared to the 1DmkII. Now, one of the reasons that the 1DmkII has low efficiency is that the on-photosite transistors that make Canons perform with low read noise at high ISOs take up so much space. If, however, the pixel pitch were to decrease, then the camera would not need to have an 80,000-photon amplification mode. In fact, just halving the pixel pitch would mean that a significant number of transistors could probably be dropped. What was previously ISO 1600 amplification would now be ISO 400 amplification; 800 would be 200, 400 would be 100, etc. Six different levels of optimized amplification would not be needed, and the low-ISO amplifiers for 80,000, 52,000, and maybe 26,000 photons could be eliminated. The ideal thing to do, it would seem to me, is to start with a base ISO that uses almost the full well, just below any non-linear level, and build a series of optimized ISOs from that point. So, if you had a full-well capacity of 21,000 photons, you could have a 20,000-photon ISO 70, a 10,000-photon ISO 140, a 5,000-photon ISO 280, a 2,500-photon ISO 560, etc., choosing the cut-off based on space needs and performance of the amplifications. The gaps between optimized ISOs needn't be 1 stop, either; that's just a convention, so you could save some space by using strategically chosen amplifications (for very small capacities, I've been told that read noise can drop as low as about 1 electron). Enough rambling.
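The hypothetical ISO ladder described above (base ISO just below full well, then halving the photon budget per stop) works out as follows; all numbers are the illustrative ones from the post, not any real camera's:

```python
full_well = 21000         # hypothetical full-well capacity, in photons
photons, iso = 20000, 70  # base point chosen just below full well

ladder = []
while photons >= 2500:
    ladder.append((iso, photons))
    photons //= 2
    iso *= 2

for iso, photons in ladder:
    print(f"ISO {iso:4d}: optimized for {photons:5d} photons")
```

As noted, the steps needn't be exactly one stop; this just shows the halving convention.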

Another way of looking at this is that there is no value in putting all your photons in a few big baskets; they are more accurately counted (and require less equipment to count) when the counts are small.
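That "big baskets" remark follows from Poisson statistics: the shot-noise SNR of a photon count N is Sqrt(N), and summing many small counts yields the same total shot-noise SNR as one large count. A minimal sketch:

```python
import math

total = 16000  # photons collected over some fixed area

# One big well: shot-noise SNR = N / Sqrt(N) = Sqrt(N)
snr_big = math.sqrt(total)

# Sixteen small wells of 1000 photons each, summed afterwards: the signal
# adds to the same total, and Poisson variances add, so noise = Sqrt(total).
snr_summed = (16 * 1000) / math.sqrt(16 * 1000)

print(f"one well : SNR = {snr_big:.1f}")
print(f"16 wells : SNR = {snr_summed:.1f}")  # identical shot-noise SNR
```

Shot noise alone is indifferent to basket size; read noise, paid once per well, is what differs, and John's point is that it can be very low per small well.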

We all know some of the advantages of the smaller camera; less conspicuous (makes people less self-conscious), no mirror slap, articulating LCD allows shooting from the waist, on the ground, or overhead. DOF is tremendous. Optics can be excellent with such a small focal plane to design for.

My biggest complaints about the FZ50 are that ISO 100 pushed to 1600 in RAW mode is better than shooting at ISO 1600 (but the review image is dark!), the JPEGs are over-noise-reduced, and the RAW write to the SD card takes about 5 or 6 seconds (20 MB uncompressed files; a cruel joke on the users). Also, there is no independence between M mode and Av and Tv; whatever you set the shutter speed to in M becomes the shutter speed in Tv, too, and vice versa. Same with Av in Av and M modes.

I generally carry the FZ50 with me now, to fill in for all the FOVs not obtainable with the lens on my DSLR (I don't like changing lenses). Also, when the articulating LCD comes in handy. I find I can pretty much shoot in daylight at ISO 100 exclusively, and ISO 100 RAW data from the Panasonic is pretty good for the sensor size; much better than I expected.
 

Alan T. Price

New member
Sounds like a flawed argument to me because it lacks some info. Notably, any camera at all can be made to produce a low-noise image, but the difficulty (and dare I say impossibility when using small pixels) is doing it without sacrificing fine image detail.

Furthermore, what dynamic range are you talking about? 8-bit JPEG images are by definition restricted to 8 EV of DR, but not all of it is useful when printed, and often it includes the noise. A DSLR sensor can handle 8 or 9 EV of DR in raw mode between the noise floor and maxing out.
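Dynamic range between a noise floor and saturation is just a base-2 log of the ratio; a sketch with illustrative numbers (not measurements of any specific camera):

```python
import math

full_well = 40000    # hypothetical saturation level, in electrons
noise_floor = 100.0  # hypothetical usable noise floor, in electrons

dr_ev = math.log2(full_well / noise_floor)
print(f"DR between noise floor and clipping: {dr_ev:.1f} EV")
```

log2(400) ≈ 8.6 EV, in line with the 8-9 EV range mentioned; taking the much lower read noise alone as the floor would give a higher engineering DR.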


The physics do not support the argument that tiny sensors are just as clean as larger sensors, and common sense says it won't be true if you retain all available image detail.

However, it is fairly evident to anyone with good eyes that most cameras produce good clean images of uncomplicated scenes in good lighting. This covers most of the images taken by most photographers. To that end there is very little visible advantage in having larger pixels. It's when the scene gets complicated or the lighting is poor that the larger-pixel sensors do a better job than smaller-pixel sensors of the same vintage or level of technology.
 