
Are 16 megapixels too many?

Jane Auburn

New member
In examining files from a 1Ds Mark II and comparing those to files from a 1Ds (original), I feel the 1Ds produces a nicer picture quality: more depth, more character, more three-dimensionality, perhaps less "digital."

Is this because the 1Ds had the right balance of pixel size vs sensor size, and the 1Ds Mark II had to cram smaller pixels onto a same-sized sensor, thereby lowering dynamic range and thus compromising one important aspect of picture quality?
 
I might think so; however, I note that the pictures from my D60 look better than the pictures from my 10D, and they have the same number of pixels (give or take a couple). My 1DsMkII, in terms of overall quality, if I ignore the resolution benefits, comes in close to the D60, but I still like the D60's "look" better at ISO 100, which is where I usually shoot. This doesn't stop me from using the 1DsMkII, because the final outcome almost always looks better from the 1DsMkII.

I can't speak directly between the 1Ds and 1DsMkII because I only have the MkII, but having both the D60 and 10D and seeing the difference indicates that it might not be the number of pixels, although I am sure that has an effect. I suspect instead that some of the in-camera processing algorithms, or the materials used in growing the sensor array, may be having a greater effect than just pixel size. It is probably a combination.

The D60 came out before the 1Ds, which came out before the 10D, so some change that occurred between the 1Ds and the 10D may be more to blame than the pixel size.

Meanwhile, I have faith (backed by science) that up to somewhere between 22 and 25 MP in a FF 35mm sensor the quality should meet my requirements; beyond that, the quality will start to suffer greatly.

Just my opinion and experience,
 
Nill Toulme said:
Are you guys talking about RAW files?

Nill
~~
www.toulme.net

For me, RAW: I generally use the raw files to compare final capability, whether for web or print.

JPEG is just a quick-look product, although it frequently shows the same differences between cameras.

And both DPP and PS (7.0 through CS2) show the differences as well, although I note that my 10D results looked better in PS 7.0 than in DPP. (I have used PS CS2 with 10D shots but have not done a comparison. I have also noted that the 10D shots looked even better in Lightroom, which I would think uses the same algorithm; again, I have not done a comparison in Lightroom, just played around.)

If someone were to do a comparison, I would recommend doing it with multiple raw converters (or, at a minimum, both PS CS2 and the latest version of DPP).

I just can't think of how to do a good comparison, given how subjective the comparison is.
 

Victor Hoyt

New member
Jane Auburn said:
In examining files from a 1Ds Mark II and comparing those to files from a 1Ds (original), I feel the 1Ds produces a nicer picture quality: more depth, more character, more three-dimensionality, perhaps less "digital."

Is this because the 1Ds had the right balance of pixel size vs sensor size, and the 1Ds Mark II had to cram smaller pixels onto a same-sized sensor, thereby lowering dynamic range and thus compromising one important aspect of picture quality?

I must say I've had the opposite experience, but it's kind of subjective. Were the shots you compared taken under the same conditions and settings? I am quite satisfied with the 1Ds, but I will upgrade to the MkII when I can. It's really less a matter of 16 MP than of processing and other upgrades.

Vic
 

John Sheehy

New member
Jane Auburn said:
In examining files from a 1Ds Mark II and comparing those to files from a 1Ds (original), I feel the 1Ds produces a nicer picture quality: more depth, more character, more three-dimensionality, perhaps less "digital."

How are you looking at them? Large prints at the same size? 100% zooms on the monitor?

Sounds like you could be a fan of mosaic effects.

Take a full-res image from the 1DsMkII, apply a 50% nearest-neighbor resize to it, and then compare it to the full-res 1Ds, and to the original of itself.

If the resizing made it look better to you, think about what just happened.
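John's test is easy to sketch with NumPy, since a 50% nearest-neighbor resize amounts to keeping every second sample in each dimension (a minimal sketch; the random array below is a stand-in for real 1DsMkII data):

```python
import numpy as np

# Stand-in for a full-res 1Ds Mark II frame (4992 x 3328 pixels).
rng = np.random.default_rng(0)
full_res = rng.integers(0, 4096, size=(3328, 4992), dtype=np.uint16)

# A 50% nearest-neighbor resize keeps every second sample in each
# dimension -- no filtering, no interpolation.
half_res = full_res[::2, ::2]

print(full_res.shape)  # (3328, 4992)
print(half_res.shape)  # (1664, 2496)
```

The point of the test: if the decimated version looks "better," the preference is for the crunchy high-contrast detail at the pixel level, not for actual image information.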
 

Mike Spinak

pro member
Originally posted by Jane Auburn

...more depth, more character, more three-dimensionality, perhaps less "digital."

Those terms leave a lot open to interpretation, making it hard for me to respond meaningfully (despite the fact that I have extensive experience with both the 1Ds and the 1Ds2).

Originally posted by Jane Auburn

Is this because the 1Ds had the right balance of pixel size vs sensor size, and the 1Ds Mark II had to cram smaller pixels onto a same-sized sensor, thereby lowering dynamic range and thus compromising one important aspect of picture quality?

The 1Ds Mark II has much more dynamic range than the original 1Ds, so... no.


Be careful not to confuse pixel size with pixel pitch. There are spaces between the individual photosensitive sites on a sensor, and it is possible to make the spaces between them smaller (making for a denser, higher-megapixel sensor) while making the actual size of the individual photosensitive sites larger.

In any case, what matters is not their size, but the amount of light that falls on them, and their light sensitivity. Even with smaller pixels, it is possible to make them gather more light, through better designed microlenses in front of them, and it is possible to make them more light sensitive, through the use of more efficient materials, designs that allow for better signal to noise ratios, etc.

If all else were equal, then smaller pixels would be worse. However, all else is not equal. Lots of progress has been made between the original 1Ds and its next generation, and the "smaller pixels" of the next generation gather more light, more efficiently, and are more light-sensitive than the pixels of the original 1Ds.
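Mike's pitch-versus-size point can be made concrete with rough numbers. The figures below are back-of-envelope estimates computed from the published sensor width and horizontal pixel counts of the two cameras, not official Canon specifications:

```python
# Approximate pixel pitch = sensor width / horizontal pixel count.
SENSOR_WIDTH_MM = 36.0  # full-frame 35mm sensor

pitch_1ds  = SENSOR_WIDTH_MM / 4064 * 1000  # 1Ds:    4064 px wide (11.1 MP)
pitch_1ds2 = SENSOR_WIDTH_MM / 4992 * 1000  # 1DsII:  4992 px wide (16.7 MP)

print(f"1Ds pitch:    {pitch_1ds:.2f} um")   # ~8.86 um
print(f"1Ds II pitch: {pitch_1ds2:.2f} um")  # ~7.21 um

# Pitch only bounds the photosite size from above; fill factor and
# microlens efficiency decide how much light is actually gathered.
```

So the pitch shrank by under 20%, while (per Mike's point) microlens and readout improvements can more than compensate at the photosite level.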

Mike

www.mikespinak.com
 

Stan Jirman

New member
I find the technical quality of the MK2 much better, even if we just stay at ISO 100. The original Ds had wonderful images, but you had to absolutely nail it; any deviation from the norm and it got pretty difficult. There was banding, more limited DR, a waxy look at higher ISOs - all the good stuff. And if you ever cared for in-camera JPGs (I don't), then the MK2 is miles ahead - almost usable :)

However, I do like the look of the MK1 images. Whenever I see a good one I am pleased, and subjectively I like it at least as much as the MK2 image. While the individual attributes of the MK1 are all inferior, if you got everything perfectly right you were rewarded with something that looks stunning. The MK2 makes it much easier to get the 90% image all the time, so I guess the 100% image doesn't stand out that much...
 

Asher Kelman

OPF Owner/Editor-in-Chief
Will Thompson said:
And outstandingly sharp too!

Did you really mean 64 MP reduced to 16 MP, or rather 64 BIT to 16 BIT?

The former option implies minute pixels with more noise, and it would make no sense, AFAIK, to do that when one could design 16 MP with greater dynamic range instead!

That would in fact make a marvelous camera and better images. One might even challenge the printers and monitors more than we do already.

If you mean accurate 64 BIT files, that, I guess, is more like something from the dreams of nerds at the CIA, LOL!

Asher
 

Will Thompson

Well Known Member
64 MP reduced to 16 MP?

Asher Kelman said:
Did you really mean 64 MP reduced to 16 MP

YES! Well almost.


Asher Kelman said:
implies minute pixels with more noise and it would make no sense, AFAIK to do that when one design 16MP with the greater dynamic range instead!

NO! True color at all pixels, no Bayer interpolation, i.e. 1R + 2G + 1B = 4 pixels = 1 output pixel. If the MF backs can use a CCD at 39 MP, I do not see any problem for Canon to make a low-noise CMOS 64 MP 35mm full-frame sensor.

Currently the 1DsMkII only has true output (no or low anti-alias-filter blurring effect and color) at 4 MP, or 25% on screen in Photoshop. This is why all Canon DSLR photos (and others) seem to pop at 25% view!

Similar to Canon making a CMOS Foveon sensor at 16 MP, but with a 4-to-1 sensor readout in firmware/DIGIC for speed and saving memory card space.
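Will's 4-into-1 idea can be sketched on a synthetic RGGB mosaic (a minimal illustration only; real firmware binning would operate on the raw signal chain, not on an array like this):

```python
import numpy as np

def bin_rggb(cfa):
    """Combine each RGGB 2x2 block into one full-color output pixel:
    R as-is, the two Gs averaged, B as-is."""
    r = cfa[0::2, 0::2]
    g = (cfa[0::2, 1::2] + cfa[1::2, 0::2]) / 2.0
    b = cfa[1::2, 1::2]
    return np.dstack([r, g, b])

# 4 x 4 synthetic mosaic -> 2 x 2 full-color image, no demosaicing.
cfa = np.arange(16, dtype=float).reshape(4, 4)
rgb = bin_rggb(cfa)
print(rgb.shape)  # (2, 2, 3)
```

Each output pixel gets a measured R, G, and B value with no Bayer interpolation, at the cost of a quarter of the pixel count, which is exactly the trade-off debated below.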
 
Will Thompson said:
NO! True color at all pixels, no Bayer interpolation, i.e. 1R + 2G + 1B = 4 pixels = 1 output pixel.

While there is nothing wrong with getting a full RGB sample for each output pixel position, it seems a huge waste to do it by cutting the spatial Luminance resolution in half. While having both is nice, Luminance resolution is much more important for human vision than Chrominance resolution.

Will Thompson said:
Currently the 1DsMkII only has true output (no or low anti-alias-filter blurring effect and color) at 4 MP, or 25% on screen in Photoshop. This is why all Canon DSLR photos (and others) seem to pop at 25% view!

Are you suggesting that they 'pop' more than a down-sampled 16.7 MP version? That seems odd to me, since a downsampled version has a higher resolution, and yet has less aliasing potential (because it can be controlled in down-sampling) than an improperly bandwidth-limited RGGB pixel does. What's more, you don't have to downsample to 50% per dimension to potentially improve the Chrominance resolution; 70% would probably be enough to get the full potential.

Of course the potential improvement is only an improvement at that file size. Downsampling further (e.g. for Web publishing) would waste any chrominance resolution gain, and up-sampling would suffer from lost Luminance resolution. To me that sounds like a poor trade-off.

Will Thompson said:
Similar to Canon making a CMOS Foveon sensor at 16 MP, but with a 4-to-1 sensor readout in firmware/DIGIC for speed and saving memory card space.

In-camera 'binning' would have some merit if it was known in advance that the image would never need enlargement.

It is my impression that the effects of proper sharpening are underestimated, and that the 1Ds2 has a very good balance of AA-filtering versus resolution. The potential for aliasing artifacts is significantly reduced (not eliminated) while providing a per-pixel resolution that allows it to be sharpened (or even 'restored') quite well, as shown in this example.

There would be even more resolution improvement possible for stationary subjects if two images with a half-pixel displacement were processed into a super-resolution image, as is done in astrophotography (a process also known as drizzling or dithering). That would potentially get both the actual Chrominance resolution and the Luminance resolution to twice that of a single capture.
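The half-pixel-displacement idea can be illustrated in one dimension (a toy sketch: two "captures" of the same signal, offset by half a capture pixel, interleaved onto a grid of twice the density; real drizzle algorithms handle arbitrary sub-pixel offsets and noise):

```python
import numpy as np

# A finely sampled "scene" (the ground truth).
scene = np.sin(np.linspace(0, 4 * np.pi, 200))

# Two captures at half the density; the second is shifted by half a
# capture pixel (= one scene sample).
capture_a = scene[0::2]  # samples 0, 2, 4, ...
capture_b = scene[1::2]  # samples 1, 3, 5, ... (half-pixel offset)

# Interleave the two captures back onto the dense grid.
combined = np.empty_like(scene)
combined[0::2] = capture_a
combined[1::2] = capture_b

print(np.allclose(combined, scene))  # True: sampling density doubled
```

With the offset known exactly and no noise, the two low-density captures recover the full-density signal; in practice the offset must be estimated and the reconstruction is approximate.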

Bart
 

Ray West

New member
Hi Bart,

Re drizzling: IIRC, maybe ten or so years ago there was a system that took a number of images from a video camera and made them higher resolution, I guess using the natural displacement caused by the tape wind motor, etc. I would think that, using a long lens on a tripod, the same technique could be used; or maybe, if hand-held, you'd have to use another technique, maybe entitled 'downpour'...

Best wishes,

Ray
 

John Sheehy

New member
Will Thompson said:
True color at all pixels, no Bayer interpolation, i.e. 1R + 2G + 1B = 4 pixels = 1 output pixel.

You can't combine those neighboring color channels and get the highest IQ. What happens is that the virtual sampling point of the green is dead center in your new pixels, but the red and blue are offset by 0.71 pixels on a diagonal, in opposite directions. The result is mild color fringing.

For greyscale it is even less noticeable, and I have used RAW data binned 2x2 to greyscale for quick-and-dirty HDR B&W. There is a stop difference in sensitivity between the red and green channels, so for a white highlight the red channel will still have some detail where the green has clipped; at the opposite end, the green has more signal where the red is mostly noise, for dark grey shadows. I extend the concept further with a green filter over the camera, which increases the difference a couple of stops more.
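The 0.71-pixel figure is just the diagonal half-pixel offset of the R and B sites from the centre of the 2x2 block, which a few lines of geometry confirm:

```python
import math

# Site positions within one RGGB block, (row, col) in input-pixel units:
# R at (0,0), G at (0,1) and (1,0), B at (1,1).
r_pos = (0.0, 0.0)
g_centroid = (0.5, 0.5)  # average of the two G sites = block centre
b_pos = (1.0, 1.0)

offset_r = math.dist(r_pos, g_centroid)  # distance R -> block centre
offset_b = math.dist(b_pos, g_centroid)  # distance B -> block centre
print(round(offset_r, 2), round(offset_b, 2))  # 0.71 0.71
```

The offsets are equal and opposite along the diagonal (sqrt(0.5) of an input pixel), which is what produces the mild, symmetric colour fringing John describes.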

Will Thompson said:
If the MF backs can use a CCD at 39 MP, I do not see any problem for Canon to make a low-noise CMOS 64 MP 35mm full-frame sensor.

The difficulty in cramming so many pixels into a large sensor is that you are forced to read more pixels in the same or less time than the older models, forcing a reduction in quality. You also lose photosite real estate (total capacity of all the wells) to the borders between the pixels (resulting in higher shot noise, regardless of how you trade noise for grain size), which cannot scale down as much as the wells themselves. There is some loss even if you bin.

Will Thompson said:
Currently the 1DsMkII only has true output (no or low anti-alias-filter blurring effect and color) at 4 MP, or 25% on screen in Photoshop. This is why all Canon DSLR photos (and others) seem to pop at 25% view!

"Seem" is a good word to use here, because this is only happening due to the fact that you have high contrast at the highest frequency your monitor is displaying. Print both at the same size, or upsample both to the same size and stand far from the monitor, and "seems" seems to do otherwise.

Will Thompson said:
Similar to Canon making a CMOS Foveon sensor at 16 MP, but with a 4-to-1 sensor readout in firmware/DIGIC for speed and saving memory card space.

For RAW, what you could wind up with is a file that doesn't need to be demosaiced (such as you can do with the DNG converter for full-res images), but your memory savings really aren't going to be big until you start dropping below 50% per dimension (0.25x MP) output, because only then do you start losing much in the way of total information, and losing noise (noise limits compressibility).
 

Ray West

New member
Hi John,

Is there any PC-based software that can show the individual RGB pixels in the raw image? The raw processors (DPP, whatever) only seem to zoom in about 4x, whatever that may mean, and I would have thought it would be possible to go in further. It could be useful, maybe. I suspect there is no raw file that represents the actual values per sensor pixel site, it being processed somewhat by the built-in camera processor. Maybe I've not explained it too well, but I hope you understand.

Best wishes,

Ray
 
Ray West said:
Is there any PC-based software that can show the individual RGB pixels in the raw image?

Ray,

I am using ImagesPlus for that. It allows one to extract the non-whitebalanced, non-demosaiced, linear-gamma Bayer CFA sensor data as a grayscale file. It also allows one to extract only the Red, Green, or Blue data into separate channel layers from that file.

As far as I know, the raw data is extracted with the Canon and Nikon SDKs, but maybe he just uses the offset to the actual data and decompresses it. I'll verify that with the author of ImagesPlus, and ask if he can confirm the 'rawness' of that CFA data.

I assume that the DCRAW program source files could be modified to extract only the CFA data, without setting blackpoints.

Another possibility could be DNG, but I didn't study the SDK specifications.
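The per-channel extraction described above amounts to masking the CFA mosaic by colour position. A minimal sketch on a synthetic RGGB mosaic (ImagesPlus and dcraw internals may differ; this only shows the idea):

```python
import numpy as np

def split_rggb(cfa):
    """Return R, G, B layers at full mosaic size, with non-matching
    photosites set to 0 (ImagesPlus-style channel extraction)."""
    r = np.zeros_like(cfa)
    r[0::2, 0::2] = cfa[0::2, 0::2]
    g = np.zeros_like(cfa)
    g[0::2, 1::2] = cfa[0::2, 1::2]
    g[1::2, 0::2] = cfa[1::2, 0::2]
    b = np.zeros_like(cfa)
    b[1::2, 1::2] = cfa[1::2, 1::2]
    return r, g, b

# Synthetic 4 x 4 CFA: the three layers partition the mosaic exactly.
cfa = np.arange(16.0).reshape(4, 4)
r, g, b = split_rggb(cfa)
print(np.allclose(r + g + b, cfa))  # True
```

Because each photosite belongs to exactly one colour, the three extracted layers sum back to the original mosaic, which is a handy sanity check on the extraction.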

Bart
 
Ray West said:
Hi Bart,

Re drizzling: IIRC, maybe ten or so years ago there was a system that took a number of images from a video camera and made them higher resolution, I guess using the natural displacement caused by the tape wind motor, etc. ...

http://registax.astronomy.net/ (PC Freeware last I checked)

What you can do is register the images (align them) and then stack them so that noise can be averaged away.

My understanding is that a 100 MB AVI from a cheap webcam yields better results than 5 RAW files due to greater averaging.

There are probably many more tools out there, but this is what I know of off the top of my head. Please note this is astronomy software and you will have to deal with certain limiting factors caused by the limited photographic field it is aimed at.
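The averaging gain Sean describes is easy to demonstrate: stacking N aligned frames reduces the noise standard deviation by roughly sqrt(N), which is why a webcam AVI with hundreds of frames can beat 5 raw files (synthetic flat frames below, just to show the statistics):

```python
import numpy as np

rng = np.random.default_rng(1)
signal = 100.0
n_frames, noise_sigma = 100, 10.0

# N noisy, perfectly aligned "frames" of the same flat patch.
frames = signal + rng.normal(0.0, noise_sigma, size=(n_frames, 64, 64))

# Averaging the stack cuts the noise by about sqrt(n_frames).
stacked = frames.mean(axis=0)
print(f"single-frame noise: {frames[0].std():.2f}")  # ~10
print(f"stacked noise:      {stacked.std():.2f}")    # ~1 (10 / sqrt(100))
```

The 100-frame stack ends up with roughly a tenth of the single-frame noise, so frame count, not per-frame quality, dominates once alignment is solved.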

enjoy,

Sean
 

Ray West

New member
Hi Sean and Bart,

Thanks for the info. The video drizzling system I mentioned, IIRC, was a PC card (probably ISA, it was that long ago) with, I think, an S-video input, plus some software. If I could fix it in my memory a bit more, I could possibly find some information on it. At the time it gave sharp, high-resolution images, but IIRC the only demonstration images I saw were still life: bowls of fruit, bottles of wine, that sort of thing.

Best wishes,

Ray
 
Ray West said:
The video drizzling system I mentioned, IIRC, was a PC card (probably ISA, it was that long ago) with, I think, an S-video input, plus some software.

A hardware solution would help a lot. A software implementation of similar techniques will take its time executing the large number of calculations. One can experiment with ALE. Not recommended for command-line phobics, though.

At the time it gave sharp, high-resolution images, but IIRC the only demonstration images I saw were still life: bowls of fruit, bottles of wine, that sort of thing.

Just to clarify and prevent some potential confusion: there are two methods of image improvement being mentioned. One method reduces noise by averaging multiple images, but that requires perfectly aligned images. The other method increases resolution by comparing multiple unaligned images.

I assume you saw an implementation of the second method.

Bart
 

John Sheehy

New member
Ray West said:
Hi John,

Is there any pc based software that can show the individual rgb pixels in the raw image.

I use IRIS:

http://www.astrosurf.com/buil

It is a Windows program only.

It will load the exact greyscale RAW data, interpolate it (if you set it for the right camera) to color, and let you zoom in quite a ways, but it doesn't do a green-pixels-green-only, red-pixels-red-only kind of view. I do that in Photoshop when I want to see it that way.

Ray West said:
The raw processors (DPP, whatever) only seem to zoom in about 4x, whatever that may mean, and I would have thought it would be possible to go in further. It could be useful, maybe. I suspect there is no raw file that represents the actual values per sensor pixel site, it being processed somewhat by the built-in camera processor. Maybe I've not explained it too well, but I hope you understand.

Well, the sensor sees photons and dark-current noise (not always significant). The RAW data is just a digitization of them, with a little noise added.
 

John Sheehy

New member
Bart_van_der_Wolf said:
Just to clarify and prevent some potential confusion, there are 2 methods of image improvement being mentioned. One method reduces noise by averaging multiple images, but that requires perfectly aligned images. The other method increases resolution by comparing multiple unaligned images.

Unaligned frames should reduce noise too, but the heavy sharpening necessary may bring it back up a bit.
 

John Sheehy

New member
Sean DeMerchant said:
http://registax.astronomy.net/
There are probably many more tools out there, but this is what I know of off the top of my head. Please note this is astronomy software and you will have to deal with certain limiting factors caused by the limited photographic field it is aimed at.

IMO, the Astro programs should really look into exporting stacked/altered/drizzled files as DNGs, so that they can be properly converted for normal color spaces.

I assume the DNG format allows you to keep all the parameters for the specific camera, except adjusting the white point for the higher bit depth at which you would output. Even 12-bit linear images stacked into a higher S/N ratio are better than the originals; noise is a much bigger obstacle to DR than bit depth, with current technology.
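In stops, the gain from stacking works out as follows (a back-of-envelope sketch of the point above, assuming uncorrelated noise between frames):

```python
import math

# Averaging N frames cuts noise by sqrt(N), i.e. adds 0.5 * log2(N)
# stops of usable dynamic range -- independent of the 12-bit depth.
for n in (4, 16, 64):
    gain_stops = 0.5 * math.log2(n)
    print(f"{n:3d} frames -> +{gain_stops:.1f} stops DR")
# 4 -> +1.0, 16 -> +2.0, 64 -> +3.0
```

So a modest stack of 16 frames buys about two stops at the shadow end, which is why the noise floor, not the 12-bit quantization, is the binding constraint.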
 

Ray West

New member
Hi Bart,

It was the resolution-improvement one, I suspect, although if they were clever it could probably do both at the same time. It's sort of related to the awful Sony image stabilisation they had, and maybe still have, on prosumer video cams.

I think we have wandered/wondered off the original question posed by Jane. Perhaps a new thread should be started.

Best wishes,

Ray
 

Asher Kelman

OPF Owner/Editor-in-Chief
John Sheehy said:
IMO, the Astro programs should really look into exporting stacked/altered/drizzled files as DNGs, so that they can be properly converted for normal color spaces.

I assume the DNG format allows you to keep all the parameters for the specific camera, except adjusting the white point for the higher bit depth at which you would output. Even 12-bit linear images stacked into a higher S/N ratio are better than the originals; noise is a much bigger obstacle to DR than bit depth, with current technology.

The simpler astro programs use random sub-pixel X/Y displacements to end up with more resolution when using multiple images of the same exposure.

Repeating that with different exposure sets would also give an increase in dynamic range.

One can get LRGB, or almost anything you can think of, out of these programs.

I have, BTW, never used the software, although I plan to!

Asher
 