
Downsizing in-camera and out

Doug Kerr

Well-known member
Almost all of my current photography is destined for release in images intended for use in blogs, on social media, in this forum, in e-mails, and the like.

Generally I deliver the images sized to a largest dimension of 800 px.

Most of my shooting these days is done with either my Canon G16 or my Panasonic FZ200. I have been using the largest pixel dimension outputs, 4000 px × 3000 px, from both. After I finish editing, I make a copy and downsize it to a maximum dimension of 800 px in my photo editor, Picture Publisher 10.

I have begun to wonder whether there would be an advantage in delivered picture quality were I to use one of the smaller outputs from the cameras. Both cameras offer, for example, a 1600 px × 1200 px output. I could then do a 1:2 downsizing in post.

I plan of course to make some actual tests soon, but I am at the moment slightly disabled by an orthopedic problem (evidently arising from lumbar spinal stenosis), so it's hard to do photographic outings.

What do any of you know about the quality of the in-camera downsizing as compared to what we can easily do in post?

Thanks.

Best regards,

Doug
 

Asher Kelman

OPF Owner/Editor-in-Chief
Citizen Protection Value and File Size - Why one should shoot full size images!

Almost all of my current photography is destined for release in images intended for use in blogs, on social media, in this forum, in e-mails, and the like.

Generally I deliver the images sized to a largest dimension of 800 px.

Most of my shooting these days is done with either my Canon G16 or my Panasonic FZ200. I have been using the largest pixel dimension outputs, 4000 px × 3000 px, from both. After I finish editing, I make a copy and downsize it to a maximum dimension of 800 px in my photo editor, Picture Publisher 10.

I have begun to wonder whether there would be an advantage in delivered picture quality were I to use one of the smaller outputs from the cameras. Both cameras offer, for example, a 1600 px × 1200 px output. I could then do a 1:2 downsizing in post.

I plan of course to make some actual tests soon, but I am at the moment slightly disabled by an orthopedic problem (evidently arising from lumbar spinal stenosis), so it's hard to do photographic outings.

What do any of you know about the quality of the in-camera downsizing as compared to what we can easily do in post?



Doug


I don't think one is going to see any differences in the sizes you are considering. I just hate the idea of not being able to actually leverage the 1 in 1000 superb image that happens in my camera! Also, there's a great social value to having a file capable of being enlarged and scrutinized. As citizens we can be honest witnesses.

One of the advantages of Phase One Media Pro is that one can simply select pictures and email them at a predetermined size, quality, sharpening and color space. Although I shoot in RAW plus large ultra fine JPG, I leave it to the software to do the conversion for my emails. Still, I want to have the largest file possible. One never knows when one is going to get that killer "money shot"!

I'd go further, and forgive me here for altering the direction of your most innocent post! It's just that small files in the camera lose so much of their "citizen protection value"!

Imagine if your camera could actually save an innocent fellow, stopped for a "cracked taillight," from being accused of "assaulting the officers" who actually beat him up out of savagery or simple race hatred!
If everyone photographed incidents at high resolution, the number of police shootings deemed justifiable would plummet like a rock from a cliff! To me, the citizen with a camera is the 5th Estate: an impartial, irrefutable witness defending the "Freedom" we love to praise so much.

For that alone, everyone should be required by law to carry a decent camera! Then we would have a more civil society!

Asher
 

Jerome Marot

Well-known member
I just hate the idea of not being able to actually leverage the 1 in 1000 superb image that happens in my camera!

I do not know about the Canon G16 or Panasonic FZ200 specifically, but some cameras allow one to save raw + JPEG, where the JPEG would be downsampled and the raw file would maintain the full resolution.

I should probably also point out that it should be relatively easy to set up a batch file or something similar to downsample a directory full of jpeg pictures to a second directory if one needed to do so.
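For instance, a rough sketch of that idea (assuming ImageMagick is installed; the directory names and settings here are just placeholders):

```shell
#!/bin/sh
# Sketch: downsample every JPEG in ./edited to a maximum side of
# 800 px, writing the copies into ./web so the originals stay
# untouched. Requires ImageMagick's 'convert' on the PATH.
mkdir -p web
for f in edited/*.jpg; do
    out="web/$(basename "$f")"
    convert "$f" -resize 800x800 -quality 85 "$out"
done
```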
 

Doug Kerr

Well-known member
Hi, Jerome,

I do not know about the Canon G16 or Panasonic FZ200 specifically, but some cameras allow one to save raw + JPEG, where the JPEG would be downsampled and the raw file would maintain the full resolution.

One or the other of the cameras may provide that option. I'll look into it.

I should probably also point out that it should be relatively easy to set up a batch file or something similar to downsample a directory full of jpeg pictures to a second directory if one needed to do so.

Yes, I need to look into that.

Thanks for the suggestions.

Best regards,

Doug
 

Sam Hames

New member
Almost all of my current photography is destined for release in images intended for use in blogs, on social media, in this forum, in e-mails, and the like.

Generally I deliver the images sized to a largest dimension of 800 px.

Most of my shooting these days is done with either my Canon G16 or my Panasonic FZ200. I have been using the largest pixel dimension outputs, 4000 px × 3000 px, from both. After I finish editing, I make a copy and downsize it to a maximum dimension of 800 px in my photo editor, Picture Publisher 10.

I have begun to wonder whether there would be an advantage in delivered picture quality were I to use one of the smaller outputs from the cameras. Both cameras offer, for example, a 1600 px × 1200 px output. I could then do a 1:2 downsizing in post.

I plan of course to make some actual tests soon, but I am at the moment slightly disabled by an orthopedic problem (evidently arising from lumbar spinal stenosis), so it's hard to do photographic outings.

What do any of you know about the quality of the in-camera downsizing as compared to what we can easily do in post?

Thanks.

Best regards,

Doug

I think you'd be better off working with the original resolution and downsampling only at the end - more for flexibility than any other reason. Especially with respect to cropping.

For setting up an automated process (at least if you're not working with raw files), ImageMagick is the command-line standard. Assuming you have copied your edited files to an output directory, running the following on the copies in that directory will resize all of the images to a maximum side length of 800 pixels with a JPEG quality of 85.

mogrify -geometry 800x800 -quality 85 *JPG

*mogrify works in place, so if you don't have a copy the originals will be gone after running this.
 

Doug Kerr

Well-known member
Hi, Sam,

I think you'd be better off working with the original resolution and downsampling only at the end - more for flexibility than any other reason. Especially with respect to cropping.

I'm sure you're right.

For setting up an automated process (at least if you're not working with raw files), ImageMagick is the command-line standard. Assuming you have copied your edited files to an output directory, running the following on the copies in that directory will resize all of the images to a maximum side length of 800 pixels with a JPEG quality of 85.

mogrify -geometry 800x800 -quality 85 *JPG

*mogrify works in place, so if you don't have a copy the originals will be gone after running this.

Thank you so much for that guidance, including the good warning at the end. I'll try it.

Best regards,

Doug
 
I think you'd be better off working with the original resolution and downsampling only at the end - more for flexibility than any other reason. Especially with respect to cropping.

I agree with Sam. One cannot upsample and achieve the original full-size quality, but one can down-sample to a smaller size of potentially higher quality than the camera provides.

mogrify -geometry 800x800 -quality 85 *JPG

*mogrify works in place, so if you don't have a copy the originals will be gone after running this.

Using ImageMagick's 'convert' instead of 'mogrify' will preserve the original file, and renaming the output file at the end of the command makes it easier to identify it as a modified version. There are also much better resampling algorithms available than '-geometry'.
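For example (a hypothetical sketch; the filename suffix and the Lanczos filter choice are mine, not a prescription):

```shell
#!/bin/sh
# 'convert' reads the original and writes a new file, so the source
# survives; the _800px suffix marks the output as a resized copy.
in="original.jpg"
out="${in%.jpg}_800px.jpg"     # i.e. original_800px.jpg
convert "$in" -filter Lanczos -resize 800x800 -quality 85 "$out"
```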

BTW, my resampling script (Windows version) allows you to right-click a file in the file explorer and 'send' it to the script. It does both upsampling (using a state-of-the-art combination of techniques) and down-sampling, with optional sharpening of the results.

Cheers,
Bart
 

Doug Kerr

Well-known member
Hi, Bart,

I agree with Sam. One cannot upsample and achieve the original full-size quality, but one can down-sample to a smaller size of potentially higher quality than the camera provides.

Using ImageMagick's 'convert' instead of 'mogrify' will preserve the original file, and renaming the output file at the end of the command makes it easier to identify it as a modified version. There are also much better resampling algorithms available than '-geometry'.

BTW, my resampling script (Windows version) allows you to right-click a file in the file explorer and 'send' it to the script. It does both upsampling (using a state-of-the-art combination of techniques) and down-sampling, with optional sharpening of the results.

Thank you so much for all that.

Best regards,

Doug
 

Doug Kerr

Well-known member
Hi, Bart,

I went to the thread you referenced and downloaded the zip file with your script(s) in it.

WinZip reports it is not a valid ZIP archive.

What do I need to do?

Thanks.

Best regards,

Doug
 
Hi, Bart,

I went to the thread you referenced and downloaded the zip file with your script(s) in it.

WinZip reports it is not a valid ZIP archive.

What do I need to do?

Hi Doug,

I've tried downloading and unzipping it, without problems. Maybe you could try a new download?

If that still doesn't work, we'll figure out another way.

Cheers,
Bart
 

Doug Kerr

Well-known member
Hi, Bart,

Hi Doug,

I've tried downloading and unzipping it, without problems. Maybe you could try a new download?

Well, I tried downloading it several times, with the same result.

Can you give me the best link to download it? Maybe I was using a link to a stale version.

Thanks.

Best regards,

Doug
 

Doug Kerr

Well-known member
I have just put into play Bart's script for using ImageMagick to upsize or downsize images.

This screen shot from my image editor (Picture Publisher 10) shows a zoom (about 165 px × 160 px) into the results of taking one of Carla's Canon G16 shots from her recent mountain adventure (slightly cropped - 3665 px wide) and resizing it to 800 px wide:

• on the left, using Bart's script running ImageMagick (50% sharpening)
• on the right, using the (only available non-trivial) resizing algorithm in Picture Publisher 10

Downsizing_comparison_01.jpg

Woof!

Best regards,

Doug
 
I have just put into play Bart's script for using ImageMagick to upsize or downsize images.

This screen shot from my image editor (Picture Publisher 10) shows a zoom (about 165 px × 160 px) into the results of taking one of Carla's Canon G16 shots from her recent mountain adventure (slightly cropped - 3665 px wide) and resizing it to 800 px wide:

• on the left, using Bart's script running ImageMagick (50% sharpening)
• on the right, using the (only available non-trivial) resizing algorithm in Picture Publisher 10

Hi Doug,

Thanks for the feedback. It's amazing how much better (even at this extreme display zoom level) ImageMagick retains subtle detail in folds and seams of the shirt and details in the hat. Edges are clean and still pixel sharp. Of course it is possible to boost the sharpening a bit more if one prefers crunchier results, without risk of amplifying artifacts.

Even at this zoom level there is nothing distracting in the ImageMagick result; the pixels look very 'natural'. It is the result of several techniques that are combined in the scripted operations.

The resampling method and resampling 'filter' attempt to preserve detail without creating aliasing, blocking, and ringing artifacts. That also avoids issues with predictable subjects, like architecture with repetitive detail such as bricks or (roof) tiles. Then a simple deconvolution sharpening is applied to restore some of the inevitable resampling blur. All is done while preserving accurate color blends.

Of course, at this scale we are able to spot a lot more issues if there are any.

Cheers,
Bart
 

Doug Kerr

Well-known member
Hi, Bart,

Thanks for the feedback. It's amazing how much better (even at this extreme display zoom level) ImageMagick retains subtle detail in folds and seams of the shirt and details in the hat. Edges are clean and still pixel sharp. Of course it is possible to boost the sharpening a bit more if one prefers crunchier results, without risk of amplifying artifacts.

Yes, it is quite wondrous. Of course Picture Publisher 10 is rather old (2001!), so it is not an important "competitor". (Still, it is what I use for editing, for many reasons.)

And as for this particular shot, it is fitting that the hat did so well, since this was taken at an outing of the Red Hat Society!

Even at this zoom level there is nothing distracting in the ImageMagick result; the pixels look very 'natural'. It is the result of several techniques that are combined in the scripted operations.

The resampling method and resampling 'filter' attempt to preserve detail without creating aliasing, blocking, and ringing artifacts. That also avoids issues with predictable subjects, like architecture with repetitive detail such as bricks or (roof) tiles. Then a simple deconvolution sharpening is applied to restore some of the inevitable resampling blur. All is done while preserving accurate color blends.

Yes, it all seems very nice.

Thanks for making this tool accessible to us.

It would be really nice if you could find the time to prepare a tutorial on the use of the batch file. I think that is not at all obvious.

Best regards,

Doug
 

Asher Kelman

OPF Owner/Editor-in-Chief
Marvelous work!

Is this still command-line work in Windows, or have some good souls packaged it for the Mac?

Asher
 
Thanks for making this tool accessible to us.

You're welcome.

It would be really nice if you could find the time to prepare a tutorial on the use of the batch file. I think that is not at all obvious.

Time is in rather short supply.

In essence, the script uses two different resampling algorithm/filter combinations from the various choices that ImageMagick has to offer. One approach is generic but does best for upsampling; the other is better for down-sampling, but for experimentation purposes the user is allowed to use either.

That's why, after choosing the required size (specified by ImageMagick parameters, such as a percentage (e.g. 25% or 200%) or width x height in pixels), the user is still offered a choice between up- or downsampling.

In addition, optional sharpening is offered because interpolation tends to reduce (micro-)contrast.

There are also defaults that can be used by just hitting the Enter key. Down-sampling is the default, and common sharpening settings are selected. I tried to indicate those defaults in the dialog text.

Under the hood, the input file is gamma-adjusted to approximately linear gamma and to another, different gamma; these are resampled and blended together to reduce halo effects, and then the composite is restored to the original gamma. Color profiles are also maintained, and files are saved at maximum quality, with 'lossy' compression turned off or at a minimum. This could be changed/tweaked in the script itself for specific requirements and smaller file sizes, but that was beyond the purpose of experimentation (and it made sure everybody used the same settings, for more predictable comparison of results).
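As a very rough approximation of that gamma handling (a sketch only, not the actual script; ImageMagick v6 syntax, where '-colorspace RGB' denotes linear RGB, and the filename suffix is made up):

```shell
#!/bin/sh
# Convert to (approximately) linear light, resize there, then return
# to sRGB. The real script blends two gamma variants to reduce halo
# effects; this single-gamma version is only a crude stand-in.
in="input.jpg"
out="${in%.*}_web.jpg"         # i.e. input_web.jpg
convert "$in" -colorspace RGB -filter Lanczos -resize 800x800 \
        -colorspace sRGB -quality 100 "$out"
```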

For more detailed background info about the development, and technical discussion with (at the time Prof.) Nicolas Robidoux, you can have a look here.

Cheers,
Bart
 
Marvelous work!

Is this still command-line work in Windows, or have some good souls packaged it for the Mac?

Hi Asher,

My version is a Windows dialect of the batch/script command-line language. The differences from the Unix/Mac style are not that big, but I cannot test whether everything works without a Mac. The main differences are a backslash '\' instead of a caret '^' at the end of long lines, the use of single quotes instead of double quotes, and single % signs instead of double ones; there may be a few other Mac-specific differences, like single or double backslashes in some places that specify directory paths.
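As a hypothetical illustration of those differences (not a line from the actual script), here is the same wrapped command in both dialects:

```shell
#!/bin/sh
# Unix/Mac shell: a backslash continues the line, $var expands a
# variable, and a literal percent sign needs no doubling.
# The Windows batch equivalent (caret continuation, %var% expansion,
# doubled %% for a literal percent) would read:
#   convert "%in%" -resize 50%% ^
#           "%out%"
in="input.jpg"
out="half_input.jpg"
convert "$in" -resize 50% \
        "$out"
```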

Cheers,
Bart
 

Doug Kerr

Well-known member
Hi, Bart,

You're welcome.

Time is in rather short supply.

I understand. I'm afraid I believe that, "If you want something done right, ask a busy person."

In essence, the script uses two different resampling algorithm/filter combinations from the various choices that ImageMagick has to offer. One approach is generic but does best for upsampling; the other is better for down-sampling, but for experimentation purposes the user is allowed to use either.

That's why, after choosing the required size (specified by ImageMagick parameters, such as a percentage (e.g. 25% or 200%) or width x height in pixels), the user is still offered a choice between up- or downsampling.

In addition, optional sharpening is offered because interpolation tends to reduce (micro-)contrast.

There are also defaults that can be used by just hitting the Enter key. Down-sampling is the default, and common sharpening settings are selected. I tried to indicate those defaults in the dialog text.

Under the hood, the input file is gamma-adjusted to approximately linear gamma and to another, different gamma; these are resampled and blended together to reduce halo effects, and then the composite is restored to the original gamma. Color profiles are also maintained, and files are saved at maximum quality, with 'lossy' compression turned off or at a minimum. This could be changed/tweaked in the script itself for specific requirements and smaller file sizes, but that was beyond the purpose of experimentation (and it made sure everybody used the same settings, for more predictable comparison of results).

For more detailed background info about the development, and technical discussion with (at the time Prof.) Nicolas Robidoux, you can have a look here.

Thanks for that insight into what goes on "under the hood", and for the useful reference to the work of Prof. Robidoux.

Best regards,

Doug
 

Doug Kerr

Well-known member
Here is another example of different downsizing approaches.

Again here, the image on the left is a crop of an image of Carla that was downsized using Bart's batch file (actually, a version with my common parameters "hard wired"), to a maximum size of 800 px (with sharpening set to 100%), while on the right is a corresponding crop of the same image downsized to the same dimensions using the non-trivial resize algorithm in Picture Publisher 10:

Downsizing_comparison_02.jpg

Here we see the two entire downsized images on display in the editor:

Downsizing_comparison_03.jpg

Very interesting.

Best regards,

Doug
 

nicolas claris

OPF Co-founder/Administrator
Thank you Doug!
You just make me regret that Bart doesn't use a Mac!
Who will be able to translate that script for Mac??? !!!
I need that to downsize my 645D and 645Z files when posting on the web!
 
Thank you Doug!
You just make me regret that Bart doesn't use a Mac!
Who will be able to translate that script for Mac??? !!!
I need that to downsize my 645D and 645Z files when posting on the web!

Hi Nicolas,

By using a simple text editor you could at least try the following changes (without the ' quotes) :
1. Replace the single caret '^' character at the end of lines by a single backslash '\'
2. Replace brackets '(' or ')' with the escaped sequences '\(' or '\)'
3. Replace double percentage '%%' signs with a single '%' sign. Only doubled signs are to be replaced, not single ones (so a triple '%%%' becomes a double '%%').
Additionally,
4. Things between two double quotation " ... " marks, so literally textual quotations, may need to be placed between single standard ' ... ' quotation marks, I'm not sure.

Spaces at the beginning of wrapping lines need to be preserved (almost every character has a function). Of course literal quoted text can be adjusted to any language or wording one prefers.

Lines that start with ECHO are just text that is fed back to display on your screen, a kind of user feedback.

Some of the SET commands use some presets after the '=' sign, those can be changed, but I'd leave that kind of tweaking until the converted script runs.

Then the line that reads: -quality 100 -sampling-factor 4:4:4 ^
(the ending caret would become a backslash for Macs) can be changed to allow more JPEG compression (e.g. quality 70) for Web display. The sampling factor can also allow more chroma sub-sampling (e.g. 4:2:2, or even 4:2:0), which is perceptually hardly noticeable but does allow more compression and faster web display; that would be risky, though, for JPEGs sent to off-site printing facilities, which may do additional profile conversions.

I have no way to check it on a Mac, but those are the things I'd look at to see if anything happens besides error messages. JPEGs should convert without issues; TIFFs may cause issues, because there are many ways a TIFF can be used to store info, including multiple images.

As a consolation for Nicolas, Lightroom does a pretty decent job of down-sampling; it may be very hard to see the minor additional improvement that ImageMagick can squeeze out of most image content. But there are always cases where it pays to go the extra mile, so it would be nice if we could get it running on Macs as well.

Cheers,
Bart
 

Asher Kelman

OPF Owner/Editor-in-Chief
Bart,

This is great stuff! Worth having Windows! Should work under Parallels... perhaps?

Does Photoshop CC 2014 have the same downsizing as Lightroom?

Asher
 
Bart,

This is great stuff! Worth having Windows! Should work under Parallels... perhaps?

Hi Asher,

I suppose it should run under Parallels when it emulates a Windows environment. Installing the Windows version of ImageMagick there (the x64 DLL build, 16-bit/channel) would maybe be the bigger challenge, but from there it should be a piece of cake, since the script uses nothing but plain command-line code to invoke several ImageMagick functions in sequence.

Does Photoshop CC 2014 have the same downsizing as Lightroom?

I've seen no benefit to upgrading since CS6, so I can't tell from experience. I do know they've added a slightly modified option for resampling in Photoshop CC and CC 2014, but I do not know if it's close to what Lightroom does by default (LR has been better at resampling for a long time already). Lightroom also offers an option to do (different levels of) additional sharpening after resampling, e.g. for display or for print to different media, which would be a separate activity in Photoshop. Adobe has different people working on Lightroom and Photoshop, so it wouldn't surprise me if they are separate developments by different engineers, with different results.

Cheers,
Bart
 

Doug Kerr

Well-known member
I took a quick browse through the Luminous Landscape thread Bart referenced, which deals with the use of ImageMagick to perform up- and downsizing. The important messages are from Bart and Professor Robidoux.

One can see from just a glance at the thread the enormous complexity and sophistication of this matter, and how nicely Bart has tamed it for us.

I just want to express my gratitude that we have Bart among us as our principal scientist laureate, and a wonderful friend as well.

Best regards,

Doug
Just an old telephone engineer
 

Doug Kerr

Well-known member
Hi, Bart,

I note that in your batch file, whether or not deconvolution sharpening is applied, the output file is developed using lossless JPEG encoding.

I assume you have decided that there is a benefit from this choice. It would be nice to hear about that. Is it a real advantage after resizing given that the source file might well not have been encoded in lossless JPEG?

Will any JPEG decoder likely encountered today be able to decode that form?

Is there a substantial file size penalty from the use of this mode? Of course, when downsizing, a modest difference in file size is usually not a critical issue.

Thanks.

Best regards,

Doug
 
Hi, Bart,

I note that in your batch file, whether or not deconvolution sharpening is applied, the output file is developed using lossless JPEG encoding.

I assume you have decided that there is a benefit from this choice. It would be nice to hear about that. Is it a real advantage after resizing given that the source file might well not have been encoded in lossless JPEG?

Hi Doug,

That's correct: the script saves in the same format as the original file, and JPEGs get saved as JPEGs. However, since the script was developed as a unified method to test all sorts of scenarios that might lead to better results, I made the choice (to allow comparisons) to always save in the highest quality possible for the specific file format. That allowed me to concentrate on resampling quality instead of file settings.

Now that the chosen methods have reached a more stable state of 'best-in-class' for standard methods (thus excluding exotic mixes of vectorized content and pixel resampling, and/or computed super resolution), one can tweak the settings for reduced quality and file size.

The benefit of the current JPEG settings is that the files can be safely used as material for off-site printing services, which might still want to do a small amount of additional conversion, e.g. for output profiles. But for Web publishing one can reduce the quality a bit to achieve lower bandwidth requirements and faster loading times. How much quality can be dropped is a personal choice, and may also differ a bit depending on image content.

Even for a given 'quality' setting, the actual resulting file size will differ due to some content (e.g. with lots of smooth gradients) allowing more compression than other (e.g. lots of fine detail or a lot of noise) content.

Will any JPEG decoder likely encountered today be able to decode that form?

The encoder is using a non-standard JPEG library patch/function, so standard decoders could have a problem. I have not been informed of any issues, but it is theoretically possible. Removing the "-compress Lossless" instruction would solve that for JPEGs, but may have a negative effect on other file formats.

EDIT: One could of course add some code to the script to optionally resave the losslessly saved JPEG in a more 'lossy' fashion, or ask earlier in the script whether Lossless should be used. Alternatively, one could set the compression to -quality 99, which would disable the Lossless setting for JPEGs but possibly retain it for other formats.

But even then, some users still prefer specific settings over other common settings. ImageMagick can be set to add much more refined details to a file, like PPI tags and EXIF data; there are too many possibilities to cover them all in this post, but here is a list of options that can be set when writing JPEGs.

Is there a substantial file size penalty from the use of this mode? Of course, when downsizing, a modest difference in file size is usually not a critical issue.

For some image content the file size differences/savings can be considerable, like an order of magnitude or more. ImageMagick also has an option to produce an approximate file size in bytes. It achieves that by iterative trial and error until it finds a 'quality' setting that achieves the desired size. But besides being relatively slow, it will also create variable image quality, due to some content being more compressible than other content.

So the actual 'quality' setting that will be used can vary a lot. A bit of trial and error on the side of the user, together with the typical type of image content one generates, will suggest what file size will be produced on average (e.g. an 800x533-pixel image with a size of 100-200 kB is often reached by a quality setting of 60-80, but it can be much lower on noisy, very detailed subject matter, which could create artifacts).
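The approximate-file-size option mentioned above is, to my knowledge, ImageMagick's 'jpeg:extent' define (a sketch; the filenames and target size are just examples):

```shell
#!/bin/sh
# Ask the JPEG encoder to search, by trial and error over quality
# settings, for an output file no larger than roughly the given size.
target_kb=150
convert input.jpg -resize 800x800 \
        -define "jpeg:extent=${target_kb}kb" output.jpg
```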

Cheers,
Bart
 