PhotoAcute: How to downsample SuperResolution images to 50% size

Hi folks,

In another thread I was asked a question, and I think the answer deserves a thread of its own. Understanding the issues explained in this thread also helps with finding solutions for other recurring downsampling procedures.

The question was: "A question to Bart and of course anybody else: Do you mind having a look at that 'downrez from PhotoAcute' action? It's here
and is used to best automate the downrez of the 380 MB TIFFs generated from PhotoAcute's big DNGs (with 2x size, aka superresolution)".

PhotoAcute can increase single-image resolution by combining multiple under-sampled images. In that 'SuperResolution' procedure the resulting image is enlarged 2x (to ensure the source images remain under-sampled), with a higher resolution than would be possible by enlarging a single image. While that is useful for large-format output, it also poses a burden for other uses of such images, such as (panoramic) stitching. Hence the need for downsampling to 50% of the 'SuperResolution' size.

As I've explained on one of my webpages, proper downsampling requires precautions to avoid introducing disturbing artifacts. In short, one needs to apply a 'low-pass filter' prior to the actual downsampling.
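To illustrate the principle outside of Photoshop, here is a minimal Python/Pillow sketch (the file name is just an example, and the small Gaussian is only a crude stand-in for a proper low-pass filter; more suitable filters are discussed below):

```python
# Minimal sketch: naive 50% downsample vs. low-pass filtered downsample.
# Requires Pillow; 'rings1.png' stands in for the RGB-converted test target.
from PIL import Image, ImageFilter

img = Image.open("rings1.png").convert("RGB")
w, h = img.size

# Naive downsample: frequencies above the new Nyquist limit fold back
# into the result as low-frequency aliases.
naive = img.resize((w // 2, h // 2), Image.BICUBIC)

# Pre-filtered downsample: apply a (crude) low-pass first, then resize.
# A small-radius Gaussian only approximates a proper low-pass filter.
lowpassed = img.filter(ImageFilter.GaussianBlur(radius=0.5))
filtered = lowpassed.resize((w // 2, h // 2), Image.BICUBIC)

naive.save("rings1_naive_50pct.png")
filtered.save("rings1_prefiltered_50pct.png")
```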

Low-pass filters come in many varieties, and they are all trade-offs between avoiding artifacts and retaining resolution. The original question above can be answered by applying the Photoshop downsampling action to the test image introduced on my webpage, mentioned above. I have to repeat a warning from that page: the downloaded test target (1000x1000 pixels, 1294 KB) needs to be converted to RGB mode in Photoshop before using it!

Applying Michael Fontana's action (the downsampling part) to the test image produces the following result:
Rings1_MF_PA-action.png

To rephrase the actual question at hand: is this result good (enough) or not?

The answer: well, it could be better... As an example of what could be considered an optimal compromise between resolution and artifacts, here is the IMHO better result from the ImageMagick application (as mentioned on my webpage) with 'Sinc' low-pass pre-filtering:
rings1_50pct_sinc.png


One may ask: in what sense is that a better result?

Okay, what we need to look at is the central square area of 50% size (250x250 pixels). In that central 50% square we will find spatial frequencies that can be reliably reproduced, and anything outside that square area should ideally be featureless gray, because it cannot be reliably reproduced (it will show various low spatial frequency aliases of the original frequencies). The first image shows several aliasing artifacts inside the area that should ideally be flawless. However, the artifacts are of relatively low modulation, so they could be acceptable given the simpler workflow, which can be kept 100% inside Photoshop.

Given the fact that the task at hand, resampling to 50% size while avoiding aliasing artifacts, is pretty well defined, it may be possible to create 'a better mousetrap' while offering the workflow benefits of staying in Photoshop.

I'll give it a try myself, but I hereby also challenge other OPF members to improve on the action provided by Michael and provide a better 50% downsampling action.
Remember, the goal is to produce an optimal 50% downsampling action which retains spatial frequency integrity inside the central 250x250 pixel area, and as low as possible modulation of aliasing outside that area. Don't forget to convert the GIF image target to RGB mode before you let Photoshop do its/your thing!

A side effect of such a Photoshop action can be that it is universally applicable to downsampling, e.g. from original size down to Web publishing size.

Good luck,
Bart
 

Asher Kelman

OPF Owner/Editor-in-Chief
Hi Bart,

I like the project; however, objectively evaluating such an image without resorting to self-hypnosis or a trance is tough, especially on an LCD screen, both with and without ETOH!

Do you have a way of extracting a diagonal line of density on each image, so we can have a graph and therefore see the images as a line graph? That way, several methods can be closely compared. IOW, I'm looking for a more objective way of looking at the pictures.

I'd also like to have a graph of the first and second derivatives of the initial curve.

I could do the same with a piece of film and a densitometer or spectrophotometer, but I'd have to set that up. Not a bad idea, but one should be able to do this with one keystroke!

Asher
 
Do you have a way of extracting a diagonal line of density on each image, so we can have a graph and therefore see the images as a line graph?

There are possibilities offered by several applications to do that, but it is not always the diagonal that's most telling. Sometimes the aliasing is more prominent in the horizontal/vertical dimension.

I'll wait for downsampled input to see how we could get an objective figure of merit (FOM) that fits the samples best. What I at least expect as an improvement is a central 250x250 pixel crop area without visible artifacts and decent sinusoidal modulation. Perhaps the rest of the image could be judged by standard deviation (0=best).
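As a rough sketch of such a FOM (assuming NumPy/Pillow and a grayscale copy of a 500x500 downsampled candidate; the file name is just an example):

```python
# Rough figure-of-merit sketch for a 500x500 downsampled test target:
# the region outside the central 250x250 crop should ideally be featureless
# gray, so a lower standard deviation there is better (0 = best).
import numpy as np
from PIL import Image

result = np.asarray(Image.open("candidate_50pct.png").convert("L"), dtype=float)
h, w = result.shape                      # expected to be 500 x 500
cy, cx = h // 2, w // 2

# Mask out the central 250x250 area, where real detail is expected.
outer = np.ones_like(result, dtype=bool)
outer[cy - 125:cy + 125, cx - 125:cx + 125] = False

print(f"Std. deviation outside the central crop: {result[outer].std():.2f}")
```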

Bart
 

Asher Kelman

OPF Owner/Editor-in-Chief
There are possibilities offered by several applications to do that, but it is not always the diagonal that's most telling. Sometimes the aliasing is more prominent in the horizontal/vertical dimension.

I'll wait for downsampled input to see how we could get an objective figure of merit (FOM) that fits the samples best. What I at least expect as an improvement is a central 250x250 pixel crop area without visible artifacts and decent sinusoidal modulation. Perhaps the rest of the image could be judged by standard deviation (0=best).
Thanks Bart,

I like this work and will try to extend it if I can see how to do the graphs I'm looking for.

Asher
 

Michael Fontana

pro member
Good evening!

Thanks, Bart, for starting this thread!
But before discussing, I'd like to ask a basic question:

We're talking about a downsample ratio of 50%; would the result be any better with that ratio at 75% (= factor 1:4)?

I don't think that the stitching results - the blender uses Lanczos algorithms - will necessarily be worse with a 1:4 downsampling prior to stitching.

I still remember that display rule that says not to look at 75%, etc., but at 50% or 25% when judging images. Does this rule apply to downsampling as well, or is it display-related only?


As I wanted a wider comparison, I tried PS's native possibilities:

Adobe recommends bicubic sharper for downsampling, no good:



bicubic-sharper.jpg





okay, let's use bicubic-only:

bicubic-only.jpg



and finally bicubic smoother:


bicubic-smoother.jpg
 

Michael Fontana

pro member
Now, if I change only the final selective sharpening in the provided action, from 100/0.3 to 100/0.1 (amount/radius), but still keep the initial blurring routine and, after the downsize, the Blend-if as shown at Bart's site:


blur_bicubic_100_01.jpg




BTW: Bart, I'm glad you accept the workflow reason for staying in PS. When stitching, all these operations are multiplied by the number of single shots, so an efficient workflow is crucial.

Getting better results with PhotoAcute than before means accumulating image data, with 380 MB DNGs and TIFFs at the same size. Yes, there is no substitute for real image data - inventing is another strategy - but the entire handling of these semi-finished images has to be fast and smooth.
 
Good evening!

Thanks, Bart, for starting this thread!
But before discussing, I'd like to ask a basic question:

We're talking about a downsample ratio of 50%; would the result be any better with that ratio at 75% (= factor 1:4)?

Not necessarily better, just different aliasing risks. I also realise that the SuperResolution process, while effective in creating real resolution enhancement, doesn't quite double the resolution. Another question for a different thread would be: how much resolution do we really gain? It might well turn out that we shouldn't downsample to more than 71% (a factor of 1/Sqrt(2)) to preserve as much real resolution as is available; we'll see (I have an idea how to test it, but I have to see if it works).

I still remember that display rule that says not to look at 75%, etc., but at 50% or 25% when judging images. Does this rule apply to downsampling as well, or is it display-related only?

It is mainly related to the quick-and-dirty display algorithms used in Photoshop. Real downsampling takes more processing time to do right; that's why most software does it wrong ...

As I wanted a wider comparison, I tried PS's native possibilities:

Adobe recommends bicubic sharper for downsampling, no good:

Yes, I've already exposed that method as damaging. When I did my webpage on downsampling, it already became very clear that Bicubic Sharper doesn't improve image quality at all; on the contrary.

okay, let's use bicubic-only:

Actually, at such a modest downsampling percentage, it doesn't do all that badly.

and finally bicubic smoother:

That one probably loses too much fine detail, but then it is originally intended to be used for upsampling. It could be useful on very high resolution architecture (e.g. already sharpened) to prevent any aliasing from occurring during the downsampling.

I haven't been sitting still, and I've made a Custom pre-blur filter that has characteristics that are very similar to a 'Sinc' or 'Lanczos' filter (better than a Gaussian blur, for our purpose). The Low-Pass filter can be created in Photoshop with the Filter>Other>Custom... dialog with the following settings:
PS_Custom_LowPass_50pct.png


After filtering and bicubic downsampling to 50%, the resulting image can optionally be Smart Sharpened with a Lens blur setting of Amount=30-50 and Radius=0.1, preferably on a luminosity blending layer (as suggested in earlier threads). Depending on the subject matter (not all subjects are as critical as the test target), the sharpening can be increased a bit until aliasing becomes apparent.
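For those who would like to test the same chain outside of Photoshop, here is a rough Pillow sketch (the 3x3 kernel weights and the sharpening settings below are illustrative placeholders, not the values from my screenshot, and UnsharpMask only stands in for Smart Sharpen):

```python
# Sketch of the chain: custom low-pass kernel -> bicubic 50% -> light sharpening.
# The 3x3 weights are placeholders (NOT the screenshot values), and Pillow's
# UnsharpMask is only a stand-in for Photoshop's Smart Sharpen.
from PIL import Image, ImageFilter

img = Image.open("superres_2x.tif").convert("RGB")
w, h = img.size

# Photoshop's Custom filter is a 3x3 (or 5x5) convolution; Pillow's Kernel
# divides by 'scale' (default: the sum of the weights), like the Scale field.
weights = (1, 4, 1,
           4, 16, 4,
           1, 4, 1)
lowpassed = img.filter(ImageFilter.Kernel((3, 3), weights))

half = lowpassed.resize((w // 2, h // 2), Image.BICUBIC)

# Optional gentle post-sharpening; increase carefully until aliasing shows.
sharpened = half.filter(ImageFilter.UnsharpMask(radius=1, percent=40, threshold=0))
sharpened.save("superres_50pct.tif")
```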

Bart
 

Michael Fontana

pro member
>that the SuperResolution process, while effective in creating real resolution enhancement, doesn't quite double the resolution<

I agree, from my rather non-scientific experience; my guess is about 40-60%.

bicubic smoother: >That one probably loses too much fine detail<
There seem to me to be trade-offs when downsampling between light details and 3/4-tone details. That's why I added the Pmatix plugin at the end.

Did you realise the big influence of changing the Smart Sharpen radius settings after the downsize? --> image in post 7

Thanks for the Custom pre-blur; I'll give it a try on some images, not targets, as the latter is critical, which is good!
 

Asher Kelman

OPF Owner/Editor-in-Chief
Bart, I'm concerned about the residual radial artifacts. These might be more clearly represented if we could track and plot the width and intensity of each circumference, unrolled into a straight line. We would see periodic peaks of various heights.

That's the way I could look at this more objectively. So one could look at, say, 5 circles, from the first outwards to the fifth.

Looking at the psychedelic mesmerizing circle pattern after downsizing is subjectively difficult. However, I do like your latest low pass filter.


Asher
 
Did you realise the big influence of changing the Smart Sharpen radius settings after the downsize? --> image in post 7

Yes, that's why it helps to get the best possible Low-pass filter. A good filter requires less post-sharpening and thus fewer potential artifacts in the well resolved spatial frequencies.

Thanks for the Custom pre-blur; I'll give it a try on some images, not targets, as the latter is critical, which is good!

Photoshop's Custom filter is rather crude; it is limited to integers in the -999 to 999 range. If you like the simple Custom filter approach, there is a free plugin available that is much better, and it shows in the results (and allows more post-sharpening). I don't know whether Photoshop requires loading the saved setting before each use (it may differ between operating systems), which would negatively impact workflow speed.

If you decide to use that better plugin, then the following values should be pasted in the middle 9 cells:
2.29529 625.077 2.29529
625.077 7490.51 625.077
2.29529 625.077 2.29529
the rest remains 0.0, scale factor=1.00000, Add Constant=0.0, and the Autoscale box should be ticked (after which it can be saved for future use).
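For those who would rather verify the effect of this kernel outside of Photoshop: with Autoscale ticked, the weights are simply normalised by their sum before the convolution. A small Python sketch (NumPy/SciPy/Pillow assumed; file names are examples, and the bicubic 50% resize then follows as before):

```python
# Sketch: apply the 3x3 kernel above with 'Autoscale' behaviour, i.e. the
# weights are normalised by their sum so overall brightness is preserved.
import numpy as np
from PIL import Image
from scipy.ndimage import convolve

kernel = np.array([[2.29529, 625.077, 2.29529],
                   [625.077, 7490.51, 625.077],
                   [2.29529, 625.077, 2.29529]])
kernel /= kernel.sum()                          # Autoscale

img = np.asarray(Image.open("superres_2x.tif").convert("RGB"), dtype=float)

# Convolve each colour channel separately with the normalised kernel.
filtered = np.stack([convolve(img[..., c], kernel, mode="nearest")
                     for c in range(3)], axis=-1)

Image.fromarray(np.clip(filtered, 0, 255).astype(np.uint8)).save("prefiltered.tif")
```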

I'll try some other filters as well, but it will probably not change the results much.

Bart
 

Michael Fontana

pro member
Good morning

I still can't get through; the server times out, or it takes too long to send a reply. Maybe a DNS problem? You can find my mail address on the website; for spam reasons, I don't want to post the address here. I'm on Mac, thanks!

Side note: Yesterday I saw some prints from the lab, done a while ago with Bart's Gaussian Blur/Blend-if method, including a downsize from A3/300 to about A5/300. These artworks ranged from 80% black to 98% black, with very fine traces of white flashing through. Not too easy at all...

Even your Gaussian Blur/Blend-if method already kept these fine details very well, without creating artefacts. Pretty amazing how a 180 cm-wide painting still showed all these details in the downsized print! Today I had a look at all the steps and found this decent curve, which boosted contrast in the 85% tones:

curve.jpg


Of course, I don't mind having an improved downsizing at all!
 

Ray West

New member
Hi Bart & Michael,

Is there a chance that, in solving the downsizing for the specific image shown here, the resulting technique will not be the best for a more normal/random image? In other words, I suspect a formula/algorithm could be generated for perfectly downsizing concentric circles, but it could produce peculiar results with parallel straight lines (though I think vice versa would be more likely ;-)).

Have you looked at what goes on at the pixel level for the bicubic and other sampling algorithms? It is interesting to see how the blurring (well, anti-aliasing) just pushes the interference sort of further away. I suspect that, depending on whatever display/print resolution you use, it could then generate annoying harmonics. I'm not sure whether colouring the circles might give a more irritating result. I think that if the fringing is a problem, then whatever downsizing method you come up with may have to be the last thing you do to the image. Are there some perfect, hand-made rules that can be applied, assuming there were some way of getting them into a piece of software? Personally, I think it is not possible for every case, but perhaps it is good enough to do the best you can. (That probably reads wrong.)

Obviously I have not followed all of your threads on this, and maybe the foregoing is not necessary, but it was something I have been wondering about.

Best wishes,

Ray
 

John Sheehy

New member
Yes, there is no substitute for real image data - inventing is another strategy - but the entire handling of these semi-finished images has to be fast and smooth.

The biggest obstacle to good imaging is the fact that all of our display media, and most of our captures, are of extremely low resolution. The common beliefs that cameras have too many pixels, and that captures should have almost 100% contrast between neighboring pixels, are ruining the fight for truly better imaging. Ideally, cameras should oversample the optics by a good margin, such that 100% views on a 96 PPI monitor *should* be soft and noisy (but we really should be looking at 400 PPI monitors). The noise reduces as you view the image as a whole, if all the processing is done properly.

The thinking on these matters is getting more and more backwards, as people flock to the aliased look and whine about all the noise they see at 100% pixel view and with nearest-neighbor downsizes and their hybrids.
 

Michael Fontana

pro member
Bart, the plugin you sent me - thanks - is for the older OS 9 only; it doesn't work on my OS X box.


The ISP changed the server a few weeks ago, but they deny any misbehaviour on their side.
 
Hi Bart & Michael,

Is there a chance that, in solving the downsizing for the specific image shown here, the resulting technique will not be the best for a more normal/random image? In other words, I suspect a formula/algorithm could be generated for perfectly downsizing concentric circles, but it could produce peculiar results with parallel straight lines (though I think vice versa would be more likely ;-)).

Hi Ray,

The issue we are facing is that proper ways of downsampling exist, but they require using external (to Photoshop) applications/routines, and as such produce a sub-optimal workflow.
Since Michael had a specific requirement, downsample to 50%, it is possible to find a solution within Photoshop that can be used in a downsample action.

My initial 'generic' suggestion was to use a Gaussian blur prior to the resizing step, but its effect is known to be not as good as it could be. The benefit of a Gaussian blur is that it's easily adapted to a variety of resize ratios, but it either loses too much resolution or is not effective enough in suppressing the spatial frequencies that can cause aliasing artifacts. That is caused by the long tails of a Gaussian. Hence the search for a better compromise, given the simpler task of resizing by a fixed amount.
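To make that concrete, here is a small 1-D NumPy sketch comparing frequency responses (the kernel taps are illustrative, not the exact filters used by any of the tools mentioned): after a 50% downsample, anything above 0.25 cycles/pixel of the original aliases, while everything below it should be preserved as well as possible.

```python
# 1-D comparison of low-pass filters for a 50% downsample: how much of the
# wanted detail (below 0.25 cycles/px) is kept, and how much of the
# aliasing-prone detail (above 0.25 cycles/px) leaks through?
import numpy as np

x = np.arange(-6, 7, dtype=float)

# Gaussian with a ~0.8 px sigma: long, slowly decaying tails.
gauss = np.exp(-x**2 / (2 * 0.8**2))
gauss /= gauss.sum()

# Lanczos-3 windowed sinc with its cutoff at 0.25 cycles/px.
cutoff, a = 0.25, 3.0
lanczos = 2 * cutoff * np.sinc(2 * cutoff * x) * np.sinc(2 * cutoff * x / a)
lanczos /= lanczos.sum()

freqs = np.fft.rfftfreq(256)                    # in cycles per pixel
for name, k in (("Gaussian", gauss), ("Lanczos-sinc", lanczos)):
    response = np.abs(np.fft.rfft(k, 256))
    keep = response[np.argmin(np.abs(freqs - 0.20))]   # detail we want to keep
    leak = response[np.argmin(np.abs(freqs - 0.35))]   # detail that would alias
    print(f"{name:12s}  keeps {keep:.2f} at 0.20 c/px, passes {leak:.2f} at 0.35 c/px")
```

With kernels like these, the windowed sinc keeps noticeably more of the wanted detail while passing far less of the detail that would alias, which is exactly the trade-off described above.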

Have you looked at what goes on at the pixel level for the bicubic and other sampling algorithms?

Yes, all the built-in downsampling methods produce potentially uglier results than a Custom filter does. Some of the bicubic versions may not look too bad ... until you sharpen a bit. The Adobe recommendation of Bicubic Sharper is a disaster, to put it mildly.

Are there some perfect, hand-made rules that can be applied, assuming there were some way of getting them into a piece of software?

Yes, it's the basic rule for avoiding aliasing in regularly sampled images: remove the spatial detail that will be smaller than 1 pixel in the end result before downsampling. But that is not as easy as it sounds, because we don't want to attenuate the modulation of all the other detail.

Bart
 

Ray West

New member
Hi Bart,

Thanks for the summary. I think it is probably impossible within Photoshop, maybe possible outside it. I may have a play ;-). It is the age-old problem of doing anything in the digital domain: trying to fail gracefully. I'll try to keep on topic, so I'll leave you to it for the time being.

Best wishes,

Ray
 

Asher Kelman

OPF Owner/Editor-in-Chief
Ray,

So we can see that the 50% solution is only the beginning. Moving artifacts away may deliver them elsewhere, and different shapes and periods are likely to behave in some other manner.

However, this, to me at least, is an important line of investigation. Especially when more and more money is spent on expensive cameras, we need to ask and answer simple practical questions.

Asher
 

Ray West

New member
Hi Mike,

No solution so far....

Back in the early B&W scanner days, it was a sort of rule of thumb to scan at a resolution about 1.5 times the required final print resolution. This image has aliasing artefacts at the edge, even at full size. These are caused by the 'mechanical' representation of the circles being mathematically converted, so to speak, to a digital format/display - giving the usual 'jaggies'. Whatever you do in downsizing this image will just result in a repetition of the same effect, but closer to the centre, since you can't get half a pixel, or half a bit. The half a bit is sort of achieved by the anti-aliasing shading, but this will cause exactly the same sort of artefacts as plain black and white, though perhaps not as noticeable.

Bicubic and the other quick resizing algorithms are too simple/mechanical. Lanczos and others will likely give better results (one reason why Qimage is so popular: it has a wider choice of resizing options).

My version of Photoshop - CS2 - is rather limited in its resizing algorithms. Bicubic is about as good as it gets. My first attempts were to upsize to 2500 by 2500 and then downsize to 500 by 500 with bicubic, trying to get a wider spread for it to downsize from - failure. For fun, upsize to a prime number, 1787 by 1787, say - worse. But there may be a magic number for this image.

So far, the only satisfactory solution I have found to avoid the artefacts is to apply a Gaussian blur of 1 pixel radius to the image and then resize. (I think this was mentioned before.) Of course, just the problem areas could be blurred, but that becomes specific to this image. A lesser blur allows the artefacts to bleed through. It then becomes a trade-off between blur and artefacts. I think we are all going round in circles ;-)

This image is a very, very good cure for 'pixel peeping', try it in yellow and mauve, say...

Best wishes,

Ray
 
Bicubic and the other quick resizing algorithms are too simple/mechanical.

Exactly, which is why the Photoshop implementation is such a disappointment. Adobe should have improved on the merely simple implementation.

Lanczos and others will likely give better results (one reason why Qimage is so popular: it has a wider choice of resizing options).

And for downsampling it also offers a choice in the amount of aliasing suppression. In fact it was reluctantly added at my request, although I've not been able to persuade Mike Chaney to also implement 16-bit/channel output (due to limitations in printer drivers).

My version of Photoshop - CS2 - is rather limited in its resizing algorithms. Bicubic is about as good as it gets. My first attempts were to upsize to 2500 by 2500 and then downsize to 500 by 500 with bicubic, trying to get a wider spread for it to downsize from - failure. For fun, upsize to a prime number, 1787 by 1787, say - worse. But there may be a magic number for this image.

Yes, good thinking. Unfortunately, Photoshop doesn't use regular bicubic resampling; there is more going on under the hood. So this will be a hit-or-miss approach, but a hit is possible at 'some' combination (most likely Bicubic Smoother upsampling and Bicubic downsampling).

So far, the only satisfactory solution I have found to avoid the artefacts is to apply a Gaussian blur of 1 pixel radius to the image and then resize. (I think this was mentioned before.)

Gaussian blur, as implemented in Photoshop in particular, and because of the long tail of Gaussians in general, affects more pixels than the radius setting suggests. My advice is to limit the radius to 0.5, which will affect no more than a diameter of 3 pixels (remember the long tail), or 0.6-0.8, which affects a diameter of 5 pixels (but that will affect visible detail as well).

A lesser blur allows the artefacts to bleed through. It then becomes a trade-off between blur and artefacts. I think we are all going round in circles ;-)

LOL, 'circles' indeed. It's a trade-off without winners, but then the target is more taxing than some images, so we may get away with exceeding the boundaries a little.

This image is a very, very good cure for 'pixel peeping', try it in yellow and mauve, say...

Or as the joke goes:
"Hit me, hit me till it hurts!", said the masochist.

"Noooo", said the sadist.


Bart (who is neither)
 

Michael Fontana

pro member
Thanks to all who are trying to find a solution.

Definitely, there's no Custom plugin for Mac OS X from Reindeer Graphics anymore.
I've spent a bit of time on it, to be sure.

OK, so it seems we have to deal with the PS Custom filter, which, btw, "protects" the highlights. I agree, a better downsizing algorithm would have resolved the problem at its source; but this world is not always ideal, and we have to live within it ;-)

So, coming back to the point of the smart sharpening after the downsize, as it influences the result quite a lot as well:

What happens if we adjust - in that downsize smart sharpening - the shadows, mid-tones and highlights differently?

As an idea, look here
 

Ray West

New member
Hi Michael,

A very quick glance at the link - it is a bit specific to the image. I think that then gives a whole raft of choices. If you wish, post a small image area (again 1000 pixels square) of the sort of thing you are dealing with. We could try the results of what Michael has discovered with the circles sample, a more general solution, and see if it is that much different from a specific method.

As with many things, much like playing the piano - you get reward for effort.

Best wishes,

Ray
 
OK, so it seems we have to deal with the PS Custom filter, which, btw, "protects" the highlights.

Let's go with that, and see how far we can take it.

So, coming back to the point of the smart sharpening after the downsize, as it influences the result quite a lot as well:

What happens if we adjust - in that downsize smart sharpening - the shadows, mid-tones and highlights differently?

I think I understand what you are trying to address - contrast loss in the spatial frequencies we do want to keep - but a contrast boost isn't going to help, because the amount of contrast loss is spatial-frequency dependent. What's needed is a High-Pass filter boost after downsampling. It can, sort of, be implemented as an 'Overlay' blending layer.
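In pixel terms that comes down to something like the following NumPy sketch (the blur radius for the high-pass is only a placeholder, and this is an approximation of the layer-based approach, not a replacement for it):

```python
# Sketch: high-pass boost after downsampling, approximated as an 'Overlay'
# blend of the image with a high-pass version of itself (values in 0..1).
# The blur radius used for the high-pass is only a placeholder.
import numpy as np
from PIL import Image, ImageFilter

img = Image.open("downsampled_50pct.png").convert("RGB")
base = np.asarray(img, dtype=float) / 255.0

# High-pass = original minus low-pass, re-centred on mid-gray (0.5),
# comparable to Photoshop's Filter > Other > High Pass.
blur = np.asarray(img.filter(ImageFilter.GaussianBlur(radius=1.0)), dtype=float) / 255.0
highpass = np.clip(base - blur + 0.5, 0.0, 1.0)

# Standard 'Overlay' blend: below mid-gray the base is multiplied, above it
# is screened, so the high-pass layer boosts local (high-frequency) contrast.
overlay = np.where(base < 0.5,
                   2.0 * base * highpass,
                   1.0 - 2.0 * (1.0 - base) * (1.0 - highpass))

out = (np.clip(overlay, 0.0, 1.0) * 255.0).astype(np.uint8)
Image.fromarray(out).save("downsampled_50pct_hp_boost.png")
```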

Bart
 

StuartRae

New member
How about this one?

Resizing done in stages with bicubic sharper, no filter applied. I don't like the way the center square has gone too bright.

rings1b.jpg


Regards,

Stuart
 
How about that one?

custom+blendiff+KESS1: Not perfect yet

It looks okay, but I don't know that Katrin Eismann action/setting, so I hope it works on any image, not just this one.

Another thanks to Katrin Eismann!

custom+blendiff+KEpenfuerlom, you might notice the different center vs post 27:

That one is not as good for low spatial frequencies, IMHO.

How about this one?

Resizing done in stages with bicubic sharper, no filter applied. I don't like the way the center square has gone too bright.

Actually, Stuart, the center needs to maintain contrast (though it should not be boosted), and it doesn't look too bad. Which/how many steps did you use? There is an automation plugin available that can be used for stepped resolution changes.

Or this one using a small amount of radial blur rather than Gaussian?

While Radial blur might work for this particular image, it's not universal enough; it's too image-specific.

Bart
 