
Resolution issues in external lab printing

Doug Kerr

Well-known member
Recently I replied to an inquiry by Kevin Carter in the ProPhoto Home forum regarding the significance of the resolution indicator(s) in an image file. The thread morphed into a discussion of the requirements/recommendations for file image resolution presented by various printing labs as part of their guidelines for clients.

At one time, these were (although not always clearly stated) cast in terms of the resolution indicator(s), a complete bum steer.

Today, though, the information is usually (although not always clearly stated) in terms of resolution (pixels/in) of the image file as reckoned at the size of the print to be made. The "recommendations" often include one or more of the following:

a. A desirable general resolution value, evidently conditioned on the assumption that a file of lower resolution will not produce a visually-good print of the size of interest. (Often 300 px/in is cited.)

b. An optimal resolution value, generally corresponding to the "native input resolution" of the printer chain to be used for prints in that size range. (MpixPro gives 250 px/in for this for all print sizes they offer.)

c. A "minimum" resolution, evidently conditioned on the assumption that a file of lower resolution will not produce a visually-acceptable print of the size of interest. (Perhaps 100 px/in is cited.)

It would at first seem that criterion (b) makes sense insofar as it absolutely avoids any need for interpolation at the lab, rendering moot any issues about which interpolation algorithm might be used.

But there may be a wrinkle. MpixPro, for example, points out that because of a small uncertainty in the lateral tracking of the paper web in their printers, they prepare the image for printing at an inch size about 1.5%-1.7% greater than the dimensions of the ordered print (it "bleeds").

So if a user who wants an 8" x 10" print diligently resizes the image to 2000 x 2500 px, at the lab it will be upsized to about 2030 x 2538 px (probably by part of the lab's pre-press processing system) before being presented to the "native resolution" interface of the printer chain. Now I don't know whether that kind of slight upsizing is especially trivial or especially challenging - my guess is the latter.
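For concreteness, that bleed upsizing is simple arithmetic; here is a minimal sketch, assuming the low end (1.5%) of MpixPro's stated range. The function name is a hypothetical illustration, not any lab's actual code:

```python
# Sketch of the lab's "bleed" upsizing described above. The 15-per-mille
# (1.5%) expansion is the low end of MpixPro's stated 1.5-1.7% range;
# the function name is illustrative, not any lab's actual code.
def bleed_upsize(px_w, px_h, expansion_per_mille=15):
    """Pixel dimensions after the lab expands both sides by ~1.5%."""
    return (round(px_w * (1000 + expansion_per_mille) / 1000),
            round(px_h * (1000 + expansion_per_mille) / 1000))

# An 8" x 10" order submitted at 2000 x 2500 px:
# bleed_upsize(2000, 2500) -> (2030, 2538)
```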

A fundamental issue of discussion in this area compares two approaches to a situation in which the "original" image file (perhaps an edited crop of the camera original, presumably with an aspect ratio matching that of the print to be made) has pixel dimensions substantially smaller than those implied by the "recommended" resolution as applicable to the desired print size:

a. The user upsizes the image (using his choice of processing software, perhaps choosing among various offered interpolation algorithms and/or parameters) to the pixel dimensions implied by the "recommended" resolution as applicable to the desired print size, and sends that image file to the printing lab.

There, because the recommended resolution may not precisely correspond to the native resolution of the printing system to be used, or perhaps because of the "expansion" issue I spoke to above, the image may be resized before being presented to the printing chain's "native" interface, this probably being done by software that is part of the lab's pre-press processing system.​

b. The user sends the image file as is to the printing lab.

There, the image will be resized before being presented to the printing chain's "native" interface, this probably being done by software that is part of the lab's pre-press processing system.​

It is often heard that (a) is the superior workflow in terms of getting the "best" print.

My question is, why?

Possibilities would seem to include:

1. The software used by the commenter is more sophisticated than that built into the pre-press processing system of the specific (or typical) lab, so that scenario (a) generally or often results in a "better" (more visually-appealing?) print than scenario (b).

2. The software used by the commenter offers the user the opportunity to choose among different interpolation algorithms, and/or to adjust various of their parameters, so that the user can optimize the process for the nature of the scene, or the use to which the print will be put, thus leading, under scenario (a), to an especially "good and appropriate" print.

3. Something I never thought of.

I have almost no personal experience in this matter. When I have an image printed at a "lab", it is usually because I need a large "poster". I always follow scenario (b).

So what can you guys who deal with this matter regularly and in a demanding context tell me about this matter? If you do (a), what software do you use? And do you indeed sometimes switch interpolation algorithms or parameters to best suit the specific task at hand?

Thanks.

Best regards,

Doug
 

Cem_Usakligil

Well-known member
....
So what can you guys who deal with this matter regularly and in a demanding context tell me about this matter? If you do (a), what software do you use? And do you indeed sometimes switch interpolation algorithms or parameters to best suit the specific task at hand?...
Hi Doug,

The timing of your inquiry is perfect. Coincidentally, Bart has been conducting tests to compare various free and commercial resizing algorithms, and I have been his sounding board, lol. I think he will respond soon.
 
So what can you guys who deal with this matter regularly and in a demanding context tell me about this matter? If you do (a), what software do you use? And do you indeed sometimes switch interpolation algorithms or parameters to best suit the specific task at hand?

Hi Doug,

I'm in the middle of some resampling for print tests, although specifically geared at a somewhat different scenario. Nevertheless I can offer some experiences.

The ideal scenario is to find out which printing resolution, usually expressed as PPI, is needed for a given output size. That can be quite a task, because printing labs usually do not say which equipment they use, and certainly not which software (it might be the equipment manufacturer's default or something proprietary). Then, assuming a competent photographer, one can prepare the output files to be as good as possible, and request that no further software 'enhancements' be let loose on the file. For this purpose it helps to establish a contact person at the help desk or, better, in the lab itself. One would typically ask whether the file can be printed unaltered, and at which PPI resolution. If that cannot be answered, then the brand/model of the printer may be helpful (although some can do different resolutions). It also helps if one can find out which color profile their print process uses; some labs even mention that in the FAQ section of their website and offer a download.

If all that fails, one could assume that they know what they're doing, which would call for delivering the file at something like 300 PPI at the desired output size, converted to the sRGB colorspace. Do note that the listed nominal output sizes may deviate from what is actually produced. Prints are cut from the rolls they're printed on and will typically lose a fraction of a millimetre on all sides.
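The "300 PPI at output size" fallback just described amounts to simple arithmetic; a minimal sketch (the function name and defaults are mine, not any lab's specification):

```python
# Minimal sketch of the "300 PPI at the desired output size" fallback
# described above. The function name and defaults are illustrative,
# not any lab's actual specification.
def pixels_for_print(width_in, height_in, ppi=300):
    """Pixel dimensions needed for a print of the given inch size at a given PPI."""
    return round(width_in * ppi), round(height_in * ppi)

# An 8" x 10" print at 300 PPI:
# pixels_for_print(8, 10) -> (2400, 3000)
```

At MpixPro's quoted 250 PPI, the same 8" x 10" print would call for the 2000 x 2500 px file discussed earlier in the thread.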

When the lab does its own pre-processing on your file, it becomes a crapshoot. If they employ the wrong algorithms, then even a small additional resize can wreak havoc, but if they use an algorithm tuned for that task, it may not hurt your data much. One could assume that if they used the wrong algorithm, they would get complaints about unsharp prints, so they're more likely to use something compatible with their own resampling requirements.

Cheers,
Bart
 

Doug Kerr

Well-known member
Hi, Bart,

The ideal scenario is to find out which printing resolution, usually expressed as PPI, for a given output size is needed. That can be quite a task, because printing Labs usually do not comment on which equipment they use, and certainly not which software (might be equipment manufacturer default or proprietary).
Yes, I note that many labs express a resolution in terms of "at least 300 px/in [at print size]". I've generally assumed that this number is not (necessarily) the native input resolution of their printer chain, but rather comes from a rule of thumb concerning the resolution of the finished print from a visual standpoint. That is, even if the native input resolution of their printer were 720 px/in, they would still likely recommend that submitted images have a resolution at print size of "at least 300 px/in".

On the other hand, the MpixPro lab tells us that the native input resolution of their printers is 250 px/in, and recommends that submitted images have that resolution (not "at least" that resolution).

Do note that the listed nominal output sizes may deviate from what they actually produce. Prints are then cut from the rolls they're printed on and will typically lose a fraction of a millimeter on all sides.
Indeed. MpixPro discusses this in fair detail in their "FAQ". They say that they will typically map the image over an area perhaps 1.5-1.7% larger than the "ordered" print size. On an 8" x 10" print, that could result in the "loss" of almost 2 mm on each of the short edges.
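That "almost 2 mm" figure is easy to check; a minimal sketch, with the 1.6% expansion chosen as an illustrative midpoint of MpixPro's stated 1.5-1.7% range:

```python
# Rough check of the "almost 2 mm per short edge" figure quoted above:
# if the image is mapped ~1.6% larger than a 10-inch-long print, how much
# overhang is trimmed from each short edge? (The 1.6% is an illustrative
# midpoint of MpixPro's stated 1.5-1.7% range.)
MM_PER_INCH = 25.4

def trim_per_edge_mm(length_in, expansion=0.016):
    overhang_in = length_in * expansion   # total extra length beyond the ordered size
    return overhang_in / 2 * MM_PER_INCH  # split across the two short edges

# trim_per_edge_mm(10) -> about 2.03 mm per edge
```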

When the lab does its own pre-processing on your file, it becomes a crapshoot. If they employ the wrong algorithms, then even a small additional resize can wreak havoc, but if they use an algorithm tuned for that task, it may not hurt your data much. One could assume that if they used the wrong algorithm, they would get complaints about unsharp prints, so they're more likely to use something compatible with their own resampling requirements.
So, basically, the reason we should resample our images before submitting them to the lab is that we can then be sure that the interpolation algorithm used is a "good" one, perhaps more assuredly better than the one that might be part of the lab's pre-press process?

And do we have evidence of this being so?

Thanks so much for your help in my understanding of this.

As a matter of reference, when I print locally, I do so with Qimage, and always give it the image as I have it (of course the highest-resolution form I have, without having done any resampling in an editor or whatever). I rely on the resampling algorithms in Qimage to mediate (by resampling) between the pixel dimensions of the image file and the actual native input resolution of the printer to be used (which Qimage determines by interrogation of the driver, I believe - 720 x 720 ppi for my Epson Stylus Photo R1900). If I have a preference among the different algorithms that are available, I can make it. But I have never deviated from the defaults (which differ by range of image size).

Now, is that just as naïve of me as, when having a print made at a lab, assuming that their algorithms are "about as good as is available"? Or is Mike Chaney smarter than the guys that design photo printing lab software?

Best regards,

Doug
 
Yes, I note that many labs express a resolution in terms of "at least 300 px/in [at print size]". I've generally assumed that this number is not (necessarily) the native input resolution of their printer chain, but rather comes from a rule of thumb concerning the resolution of the finished print from a visual standpoint. That is, even if the native input resolution of their printer were 720 px/in, they would still likely recommend that submitted images have a resolution at print size of "at least 300 px/in".

AFAIK one factor is the acceptable output quality as a lower limit, which is basically where they expect the complaints to go up. Another consideration is upload time. At the other end there is the overkill limit, where more pixels will only require downsampling at their end and are a waste of upload time.

On the other hand, the MpixPro lab tells us that the native input resolution of their printers is 250 px/in, and recommends that submitted images have that resolution (not "at least" that resolution).

They give mixed information. The 5 samples they request upon registering are supposed to be 300 PPI. The 250 PPI mentioned is not a resolution that I remember being native to any printer. Depending on the specific printer model used, I know of printers for traditional photochemical prints with 254 PPI, 300 PPI, 320 PPI, and 400 PPI. There are printers with higher resolution, but they may use inkjet technology, not CRT or laser exposure of photosensitive paper. There are also electrostatic printers, but they are more commonly used in book printing.

Interestingly, one of the profiles they make available (for softproofing only; images need to be uploaded in the sRGB colorspace) is for a Noritsu QSS-31 PRO, which is 300 PPI ...

Indeed. MpixPro discusses this in fair detail in their "FAQ". They say that they will typically map the image over an area perhaps 1.5-1.7% larger than the "ordered" print size. On an 8" x 10" print, that could result in the "loss" of almost 2 mm on each of the short edges.

I have some difficulty imagining that it's a fixed percentage regardless of size. Typically, nominal 8x10 inch prints are printed on 8" wide rolls that are almost exactly that, 8 inches (within a very small fraction of a millimetre). All 10 or 12 inch long prints can be printed on such a roll, usually without space between them, and are cut very accurately. The only real tolerance is at the moment of exposure, but I cannot imagine that there is a 2 millimetre tolerance in the positioning on all sides due to some sort of tolerance in the paper path. The little tolerance there is would be sideways. I can imagine that the image is exposed slightly larger to accommodate the cutting tolerances, mostly in the length, but it is done at the native PPI resolution of the printer. I imagine that the e.g. 300 PPI data is projected at a slightly larger dimension to accommodate that, without resampling.

Again, 250 PPI seems low, 300 PPI seems more plausible (even if it would mean they have to resize/downsample it a bit).

So, basically, the reason we should resample our images before submitting them to the lab is that we can then be sure that the interpolation algorithm used is a "good" one, perhaps more assuredly better than the one that might be part of the lab's pre-press process?

And do we have evidence of this being so?

These printers are built for speed. The software driving them is not going to use the latest/greatest advances in technology when that slows the operation down. I have enough confidence in my own insights/skills to know that it will be hard to improve the technical quality, and I am willing (if not under a deadline pressure) to wait for the software to optimize my image if necessary.

Having said that, I do know that there are printers, e.g. Durst Lambda, that have excellent resampling algorithms in their software, and that it would be hard to beat that result.

A small anecdote. A 'local' pro lab (using a Lambda Epsilon) I occasionally use made a mistake on a particular order of mine. I ordered a print that had to fit seamlessly in a given frame. The print size was slightly larger than a standard paper size they offer, so I ordered the next larger paper size and made a border around my image which would leave the image area at exactly the correct dimensions. I asked if they could make me a deal, because I had to order a lot of empty space for an image that was just marginally larger than the smaller paper size. Somewhere in that discussion they misinterpreted my intentions and printed the image without a border on the next larger paper size, thus having to interpolate my pixel-accurate data.

When I picked up the print in person, I found out it was too large, and explained what was wrong. They made a reprint to my earlier specifications, and I offered to take the enlarged print off their hands for half price instead of having them throw it away. So I ended up with two prints, one correct and one enlarged, both based on the same (254 PPI) data file. I had to admit, the difference was practically imperceptible.

Thanks so much for your help in my understanding of this.

You're welcome. I know this is a recurring, frustrating search for reliable feedback from the printing service. We're not the only ones looking for good answers, so I hope others will chime in if they have good info.

As a matter of reference, when I print locally, I do so with Qimage, and always give it the image as I have it (of course the highest-resolution form I have, without having done any resampling in an editor or whatever). I rely on the resampling algorithms in Qimage to mediate (by resampling) between the pixel dimensions of the image file and the actual native input resolution of the printer to be used (which Qimage determines by interrogation of the driver, I believe - 720 x 720 ppi for my Epson Stylus Photo R1900). If I have a preference among the different algorithms that are available, I can make it. But I have never deviated from the defaults (which differ by range of image size).

Qimage offers some of the best output quality because it interrogates the printer driver and offers it exactly the resolution it expects for a given paper/ink combination. The interpolation algorithms (I prefer the Hybrid SE method) are very good. I use Qimage to print to file for work that has to be sent off-site for printing. It also allows me to use my preferred sharpening technique on the final output dimensions, which makes it important to know the PPI requirements of the specific printer that a lab uses. It also allows me to make a collage of smaller prints to be printed on a larger paper size if that offers a financial benefit. I can cut out a couple of prints myself very accurately; only when the quantities get impractically large do I have to depend on the lab's cutting tolerances.

Now, is that just as naïve of me as, when having a print made at a lab, assuming that their algorithms are "about as good as is available"? Or is Mike Chaney smarter than the guys that design photo printing lab software?

Mike Chaney is at least as smart as the better ones, but he doesn't write software for a specific hardware solution. Qimage has to adjust to a huge variety of output devices. He has been educating lots of printer users and other software makers. He does admit that there can be better interpolation algorithms out there, but his goal is an optimal trade-off between quality and speed.

Cheers,
Bart
 
Once we know what native PPI the output device uses, we can resize our output to the exact pixel dimensions required, and sharpen at that resolution. To avoid double sharpening, we should be able to switch off further 'enhancements' at the printer's end.

Qimage offers excellent output quality and, given the correct PPI and output size, can print to file. Lightroom 3.6 also produces excellent output, although its print path is optimized for inkjet. Lightroom can also produce print output as files. For average-size output, they leave little to be desired in quality.

However, for large format output there is still room for improvement. Both programs do a good job in balancing upsampled sharpness and suppression of artifacts, better than e.g. Photoshop's Bicubic Smoother resizing.

Another option is to use ImageMagick, which has been expanded with some improvements in its upsampling algorithms. It already used a very good Mitchell-Netravali upsampling filter for resizing, which offers a good balance between (blocking and ringing) artifacts and sharpness. It attempts to avoid jaggies (stair-stepping on diagonal edges), but the regular "-resize" command is limited by the speed enhancements that are inherent in any approach that uses 2 orthogonal resampling passes.

A new resampling command for resizing was added, "-distort Resize", which uses a circular interpolation mask in a single pass. Because it cannot be split into two fast one-dimensional passes, it is much slower, but it also gives higher quality with even fewer artifacts. It defaults to a "-filter Robidoux" setting, but I also achieve excellent (fractionally sharper) results with a "-filter Mitchell" setting.

ImageMagick allows the use of a batch/script file which can process a single image or a series of images (a whole folder) and produce excellent output files once the various parameters are figured out. Image files can be dropped on a shortcut and the output produced automatically, if required in a specific output folder.
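As an illustration of the options just described, here is a small Python sketch that assembles (but does not run) such an ImageMagick command line. The "-filter Mitchell" and "-distort Resize" options come from the discussion above; the 'convert' binary name, the file names, and the function itself are assumptions for the sake of the example:

```python
# Build (but do not run) an ImageMagick command line for the single-pass
# "-distort Resize" upsizing described above. The 'convert' binary name
# and file names are assumptions; '-filter Mitchell' and '-distort Resize'
# are the options discussed in the post.
def im_distort_resize_cmd(src, dst, width_px, height_px, filter_name="Mitchell"):
    return ["convert", src,
            "-filter", filter_name,
            "-distort", "Resize", f"{width_px}x{height_px}",
            dst]

# im_distort_resize_cmd("in.tif", "out.tif", 2400, 3000)
# -> ['convert', 'in.tif', '-filter', 'Mitchell',
#     '-distort', 'Resize', '2400x3000', 'out.tif']
```

Such a list can be handed to a subprocess runner, or joined into a one-line script that images are dropped onto, as described above.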

For extreme enlargements, my recent tests have reconfirmed that BenVista's PhotoZoom Pro (the current version is 4) offers real benefits for subjects with sharp edge details, because it effectively invents higher-resolution data where possible. The earlier-mentioned programs do not increase resolution, they only increase size, but PhotoZoom does increase resolution.

Cheers,
Bart
 

Doug Kerr

Well-known member
Hi, Bart,

Thank you so much for those two very enlightening essays. They have helped me a lot in understanding what is really going on here.

There are indeed many conundrums and paradoxes in this area! And I encounter much "doctrinaire" thinking when I engage many folks in discussions. Often I am urged to "just provide the image at the resolution requested by the lab - they certainly know what's best as input to their printing system and workflow." Of course, I don't plan to send anything to a lab - I'm just trying to understand what is going on.

The MpixPro 250 px/in recommendation is especially curious, in light of the information you gave!

By the way, what they say about "expansion" does not suggest a fixed percentage. I can well imagine that the actual amount depends on many tactical parameters, including the "native" resolution of the printer to be used (as well as its specific "lateral wander" properties) and the dimensional properties of the specific input image being processed.

Thanks again. You continue to be a great resource to us here. I so much enjoy working with you.

Best regards,

Doug
 