
What is your workflow for upressing, sharpening and printing?

Asher Kelman

OPF Owner/Editor-in-Chief
Hi, Asher,




I'm afraid that is misleading.

The reason we need more than one dot per pixel is not (just) because we may use six different kinds of ink, although that of course contributes to the actual dot count.

It is also because the printer cannot control the density of the ink in the dots (of any given color) at all, and can only control the size of the dots to a limited degree, if at all. Thus we must use (at least in part) a scheme in which, for each pixel and for each color, we pepper the pixel's real estate with a pattern of dots and no dots: more dots for a "denser" application of that color of ink, fewer dots for a "less dense" application of that ink.
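To make that "pattern of dots and no dots" concrete, here is a toy halftoning sketch in Python. It uses a generic 4x4 Bayer ordered-dither matrix purely for illustration; real printer drivers use their own proprietary screening, so this is only a sketch of the principle, not of what any Epson or Canon driver actually does.

```python
# A toy ordered-dither: each cell's requested ink coverage (0..255) is
# compared against a 4x4 Bayer threshold matrix, producing a pattern of
# "dot / no dot" whose local dot count approximates the requested tone.
# This is a generic textbook screen, NOT the proprietary dithering used
# by any actual Epson or Canon driver.

BAYER_4X4 = [
    [ 0,  8,  2, 10],
    [12,  4, 14,  6],
    [ 3, 11,  1,  9],
    [15,  7, 13,  5],
]

def dither(coverage_rows):
    """coverage_rows: 2-D list of requested ink coverage levels, 0..255."""
    out = []
    for y, row in enumerate(coverage_rows):
        out_row = []
        for x, level in enumerate(row):
            threshold = (BAYER_4X4[y % 4][x % 4] + 0.5) / 16.0 * 255.0
            out_row.append(1 if level > threshold else 0)   # 1 = deposit a dot
        out.append(out_row)
    return out

# A patch asking for ~25% coverage ends up with roughly one dot per four cells:
patch = [[64] * 8 for _ in range(8)]
for row in dither(patch):
    print("".join("#" if dot else "." for dot in row))
```

Error-diffusion screens (which real drivers favor) produce less regular patterns, but the principle of trading dot count for apparent density is the same.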

I think the dot density described in the manufacturers' admittedly-cloudy specs is for each color.



Doug,

That was from Michael Reichmann in the referenced LL article. But, in a way, he's right if one considers the radial increase in saturation of colors in a gamut plot. The printer builds up density, and hence saturation, by using more of the high-saturation droplets of fixed size. Here "density" in the final print relates directly to how much of the white paper substrate is masked out by those high-saturation droplets.

Asher
 
Here's a question: when is the capability of the printer saturated by the resolution of the input file in pixels/inch? Can a printer continue to represent more and more data, or is it fixed, as Norman Koren suggests, at about 4 to 5 cycles per mm (~127 cycles/inch, or ~254 marks/inch)? That would suggest that it might be pointless to send a printer more than the lowest natural complete "set" for that printer: 360 pixels/inch (Epson) or 300 pixels/inch (Canon).

What we may run into is a file format size limitation. For example, the current TIFF file size is limited to 2 or 4 GB (people are working on a BigTIFF standard). This is due to the file's internal tags, which are offset pointers to locations in the file. Some libraries use smaller (signed integer) pointers, leading to a 2 GB limit; others use (unsigned integer) pointers that allow larger offsets to be defined, leading to a 4 GB file size limit. This is not a printer limitation, but a file format limitation. Qimage solves that by upsampling and sending a constant stream of print data in chunks, instead of first saving a file that then gets sent. This allows printing much larger files (e.g. huge panoramas of many metres in length, at the maximum native resolution (600/720 PPI) that the printer driver can handle). The question then becomes one of time, and whether there is enough input resolution to warrant that maximum resolution.
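To put rough numbers on that 2/4 GB ceiling, here is a small arithmetic sketch. The 24-inch print height and the uncompressed 8-bit RGB data are assumptions of mine, chosen only to show how quickly a 720 PPI panorama runs into 32-bit offsets.

```python
# Why classic TIFF tops out around 2 GB / 4 GB: its internal offsets are
# 32-bit. Signed pointers reach 2**31 - 1 bytes, unsigned ones 2**32 - 1.
# The print height and the uncompressed 8-bit RGB data below are assumptions
# chosen only to put rough numbers on the limit.

signed_limit   = 2**31 - 1           # ~2 GiB
unsigned_limit = 2**32 - 1           # ~4 GiB

def uncompressed_bytes(width_in, height_in, ppi, channels=3, bytes_per_channel=1):
    """Rough uncompressed image size; ignores compression and metadata."""
    return width_in * ppi * height_in * ppi * channels * bytes_per_channel

# Example: a 720 PPI, 8-bit RGB panorama 24 inches tall -- how long can it get
# before the file format (not the printer) becomes the bottleneck?
bytes_per_inch_of_length = uncompressed_bytes(1, 24, 720)
print("max length with signed 32-bit offsets:   %.1f inches" %
      (signed_limit / bytes_per_inch_of_length))
print("max length with unsigned 32-bit offsets: %.1f inches" %
      (unsigned_limit / bytes_per_inch_of_length))
```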

Sometimes printer drivers are programmed poorly, and they may still choke on huge amounts of data, but Qimage tries to avoid that. There may be a printer driver limitation on the number of pixels it can deal with across the total width of its paper roll, but the length of the print should only be limited by the length of the roll of that medium.

One can also ask other questions, like how large will the output become physically, and can the printing outfit even mount it in one piece? If not, then printing it as file segments that will be separately mounted (and transported) may work better. That will also alleviate printer driver stress.

Some limitations:

1. Qimage won't output to file an image of 600 pixels per inch at 30"x40" in size, although it can send it in chunks to a waiting printer.

2. At high file sizes (30,000 pixels width in CS3, for example), Topaz is not available and is grayed out in Photoshop. In Photoshop CC 2014 the same happens with 26"x38" files at 1200 dpi, but I will look for the lower limit.

Again, this is not necessarily an issue with these applications but more likely a file format size limitation. I do know that there may be some artificial limitations on file size in pixels; Lightroom, for example, doesn't like panoramas that are too long. One question, though, is whether the original image detail warrants printing at the maximum resolution. This may also be approached from the angle of viewing distance: does the viewer get close enough to even resolve the detail? 600 or 720 PPI may be stunning at reading distance, but at some 3 feet distance we only need a third of that for the same effect, and we could use the printer's lower native 300/360 PPI resolution setting, thus cutting the file size to a quarter.
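The "a third is enough at about 3 feet" rule of thumb can be roughly reproduced from visual acuity. The 1 arcminute per line pair figure and the 10 inch reading distance in this sketch are common textbook assumptions, not numbers from this thread.

```python
import math

# Required print resolution vs. viewing distance, assuming a viewer who
# resolves about 1 arcminute per line pair (a common textbook figure, not
# a number from this thread), and 2 pixels per line pair.

ARCMIN = math.radians(1.0 / 60.0)

def required_ppi(viewing_distance_in):
    cycles_per_inch = 1.0 / (viewing_distance_in * math.tan(ARCMIN))
    return 2.0 * cycles_per_inch          # Nyquist: 2 pixels per cycle

for distance in (10, 20, 36):             # inches; 10 in ~ reading distance
    print("%2d in viewing distance -> ~%3.0f PPI" % (distance, required_ppi(distance)))
# ~688 PPI at reading distance and ~191 PPI at about 3 feet, which is in the
# same ballpark as the "a third is enough at 3 feet" rule of thumb above.
```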

And we only need 8 bits per channel of data if the output is already converted to the output modality's colorspace. We can even send it a high-quality JPEG in that case, to further reduce the number of bytes to transfer.

Qimage even (optionally) adds some dithering (after profile conversions and Smart output sharpening), to reduce the risk of posterization in smooth gradients. Life would become a lot easier if print services would use Qimage to drive their large format printers ;) .

Cheers,
Bart
 

Asher Kelman

OPF Owner/Editor-in-Chief
Thanks Bart.

For "size" I mean increasing real resolution of the image file to get a better print, of course using one of the printers stated pringing resolutions in dpi but delivering that amount in pixels per inch. At what point does the printer fail to give an even more detailed print? We know that the MTF of the printer at optimal print settings is limited. So what is the maximum pixels per inch that a Canon printer can usefully handle where total file size is not a limitation.

Asher
 
Thanks Bart.

For "size" I mean increasing real resolution of the image file to get a better print, of course using one of the printers stated pringing resolutions in dpi but delivering that amount in pixels per inch. At what point does the printer fail to give an even more detailed print? We know that the MTF of the printer at optimal print settings is limited. So what is the maximum pixels per inch that a Canon printer can usefully handle where total file size is not a limitation.

Hi Asher,

Well, I can print and see detail on my Canon of 600 PPI (1 pixel line spacing) and others have confirmed that they can see detail of 720 PPI on their Epson printers. So the printers really do what they promise.

You can print my Star resolution target at that native printer resolution (no resampling required, gives a 130x130mm resulting target size). It has maximum resolution in all directions, and a few 1-pixel-wide test features near the corners of the central medium grey/gray square to verify that it was printed at the maximum the printer has to offer (and that the paper transport functions well).

Some print media will diffuse the ink before/while it is fully absorbed in the receiving coating, others will remain sharp droplet by droplet. Even some matte media can resolve very fine detail at native resolution, so it's a myth that only glossy media can do that.

One can measure the blurred center of the target with a loupe and use e.g. an accurate caliper to determine the effective resolution (which might be affected by distance between printhead nozzles and the medium itself, and the accuracy of the printhead placement and paper feeding mechanism). Also make sure you do a head alignment with the same paper thickness as the final print first.

The target's resolution in cycles is 144 / pi / (diameter of the central blur disc). I can measure something like a 3.5 mm blur disc diameter on a print, which equates to 144 / pi / 3.5 mm =~ 13.1 line pairs/mm, or 26.19 lines/mm, which is 26.19 * 25.4 =~ 665 PPI, close enough to the advertised 600 PPI.
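That formula, wrapped in a small helper that reproduces the 3.5 mm example above:

```python
import math

# Effective resolution from the measured blur-disc diameter of the 144-cycle
# star target, using Bart's formula: cycles/mm = 144 / (pi * diameter_mm).

def star_resolution(blur_diameter_mm, cycles=144):
    lp_per_mm = cycles / (math.pi * blur_diameter_mm)   # line pairs per mm
    ppi = 2 * lp_per_mm * 25.4                          # lines per inch ~ PPI
    return lp_per_mm, ppi

lp, ppi = star_resolution(3.5)
print("%.1f lp/mm  ->  ~%.0f PPI" % (lp, ppi))          # ~13.1 lp/mm -> ~665 PPI
```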

Cheers,
Bart
 

Asher Kelman

OPF Owner/Editor-in-Chief
Wonderful guide you are, Bart! Thanks again!

Is there a reason why your target for testing the printer must be just 130mm in diameter? It seems that it would be easier to measure the blur diameter with a larger version of the pattern. Or is that 130 mm a fixed size?

I was thinking that if we had a 4x larger version, we could measure the blurred diameter with the unaided eye, and then divide the resulting diameter by 4 before calculating the cycles per mm?

Since you didn't do this in the first place, I guess you had a very good reason!

I guess I can just buy a 10X magnifier with a reticule.


Gage-line 270 degree radius disc-gage.jpg


I found this one (catalog #100576), which covers 0.1 to 6.4 mm; that should work.


Asher
 

Doug Kerr

Well-known member
I ran a resolution test with the star target on my Epson SP R1900, using Epson Premium Photo Paper Glossy.

I used Qimage to do the print. I set the print size to 5.118" × 5.118". At 3685 px × 3685 px, the overall dimensions of the target, that corresponds to 720.00 px/in.

I set the printer driver controls to the greatest quality setting. The printer reported a native resolution of 720 px/in to Qimage.

Thus I would expect that Qimage would do no resampling.

On the print, the diameter of the blurred area was approximately 0.130" as measured with a B&L loupe-type optical comparator (7X).
The boundary was not precisely circular, and the situation was such that I could not read the comparator with great precision. So we might think in terms of an error of perhaps ±0.005".
That would indicate a resolution of 352 cy/in. Simplistically, that would correspond to 704 px/in.
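For reference, the same formula in inch units reproduces these figures:

```python
import math

# Doug's measurement in inch units: 144-cycle target, 0.130 inch blur disc.
cycles_per_inch = 144 / (math.pi * 0.130)
print("%.0f cy/in -> ~%.0f px/in" % (cycles_per_inch, 2 * cycles_per_inch))
# ~353 cy/in -> ~705 px/in; Doug quotes 352 / 704 from the same numbers.
```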

Very nice.

Thanks, Bart, for this nice tool.

Best regards,

Doug
 

Doug Kerr

Well-known member
Hi, Asher,

Is there a reason why your target for testing the printer must be just 130mm in diameter?

It is the whole test target image frame that is 130 mm × 130 mm if considered at the respective assumed pixel resolution (720 px/in for the "Epson" version).

It seems that it would be easier to measure the blur diameter with a larger version of the pattern. Or is that 130 mm a fixed size?

I was thinking that if we had a 4x larger version, we could measure the blurred diameter with the unaided eye, and then divide the resulting diameter by 4 before calculating the cycles per mm?

No. So long as the number of radials is the same (i.e., 144), the distance from the center to where they have a certain spacing (thus corresponding to a certain actual resolution in the print) does not depend on the overall size of the test image.

If the image were larger in pixels, and we still sent it to the printer scaled so that the pixel resolution of the image is the native resolution of the printer, then for a certain actual printer resolution, the blur circle would still be the same size.

If we made it print at twice the size with the same pixel dimensions, the effective pixel resolution at the size of the print would be 360 px/in. Then we would of course see a smaller resolution when measuring the blur circle, since the printer resolution is "throttled" by the pixel resolution of the presented image.

The only way to make the blur circle bigger so we can measure it with a tape measure is to have (many) more than 144 radials in the test target.

Best regards,

Doug
 

Asher Kelman

OPF Owner/Editor-in-Chief
Hi, Asher,



It is the whole test target image frame that is 130 mm × 130 mm if considered at the respective assumed pixel resolution (720 px/in for the "Epson" version) ...
The only way to make the blur circle bigger so we can measure it with a tape measure is to have (many) more than 144 radials in the test target.


Doug,

Intuitively I thought it would come out to something like this answer. Makes sense, or else Bart would have given us that option to use a tape measure, LOL!


Asher
 
So long as the number of radials is the same (i.e., 144), the distance from the center to where they have a certain spacing (thus corresponding to a certain actual resolution in the print) does not depend on the overall size of the test image.

Doug is correct.

The size of the target itself doesn't change the angular distance between the radials; they stay at a period of 144 cycles, and it is the angle between them that determines the separation distance, which is constant at a given distance from the center. So it is the distance from the center (or the diameter of the central blur disc, which can be measured more accurately because it is larger) that tells us how close the features are that can still be resolved.

Therefore one of the nice features for its intended use, testing lens+camera resolution, is that shooting distance (= projected magnification on sensor) is not critical.

But the target also gives us several clues about our printing process. We can determine if the target was printed with the correct (native resolution) settings, and e.g. if the paper feeding mechanism and the printhead positioning in the perpendicular direction are the same (should produce a square target with a circular star). It also shows us how the ink diffusion, which varies between used media, will darken the central star region, and the gradients in the corners.

The size of 130 mm square was chosen to not waste too much ink and paper, yet be large enough to measure when shot at a certain distance/magnification. However, that also allowed me to optimize the target with some line features of exactly known size, and to make sure that the star was pixel-perfect close to its center. It also avoids quality issues due to substandard resampling algorithms, hence the two versions, one for 600 PPI and one for 720 PPI.

The only way to make the blur circle bigger so we can measure it with a tape measure is to have (many) more than 144 radials in the test target.

Indeed, and it's not that hard to create a star with more cycles (I have obviously made an automatic procedure for that), but that would potentially also create more issues with ink diffusion near its center. For those who want to try, here are links to a 798 cycles star (only) version to more accurately measure effective resolution:


The calculation then becomes: Cycles = 798 / pi / diameter.
Multiply by 2 (or start with 1596 / pi) for lines instead of line pairs.

This produces a more than 5.5x larger central blur disc than with the original camera-resolution version, which should allow one to determine even more accurately what resolution the printer+medium are capable of resolving.
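A quick numeric sketch of the expected blur-disc diameters for both targets at a few example effective resolutions; this is just the geometry of the formula above, with no assumptions about any particular printer:

```python
import math

# Expected blur-disc diameters (in inches) of the 144-cycle and 798-cycle
# stars for a few effective resolutions: diameter = cycles / (pi * cy/in).

def blur_diameter_in(resolved_ppi, cycles):
    cycles_per_inch = resolved_ppi / 2.0        # 2 pixels per line pair
    return cycles / (math.pi * cycles_per_inch)

for ppi in (360, 600, 720):
    print("resolved %3d PPI: 144-cycle disc %.3f in, 798-cycle disc %.3f in"
          % (ppi, blur_diameter_in(ppi, 144), blur_diameter_in(ppi, 798)))
# The 798-cycle disc is 798/144 ~ 5.5x larger, hence easier to measure.
```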

Cheers,
Bart

P.S. I've updated the files with an indicator for the two most likely boundary resolutions, depending on the driver settings. It should now be possible to see, without having to measure, roughly how much resolution is generated between the lower and higher bounds. If one cannot get equal or better results than the lower boundary, i.e. 300/360 PPI, then the driver settings are not optimal (assuming one wants to print images with enough detail).
 

Doug Kerr

Well-known member
Hi, Bart,

Indeed, and it's not that hard to create a star with more cycles (I have obviously made an automatic procedure for that), but that would potentially also create more issues with ink diffusion near its center. For those who want to try, here are links to a 798 cycles star (only) version to more accurately measure effective resolution:


The calculation then becomes: Cycles = 798 / pi / diameter.
Multiply by 2 (or start with 1596 / pi) for lines instead of line pairs.

This produces a more than 5.5x larger central blur disc than with the original camera-resolution version, which should allow one to determine even more accurately what resolution the printer+medium are capable of resolving.

Thank you so much.

If printing the "Epson" (720 px/in) target with software that works in terms of the size of the print in inches, the proper print size (to more decimal places than the software will probably pay any attention to) is 3.9375" × 3.9375".

I just tested this target on my Epson SP R1900 printer. I must say that, even though the "indication" is scaled up, it is correspondingly more difficult to conclude just where the radial lines are no longer resolvable (among other things, there is a great deal of fine-structure moiré in that neighborhood). There is no free lunch in mensuration!

The result is that my first attempt produced an indicated resolution higher than 720 lines/in! (A measured circle diameter of 0.706" would correspond to the Nyquist resolution, 720 lines/in.)

I don't really have a good instrument to measure the diameter of the blur circle. So far I have been using a B&L 7X measuring magnifier ("pocket comparator"). But its graticule is black-on-transparent, while the subject print, in the region of interest, is mostly in shades of gray (!), so it is very difficult to read the graticule.

It may be that transilluminating the print would help, but I no longer have a competent light box (a result of "downsizing" of the establishment here), so just now I am holding the entire project up to a window.

I am also going to experiment with the use of a machinist's scale, observed through a conventional loupe.

Of course what would be the best is to use a real optical comparator.

Thanks so much, Bart, for all your good help in this matter.

Best regards,

Doug
 
Hi, Bart,



Thank you so much.

If printing the "Epson" (720 px/in) target with software that works in terms of the size of the print in inches, the proper print size (to more decimal places than the software will probably pay any attention to) is 3.9375" × 3.9375".

I just tested this target on my Epson SP R1900 printer. I must say that, even though the "indication" is scaled up, it is correspondingly more difficult to conclude just where the radial lines are no longer resolvable (among other things, there is a great deal of fine-structure moiré in that neighborhood). There is no free lunch in mensuration!

Hi Doug,

Yes, the print itself is nudging up to (and slightly above) the Nyquist frequency, because a single pixel must be aligned with the sampling/printing grid. The aliasing will suggest detail beyond the Nyquist frequency, but that is just aliasing that manifests itself as lines wider than they actually are in the source.

But the contrast will also be close to zero (because some features are supposed to be printed between two pixel positions, which is impossible, so they are printed in both), which makes it difficult to determine where the blur disc edge really is. As Harry Nyquist implicitly suggested, we need more than 2 pixels to reliably reconstruct one line or pixel.

I could just draw a circle at the Nyquist frequency in the target, to show where the limit should be. That way one wouldn't really have to measure it, but then it won't tell you either what the resolution is when the blur diameter is wider. I suppose I could draw multiple circles to assist with determining the ink diffusion or head misalignment effect on resolution.

Thanks for trying it.

Cheers,
Bart
 

Doug Kerr

Well-known member
Hi, Bart,

I could just draw a circle at the Nyquist frequency in the target, to show where the limit should be. That way one wouldn't really have to measure it, but then it won't tell you either what the resolution is when the blur diameter is wider.
Actually I did that here (on the print, in ink).

I suppose I could draw multiple circles to assist with determining the ink diffusion or head misalignment effect on resolution.

That in fact might be helpful.

Thanks.

Best regards,

Doug
 

Doug Kerr

Well-known member
I find that a very practical way to measure the diameter of the "blur circle" on a test print using Bart's original target (144 radials), to the precision that is meaningful, is to lay a 6" machinist's scale (with decimal markings) across the print and observe the scene with an ordinary Agfa 8X loupe.

The scale I use (a Starrett C303R) has a lovely light satin finish (the same is true of most good scales of this genre).

The minor divisions on its finer decimal scale are at intervals of 0.020", and it can easily be read through the loupe to a precision of 0.010". That is far better than our ability to conclude just where the separate radials are no longer discernible!

Here is a Mitutoyo scale that I suspect is comparable:

http://www.all-spec.com/products/182-203.html?gclid=COHk34-P1MMCFZSPfgodrIQAow

I think the same model is available elsewhere, perhaps at a better price.

Best regards,

Doug
 

Doug Kerr

Well-known member
I am actually quite gratified to learn that for my Epson SP R1900 ink jet printer, the exhibited resolution (based on the star test) is not much lower than the Nyquist limit for its native input resolution (720 px/in).

Best regards,

Doug Kerr
 

Asher Kelman

OPF Owner/Editor-in-Chief
Bart and Doug,

What dpi printer setting do you use to optimally express the resolution in the 720 pixel-per-inch star target? Would it be reasonable to assume that one should go to 1440 dpi, so that the print would be more accurate?


Another clarification: Nyquist and printer dpi?

It occurs to me that one might ask why the printer is able to get practically the specified dpi as the input pixels/inch resolution. After all, the light from the lens is sampled at the sensor, and one only gets something approaching half the maximum detail that the interval of pixel sampling could achieve in pixels/mm. On rethinking, however, the printer doesn't sample the entire ocean of wavelets in going from pixels to dots. Rather, it builds a dot from an array of 4 micron spherical droplets, each attempting to reproduce just one corresponding pixel (except that in fact the inkjet head is able to address all the paper under the print head and keeps passing over it, where fine droplets are built up in a dithered pattern to be equivalent, as a "set" of marks, to one pixel). So there actually is no sampling here that would bring a Nyquist limit into play for the printer resolution, except in terms of degradation of the sensor Nyquist result!

Am I correct in my thinking?

Asher
 

Doug Kerr

Well-known member
Hi, Asher,

Bart and Doug,

What dpi printer setting do you use to optimally express the resolution in the 720 pixel-per-inch star target? Would it be reasonable to assume that one should go to 1440 dpi, so that the print would be more accurate?

Well, the first thing is that I have no way to deal with dots.

In any case, 1440 dpi is not an image resolution setting. It is a description of the dot pitch used by the printer engine and its head as it uses several dots to make a pixel.

My printer (Epson SP R1900), in all combinations of settings (except Draft mode with toilet paper as the medium) reports to Qimage Ultimate that it wants to have social intercourse on the basis of 720 pixels/inch. I assume it uses dots at the pitch of 1440 dots/inch in one direction (and I think 2880 in the other) as its "paintbrush" to paint pixels, 720 of 'em per inch.
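Taking those assumed dot pitches at face value (they are presented above as assumptions, not official Epson specifications), the dot budget per pixel and per ink channel works out as follows:

```python
# Dot positions available per image pixel, taking the assumed figures above
# at face value (720 px/in input, a 1440 x 2880 dpi dot grid); these are
# assumptions from this thread, not official Epson specifications.

input_ppi = 720
dot_pitch_x, dot_pitch_y = 1440, 2880

dots_per_pixel_per_ink = (dot_pitch_x // input_ppi) * (dot_pitch_y // input_ppi)
print("dot positions per pixel, per ink channel:", dots_per_pixel_per_ink)   # 2 * 4 = 8
```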

Another clarification: Nyquist and printer dpi?

It occurs to me that one might ask why the printer is able to get practically the specified dpi as the input pixels/inch resolution.

I will respond with a brief (!) tutorial on the concept of the representation of a continuous phenomenon by sampling, which is heavily involved any time we work with discrete pixels.

Shannon and Nyquist teach us that if we have the "value" of a continuous phenomenon (for example, the color of the image on the focal plane) at intervals whose rate is Fs (the sampling frequency), and if all frequency components that make up the variation of that phenomenon have frequencies less than Fs/2 (which we call Fn, the Nyquist frequency), then that set of values completely describes the entirety of the variation of the phenomenon (that is, the color at every point - not just at every pixel location - on the image on the focal plane).

Woof!

This is at the center of the story, and although it may be hard to grasp, or accept, it is in fact so.

Now from that set of values (in our case, the color representation at every pixel location), how do we fulfill Shannon and Nyquist's promise - how do we get back a complete image, corresponding to the image on the focal plane?

Well, imagine this conceptually (think for the moment in terms of reconstruction on a display, not on a printed page).

Suppose that on the display screen, at every pixel location (think in terms of the center of the pixel) we made a very tiny dot of light whose color was that indicated by the color code for that pixel.

Then we put in front of that an optical low-pass filter (spatial), whose cutoff frequency (spatial) is the Nyquist frequency.

What comes out of that filter (what we see) is an exact reproduction of the variation of color at every point on the original image on the focal plane.

Woof!

This may at first be hard to grasp, or accept, but believe me - it is so.

But how could we make such a spatial low-pass filter? Well, it turns out that if, at each pixel location, the spot of light created from the pixel color code is not of infinitesimal size, but rather is spread out following a certain profile, that has the same effect as having dots of infinitesimal size behind a low-pass filter.

Another woof!

Now it turns out that for this story to work precisely, that "profile" of intensity for each pixel spot would have to go negative at certain ranges of distance from the center, which of course cannot happen, so we could only hope to approximate this scenario. But what is important is not whether we can do it "exactly" but rather for you to understand the underlying principle.

Now let's move to a printer. In the "first blackboard model", the printer would deposit a dot of ink of infinitesimal diameter for each pixel. Well, that has to be a dot whose color is that indicated by the color code for the pixel, so already we have a complication, having to make that "tiny dot" out of maybe six or eight tiny dots.

Now, having done this, we must "view" this array of tiny dots, one for each pixel, at the center of the pixel's real estate, through an optical low-pass filter (spatial), whose cutoff frequency is Fn. Then what we would see is an exact reproduction (larger of course) of the entire image on the focal plane (at every point, not just at the pixel centers).

This is the wonder of what Shannon and Nyquist talk about.

But of course we don't want our viewer to have to look at the print through an optical low-pass filter. And in fact, having such a filter would so spread out the color of each of those infinitesimal dots that the image seen by the viewer would be very feeble.

So instead we do something that parallels what I described above for the display. For each pixel color value, we deposit ink whose profile of intensity is such that it creates the low-pass filter function. It will have to spread over more than the area of that pixel to come close to performing the ideal concept.

So at each exact pixel on the print (to make an easy story), we will have ink that represents the color of that pixel, plus the color of each adjacent pixel scaled by the value of its profile at its distance from the point we are considering, and so forth.

And of course this is what all the computing power inside our printer does, among other things.

Now to circle back to your original pondering:

Suppose that we have a set of data (the image data sent to the printer) that gives the color of the image at every pixel location, suppose at a pitch of 720 per inch.

Then that should be able to "completely describe" the color all over the image insofar as the variations do not contain frequencies at or above the Nyquist frequency.

So in this case, that data would be able to, for example, convey a regular pattern with a frequency of almost 360 cycles/inch. We can think in terms of that having a line frequency of almost 720 lines/inch, or in terms of pixels, of almost 720 pixels/inch.

And the printer should be able to paint that, following the discussion above, painting the color encoded for each pixel as a pattern, centered on that pixel center, with a certain profile.

Now why might we not get very nearly 720 pixels/inch of resolution? Well, because various real things spoil my blackboard story. The printer may not be able to create that ideal profile (and in fact the really ideal profile is impossible to do physically, as it would require negative amounts of ink here and there).
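The picture of infinitesimal dots viewed through an ideal spatial low-pass filter is the Whittaker-Shannon (sinc) interpolation formula. Here is a minimal numeric sketch of it; it is a generic illustration of the sampling theorem, not a model of any real printer:

```python
import math

# A minimal sketch of Whittaker-Shannon reconstruction: samples of a
# band-limited signal, taken at fs samples/inch, recover the signal at
# *every* point when each sample is spread out with a sinc profile (the
# ideal spatial low-pass filter described above). Purely illustrative;
# no real printer uses an ideal sinc "paintbrush".

fs = 720.0                      # samples per inch
f  = 350.0                      # cycles per inch, below the 360 cy/in Nyquist limit

def signal(x):
    return math.sin(2 * math.pi * f * x)

# A finite window of samples around x = 0 (ideally the sum is infinite).
samples = [(n, signal(n / fs)) for n in range(-2000, 2001)]

def reconstruct(x):
    total = 0.0
    for n, s in samples:
        u = fs * x - n
        total += s * (1.0 if u == 0 else math.sin(math.pi * u) / (math.pi * u))
    return total

x = 0.00137                     # an off-grid point, in inches
print("original      %+.6f" % signal(x))
print("reconstructed %+.6f" % reconstruct(x))
# The two agree closely; the small residual comes from truncating the
# (ideally infinite) sample window, not from the sampling theorem itself.
```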

Best regards,

Doug
 

Doug Kerr

Well-known member
Hi, Asher,

Just before I got your most recent note on this topic, I was going to give the "really short answer" to your question:

It occurs to me that one might ask why the printer is able to get practically the specified dpi as the input pixels/inch resolution.

Really short answer:
As Nyquist and Shannon teach us, values of the property of interest (the color of places in the image) at a rate of 720 values per inch are sufficient to exactly describe sinusoidal variations in that property at any frequency less than 360 cycles/inch.

We note that the color variation across the radials of Bart's chart is (at least nominally) sinusoidal.

But, while I'm up:

One is tempted to say, "Why 'less than'? If we have 720 values per inch, is that not enough to describe a sinusoidal variation at exactly 360 cycles per inch?"

The answer is that such a series of values could seem to describe a sinusoidal variation at 360 cycles per inch, but it turns out that it could equally be taken to describe many sinusoidal variations at that frequency, all having different combinations of phase (the spatial alignment of their zero points) and amplitude. So no, we cannot describe a variation at 360 cycles/inch with 720 values per inch.

But if the series of values seems to describe a sinusoidal variation with a frequency of 359 cycles per inch, with a certain phase and amplitude, that is unambiguous except for one thing: it equally describes a sinusoidal variation at a frequency of 361 cycles per inch, with a certain phase and amplitude.

So how do we know which of those it is meant to describe, which of those should we generate in the "reconstruction" process?

The answer is, we "know" there could be no variation in the phenomenon at a frequency of 361 cycles per inch. So the 359 cycles/inch variation implied by the sequence of values must be the intended one.

How do we "know" that?

Well, having chosen a "sampling rate" of 720 samples per inch, we then use a low pass filter at the "originating" end to eliminate any variation with a frequency higher than, for example, 359.5 cycles/inch.

And that is the famous "antialiasing filter".

Suppose we don't bother with that, maybe because we think it causes autism?

Then in fact if our actual image contains a variation at 355 cycles/inch, the reconstructed image will contain variations at both 355 and 365 cycles/inch (the latter one not in fact having been present in the original image). The "beat" between these will cause a "modulation" in the color of the image at a frequency of 10 cycles/inch, the infamous moiré pattern.
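A small numeric check of the ambiguity described above, using the 355/365 cycles-per-inch pair at 720 samples per inch:

```python
import math

# Numeric check of the ambiguity described above: at 720 samples/inch, a
# 355 cy/in cosine and a 365 cy/in cosine (mirror images about the 360 cy/in
# Nyquist frequency) produce identical sample values.

fs = 720.0
f_low, f_high = 355.0, 365.0

max_diff = 0.0
for n in range(2000):
    x = n / fs
    a = math.cos(2 * math.pi * f_low * x)
    b = math.cos(2 * math.pi * f_high * x)
    max_diff = max(max_diff, abs(a - b))

print("largest difference between the two sample sets: %.2e" % max_diff)
# Prints a value on the order of 1e-12 (floating-point rounding only): the
# sample set alone cannot tell the two apart; only the prior knowledge that
# nothing above Nyquist survived the anti-aliasing filter resolves it.
```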

Best regards,

Doug
 
Indeed, and it's not that hard to create a star with more cycles (I have obviously made an automatic procedure for that), but that would potentially also create more issues with ink diffusion near its center. For those who want to try, here are links to a 798 cycles star (only) version to more accurately measure effective resolution:


The calculation then becomes: Cycles = 798 / pi / diameter.
Multiply by 2 (or start with 1596 / pi) for lines instead of line pairs.

I've updated the files with an indicator for the two most likely boundary resolutions, depending on the driver settings. It should now be possible to see, without having to measure, roughly how much resolution is generated between the lower and higher bounds. If one cannot get equal or better results than the lower boundary, i.e. 300/360 PPI, then the driver settings are not optimal (assuming one wants to print images with enough detail).

There is some aliasing visible (manifests itself as hyperbolic divergence of the radials and increasing width), mostly near the horizontal/vertical direction. That's what you get when pushing against the boundaries, and then some. The diagonal directions have some 41% higher resolution in a regular discrete sampling grid, and therefore the aliasing will be negligible.

Cheers,
Bart
 

Doug Kerr

Well-known member
Hi, Bart,

I've updated the files with an indicator for the two most likely boundary resolutions, depending on the driver settings. It should now be possible to see, without having to measure, roughly how much resolution is generated between the lower and higher bounds. If one cannot get equal or better results than the lower boundary, i.e. 300/360 PPI, then the driver settings are not optimal (assuming one wants to print images with enough detail).

Lovely! Thanks so much. I think this will be a good practical help to the users.

There is some aliasing visible (manifests itself as hyperbolic divergence of the radials and increasing width), mostly near the horizontal/vertical direction. That's what you get when pushing against the boundaries, and then some. The diagonal directions have some 41% higher resolution in a regular discrete sampling grid, and therefore the aliasing will be negligible.

Indeed.

Thanks again so much.

Best regards,

Doug
 

Doug Kerr

Well-known member
I thought I would expand a little on the "hyperbolic" false images Bart speaks of.

Suppose we have a pixel grid whose pitch is 720 px/in, and whose Nyquist frequency is thus 360 cy/in.

Suppose we have a pattern whose variation of luminance is sinusoidal with a basic frequency of 350 cy/in. Since the variation is sinusoidal, in fact the only spatial frequency it contains is 350 cy/in.

We "sample" it into our 720 px/in pixel grid. This collection of samples "completely describes" a 350 cy/in sinusoidal variation in luminance, and under the right conditions, when the image is "reconstructed" from the pixel values we will see that variation with a frequency of 350 cy/in.

Now in a later test, we have a pattern whose variation of luminance is sinusoidal with a basic frequency of 380 cy/in. Since the variation is sinusoidal, in fact the only spatial frequency it contains is 380 cy/in.

We "sample" it into our 720 px/pixel grid. This collection of samples "completely describes" a 340 cy/in sinusoidal variation in luminance, not a 380 cy/in variation.

Why? Well because a 720 px/in grid, since it has a Nyquist frequency of 360 cy/in, cannot describe any variation with a frequency of 360 cy/in or higher. If we present it with a pattern of frequency 360+x, it describes a pattern with a frequency of 360-x. You can see why this phenomenon is often described as "foldover distortion": the frequency of the variation described by the set of pixel values has a frequency that "folds over" the Nyquist frequency from the frequency of the actual variation that was sampled.

And of course another name for this phenomenon is "aliasing". An "alias" is a false name under which someone operates. Here, in the set of pixel values, the 380 cy/in pattern travels with the "biometrics" of a variation with a frequency of 340 cy/in, an imposter for the real variation.

And under the right conditions, when the image is "reconstructed" from the pixel values we will see that variation at a frequency of 340 cy/in.

Now let's look at this actually happening. Here is a small region from Bart's latest star test target, at 4x actual size:

Star_100mm_720PPI_798cy-hyperbolas-4X-01.png

The region lies generally along the horizontal axis to the right of center, surrounding the location of Bart's green circle, which indicates just where the frequency of the variation across the radials is 360 cy/in, the Nyquist frequency. (It is labeled "720 PPI", which is the same thing.)

Recall that when I speak of quantities with "inch" in their name, this is based on the frame of reference of this image being given physical existence at a scale of 720 px/in.​

Consider, for example, two or four of the radials nearest the horizontal axis. Follow them from the right edge as they go toward the left, toward the center of the target (not seen here). Note that they get thinner and closer together as we follow them. (Yes, they also suffer some other funny things, because the "reconstruction" of the image is not perfect. Just ignore that.)

As they cross the green line, the pitch between radials is exactly 2 px, which means that the fundamental frequency of the variation there as we traverse across the radials is 360 cy/in, the Nyquist frequency.

Now consider the cluster of radials as we go further to the left. Their pitch is now less than 2 px, their fundamental frequency greater than 360 cy/in, above the Nyquist limit.

Now the pixel values no longer describe the frequency of the actual radial pattern but rather that frequency "folded" about the Nyquist limit. So if the actual pitch between two radials is 1.8 px, so the frequency is 400 cy/in (40 cy/in above the Nyquist frequency), what the pixel values seem to describe is a frequency of 320 cy/in (40 cy/in below the Nyquist frequency). That would mean a pitch between radials of 2.25 px.

Thus the "radials" we see (imposters for the real ones at this point) would have, at that point inside the green circle, a spacing larger than they had as the radials crossed the green circle. As we go further to the left, and the pitch between the actual radials further decreases, the pitch of their "imposters" further increases.

We can easily see this on the figure. As the radials cross the green circle to the left, what we see are no longer converging straight lines, but rather diverging hyperbolic curves.

How about that!
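The 1.8 px / 2.25 px example above can be checked with a few lines of fold-over arithmetic (nothing printer-specific is assumed):

```python
# Fold-over arithmetic for the example above: once the true pitch of the
# radials drops below 2 px (the Nyquist limit of a 720 px/in grid is
# 360 cy/in), the apparent "imposter" pitch grows again.

ppi = 720.0
nyquist = ppi / 2.0                      # 360 cy/in

def imposter_pitch_px(true_pitch_px):
    true_freq = ppi / true_pitch_px      # cycles/inch of the real pattern
    folded = 2 * nyquist - true_freq     # fold about the Nyquist frequency
    return ppi / folded                  # back to a pitch in pixels

print(imposter_pitch_px(1.8))            # 2.25, matching the 2.25 px in the text
```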

Best regards,

Doug
 

Doug Kerr

Well-known member
By the way, when I spoke of the curves that we see for our radials, inside the Nyquist limit circle, as "hyperbolic", that may well be metaphorical. I'm not at all sure that they are actual hyperbolas, although my work to find out analytically has not gone too well!

Bart, you may know whether they are theoretically actual hyperbolas or not.

It looks as if their equation is:

y = 1/(4fn-(1/mx))

where fn is the Nyquist frequency and m is the slope of the midline of the black part of the radial.

Maybe.

Best regards,

Doug
 
By the way, when I spoke of the curves that we see for our radials, inside the Nyquist limit circle, as "hyperbolic", that may well be metaphorical. I'm not at all sure that they are actual hyperbolas, although my work to find out analytically has not gone too well!

Bart, you may know whether they are theoretically actual hyperbolas or not.

Hi Doug,

I don't know if they formally can be defined as hyperbolas, but they sure look like it.

It looks as if their equation is:

y = 1/(4fn-(1/mx))

where fn is the Nyquist frequency and m is the slope of the midline of the black part of the radial.

I haven't tried fitting a curve, after all they are artifacts that have both the wrong phase and frequency.

Cheers,
Bart
 

Doug Kerr

Well-known member
Hi, Bart,

Hi Doug,

I don't know if they formally can be defined as hyperbolas, but they sure look like it.
Indeed.

I haven't tried fitting a curve, after all they are artifacts that have both the wrong phase and frequency.
True, and my simplistic work only treated frequency (as manifested by "period").

And of course it really doesn't matter - just a curiosity!

Thanks.

Best regards,

Doug
 