
DoF planning for electronic viewing

Doug Kerr

Well-known member
Most of my photographic images are "delivered" for electronic viewing, as components of blog posts, on forums (such as this one), or embedded in e-mail messages.

I thought it would be interesting to see how we might do "depth of field planning" in that context.

The various parameters I will assume match my common practice. Others may want to follow the template using parameters that fit their practice. Your mileage may vary.

I will assume that the shot will be taken by my new (not yet in hand) Canon PowerShot G16, but we can imagine the use of a G15, for which the pertinent parameters are identical.

Its sensor size is 7.44 x 5.58 mm. Its sensel layout is 4000 x 3000. I assume I will take the maximum resolution output, so the pixel dimensions are also 4000 x 3000. The pixel pitch is thus 0.0186 mm (1.86 µm).

The images I use for electronic viewing are almost always cropped, so for this exercise I will assume a 75% x 75% crop, a cropped image of 3000 x 2250 pixels.

My practice is to deliver the images for electronic viewing with a maximum pixel dimension of 800 px, so in this case the image would be resized to 800 x 600 px.

I will start by considering the use of the human-acuity basis for determining the COCDL for use in depth of field calculation. That is because my concern is with blurring as it might be perceived by the viewer (in a certain assumed viewing situation), not as it might affect preservation of the camera's basic sensor resolution. (The latter approach would be absurd, since that resolution is diluted to 26.7% of its original value by the downsizing.)

I will assume presentation of the 800 x 600 px image on a display with a width of 1680 px and a physical width of 17" (my ViewSonic VX2035WM). Thus the image, on-screen, will have physical dimensions of very nearly 8.00" x 6.00". The on-screen pixel pitch is essentially 0.01".

I will assume viewing at a distance of 24". (In my personal case, that is enforced by how I have my glasses made.)

We will use as our COCDL the "half-cycle" resolution of a human eye with 20/20 vision, namely 1/60 degree (one arc minute). At a distance of 24", that angle subtends a distance of 0.007". That is better resolution than the display supports, so we will shift gears. We will instead use a COCDL that corresponds to one pixel pitch on the display, 0.01".
So in fact we will proceed on the basis of preserving the system resolution - as it exists for the delivered image! Preservers of system resolution, rejoice! Kerr has given up his damned visual acuity premise!
Now, blowing this back to the sensor, we find that COCDL will correspond to 3.75 sensor pixels. That is very nearly 0.007 mm.
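For anyone who wants to retrace the arithmetic, here is a minimal sketch of the chain of numbers above (Python; all values are the ones assumed in this post):

```python
import math

# Camera and delivery parameters assumed above (PowerShot G16/G15).
sensor_width_mm = 7.44
sensor_px = 4000                                  # horizontal sensel count
pixel_pitch_mm = sensor_width_mm / sensor_px      # 0.00186 mm (1.86 um)

crop_px = 3000                                    # 75% horizontal crop
delivered_px = 800                                # delivered image width

# Display and viewing geometry.
display_pitch_in = 17.0 / 1680                    # ~0.0101" per display pixel
viewing_distance_in = 24.0

# Half-cycle resolution of 20/20 vision: one arc minute.
acuity_in = viewing_distance_in * math.tan(math.radians(1 / 60))   # ~0.007"

# The display pitch (~0.01") is coarser than the acuity limit (~0.007"),
# so the display pitch governs. Blow one display pixel back to the sensor:
sensor_px_per_display_px = crop_px / delivered_px          # 3.75
cocdl_mm = sensor_px_per_display_px * pixel_pitch_mm       # ~0.00698 mm

print(f"COCDL on the sensor: {cocdl_mm:.4f} mm")
```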

Now we can do the depth of field calculations. We will assume the following shot parameters:

Focal length: the geometric mean of the camera's minimum and maximum focal lengths (6.1 - 30.5 mm): 13.6 mm. (For reference, the full-frame 35-mm equivalent is 63 mm.)

Focus distance: 10 m

Aperture: f/2.8

COCDL: 0.007 mm (per the above rationale)
Based on these parameters, we calculate the depth of field. We find that the limits of the field are:

Near: 5.86 m

Far: Infinity
So in fact in this case, we could better our position by focusing at the hyperfocal distance, 9.45 m. Then, the limits of the field will be:

Near: 4.73 m

Far: Infinity.
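As a cross-check, the standard thin-lens DoF approximations can be run with these parameters. A minimal sketch follows; note that different calculators use slightly different variants of these formulas, so small discrepancies are to be expected:

```python
def dof_limits(f_mm, N, c_mm, s_mm):
    """Hyperfocal distance and near/far DoF limits (all in mm) for focal
    length f_mm, f-number N, COCDL c_mm, and focus distance s_mm."""
    H = f_mm ** 2 / (N * c_mm) + f_mm                    # hyperfocal distance
    near = s_mm * (H - f_mm) / (H + s_mm - 2 * f_mm)
    far = float("inf") if s_mm >= H else s_mm * (H - f_mm) / (H - s_mm)
    return H, near, far

f, N, c = 13.6, 2.8, 0.007
H, _, _ = dof_limits(f, N, c, 10_000)        # focus at 10 m
print(f"hyperfocal H = {H / 1000:.2f} m")    # ~9.45 m, as above
H, near, far = dof_limits(f, N, c, H)        # focus at the hyperfocal distance
print(f"near = {near / 1000:.2f} m, far = {far}")   # near ~ H/2 ~ 4.73 m, far = inf
```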

How about that!

Guys, you can pre-order your G16 at B&H right now!

Best regards,

Doug
 
Most of my photographic images are "delivered" for electronic viewing, as components of blog posts, on forums (such as this one), or embedded in e-mail messages.

I thought it would be interesting to see how we might do "depth of field planning" in that context.

The various parameters I will assume match my common practice. Others may want to follow the template using parameters that fit their practice. Your mileage may vary.

Hi Doug,

Nice planning! I'm also curious how the G16 will perform, updated DIGIC processor and all.

Just for the fun of it, I ran your parameters through my tool, and obviously came up with very similar results. The difference between both approaches might also be interesting.

Since you will routinely crop your images to 75%, I used a 1/0.75 x larger (10.7" x 8") output image goal (to be cropped later).

[attached image: DPlan-Example001.png]


I will assume presentation of the 800 x 600 px image on a display with a width of 1680 px and a physical width of 17" (my ViewSonic VX2035WM). Thus the image, on-screen, will have physical dimensions of very nearly 8.00" x 6.00". The on-screen pixel pitch is essentially 0.01".

This is a variable that my tool doesn't specifically calculate; it only mentions that the final resampling to the output modality's native PPI still needs to be done. Since display technology uses different resolutions (new displays will increasingly make use of Hi-DPI technology), and other output modalities such as printers require other parameters, that step is left to the user (and to time, which will offer newer output modalities with different pixel resolutions).

That also means that the images can be either printed or displayed, with the same impression of DOF, because the resampling will not change the inherent angular resolution, just how many original camera pixels will be combined.

Now we can do the depth of field calculations. We will assume the following shot parameters[...]
Based on these parameters, we calculate the depth of field. We find that the limits of the field are:

Near: 5.86 m
Far: Infinity

I get slightly different values, 5.74 m - 39 m, due to the built-in tolerance for different output modalities:
[attached image: DPlan-Example002.png]

So in fact in this case, we could better our position by focusing at the hyperfocal distance, 9.45 m. Then, the limits of the field will be:

Near: 4.73 m

Far: Infinity.

Yet, for slightly more flexible output, I'd use slightly different assumptions, but the differences will be almost impossible to see ...

Cheers,
Bart
 

Doug Kerr

Well-known member
Hi, Bart,

Hi Doug,

Nice planning! I'm also curious how the G16 will perform, updated DIGIC processor and all.

Just for the fun of it, I ran your parameters through my tool, and obviously came up with very similar results. The difference between both approaches might also be interesting.

Thanks for doing this.

That also means that the images can be either printed or displayed, with the same impression of DOF, because the resampling will not change the inherent angular resolution, just how many original camera pixels will be combined.

I am bothered by this rationale.

Suppose we have a sensor whose sensel layout is 3000 px/PH, and we take the maximum-resolution output, at 3000 px/PH. We proceed on the basis of the approximation that its resolution will be about 1500 cy/PH.

We will use the "preserve the camera resolution for objects at the distance limits of our field" approach to DoF calculation, and thus adopt, for DoF calculations, a COCDL of 1/3000 PH.

But if the reality is that the image that will be delivered to its only "viewers" is an image downsampled 1:3, then the effective resolution of the system (so far as any recipient of the image is concerned) will be roughly 500 cy/PH. (This is as valid as our treating the resolution of the camera itself as being 1500 cy/PH.)

Now, holding to the concept that our DoF calculation should be based on "conserving", for objects at the DoF distance limits, the resolution of the system as seen for an object "in perfect focus", we ask the question: how would we reckon that for a system resolution of 500 cy/PH (since, to the recipients of the image, it appears to have come from a camera with a resolution of approximately 500 cy/PH)?

The argument that "the resampling will not change the inherent angular resolution, just how many original camera pixels will be combined" does not hold up. When we "combine" camera pixels into delivered image pixels, we are changing the resolution of the system as it pertains to the delivered image. We have "thrown away" the original camera resolution, and choosing a range of object distances (through DoF calculation) that will "conserve" it is meaningless.

Of course, if our outlook is that the basic product of the camera is the image at full camera resolution (even though we may wish to issue "reduced resolution" versions of it into some arenas), and we wish to plan the shot so that we retain for all objects in "the field" that resolution, so that we can then without concern deliver a product at that resolution with no compromise due to imperfect focus of our "principal" objects, then the statement has meaning.

But if our objective for a particular shot is (only) to deliver a "product" at resolution less than the camera resolution, then preserving the camera resolution for all principal objects is a wholly ethereal objective.

Let's look at another situation. We have a camera whose sensel layout is 4000 x 3000. But we have it set to deliver to us an image at 2000 x 1500 pixels. We can then accept the approximation that the resolution of the camera is 750 cy/PH.

Now, if we want to do DoF planning so as to preserve that resolution (the only resolution the camera has now) for objects over the depth of the field, we must use a COCDL of 1/1500 PH in our DoF calculation. It makes no sense to use a COCDL of 1/3000 PH "to preserve the camera's resolution (assumed as 1500 cy/PH)". The camera does not have that resolution (right now).
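To make that bookkeeping concrete: the COCDL, taken as one pixel of the image actually delivered and projected back onto the sensor, simply scales with the delivered pixel height. A small sketch (the 5.58 mm sensor height is from earlier in the thread):

```python
SENSOR_HEIGHT_MM = 5.58   # G16 sensor height

def cocdl_mm(delivered_px_height, used_sensor_height_mm=SENSOR_HEIGHT_MM):
    """One pixel of the delivered image, projected back onto the sensor."""
    return used_sensor_height_mm / delivered_px_height

print(cocdl_mm(3000))               # full-resolution output: 1/3000 PH, ~0.00186 mm
print(cocdl_mm(1500))               # in-camera 2000 x 1500 output:     ~0.00372 mm
print(cocdl_mm(600, 5.58 * 0.75))   # 800 x 600 from a 75% crop: ~0.00698 mm, as in the opening post
```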

The "impression of DoF" (under the "preserve the resolution" outlook) means what objects appear to the viewer to be "as sharp as the object at perfect focus".

Suppose that we reckon the DoF based on the resolution of the expected delivered image (1:3 downsampled from the camera resolution). Then objects at the near and far limits of the calculated field will (under our simplistic view) appear to the viewer as sharp as the object at the ideal focus distance.

But now, we trot out a print made from the original camera image (not downsampled). All the objects on it will look "sharper". And now we can see that the objects at the near and far limits of the field are noticeably less sharp than the object at the distance of ideal focus.

So clearly the downsampling has not left the same impression of depth of field performance.

Thanks again for your demonstrations. I will review your results and see how they fit in with mine.

Best regards,

Doug
 

Doug Kerr

Well-known member
Hi, Bart,

Note that what I said in the previous message, as well as most of our thinking about resolution and COCDL, tacitly assumes a naïve model of the combining of two spatial frequency responses. In the context of our recent discussions, this outlook could be paraphrased as:

"If we have a system with a resolution of 1500 cy/PH (which is equivalent to a blur figure of diameter 1/3000 PH), then blurring of an object from misfocus with a blur figure of diameter not over 1/3000 PH will not degrade the sharpness of the image of that object."​
In fact, if the basic system spread function is Gaussian, and that of the blur figure as well, then in the case above, the presence of the blur would degrade the resolution of the out-of-focus object to 70.7% of the basic system resolution. There is no "harmless" value.

If we wanted the effect of imperfect focus to not cause a decline of over 20% in the resolution enjoyed by the out-of-focus object (below the resolution of the camera proper), then we would need to use a COCDL of 1.33 times the pixel pitch.
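Under this naive model the check is simple: Gaussian blur diameters combine in root-sum-square fashion, so the resolution retained by the out-of-focus object is 1/sqrt(1 + (c/b)^2), where b is the basic system blur diameter and c the misfocus blur diameter. A quick sketch (purely the simplistic Gaussian model described above):

```python
import math

def retained_resolution(c_over_b):
    """Fraction of the basic system resolution retained when a Gaussian
    misfocus blur of diameter c is added to a Gaussian system blur of
    diameter b (blur diameters combine in root-sum-square fashion)."""
    return 1 / math.sqrt(1 + c_over_b ** 2)

print(retained_resolution(1.0))   # c = b: 0.707, the 70.7% figure above
```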

************

I don't mean by the above to suggest that in DoF planning we should aspire to adopt a precise model of the interaction of various point spread functions and the like. To do so would be wholly impractical in a "working photography" context. And the result would be profitless. Our use of DoF planning is to have broad guidelines for the planning of a shot. It is not as if, as I have said elsewhere, we were trying to determine, to a precision of 0.0001", the diameter of a pin that would be a class 4 slip fit into a diamond-shaped hole in a steel plate.

But it is important that we remain mindful of how broad our process is.

And simplifications, if consistently employed, can still help us recognize what notions might be meaningful and what ones not so much.

Best regards,

Doug
 
I am bothered by this rationale.

Hi Doug,

Always useful for a discussion, different viewpoints on the same subject that is, not the being bothered bit.

But if the reality is that the image that will be delivered to its only "viewers" is an image downsampled 1:3, then the effective resolution of the system (so far as any recipient of the image is concerned) will be roughly 500 cy/PH. (This is as valid as our treating the resolution of the camera itself as being 1500 cy/PH.)

Ah, and here we have the different viewpoint. Your assumption of reduced resolution is only valid if the resampled image is shown on the same output modality. However, this is often not the case when different devices are used. Downsampling to 1/3rd the size and then displaying on a device with much higher resolution than the limited display will give 3x the resolution of the original size on the low-res display (if not limited by DOF)!

Common monitor displays today have native resolutions of around 100 PPI. An Epson desktop (or larger) printer can reach 720 PPI output resolution, and the difference between 360 and 720 PPI is just visible. Added to this is the greater potential for deconvolution sharpening, where some potential artifacts from that sharpening will be too small to notice at that size, but the entire MTF curve will be boosted. But that's for another discussion, about print quality.

The argument that "the resampling will not change the inherent angular resolution, just how many original camera pixels will be combined" does not hold up. When we "combine" camera pixels into delivered image pixels, we are changing the resolution of the system as it pertains to the delivered image. We have "thrown away" the original camera resolution, and choosing a range of object distances (through DoF calculation) that will "conserve" it is meaningless.

Maybe, in view of the above, there is more to it than you assumed based on another viewpoint.

The viewing conditions you originally stated, 'normal' visual acuity and 24 inches viewing distance, produce a line resolution of 143.2 PPI that needs to be satisfied in output. When the printer interpolates the 4000 x 3000 pixels to 7680 x 5760 pixels (10.67 x 8 in @ 720 PPI, uncropped), it will indeed not add any resolution (going from 375 PPI to 720 PPI requires upsampling of the original data), but it won't lose any of the original captured resolution either. The larger pixel matrix will be printed much denser (@ 720 PPI) than was possible on the display, but it will still hold all the COCDL-blurred resolution that the camera had to offer, at the same angular resolution as the display (although that is limited further by the printer's dot pattern), and the creative impression of sharpness fall-off with distance is the same.

Before elaborating on your alternative scenario, I'll wait and see if we can agree on this point of view first.

Cheers,
Bart
 

Doug Kerr

Well-known member
Hi, Bart,

I get slightly different values, 5.74 m - 39 m, due to the built-in tolerance for different output modalities:
[attached image: DPlan-Example002.png]

Well, I used a COCDL of 0.007 mm, whereas this uses a more stringent COCDL of 0.004869 mm, so we might expect different DoF results.

My COCDL was one half cycle at the resolution of the "product" (that is, as simplistically inferred from the pixel density).

Best regards,

Doug
 
But it is important that we remain mindful of how broad our process is.

And simplifications, if consistently employed, can still help us recognize what notions might be meaningful and what ones not so much.

Hi Doug, I fully agree.

It's the simplification that allows us to reach insight much quicker, as long as the simplification isn't exaggerated to the point of leading to wrong conclusions.

Cheers,
Bart
 
Well, I used a COCDL of 0.007 mm, whereas this uses a more stringent COCDL of 0.004869 mm, so we might expect different DoF results.

Hi Doug,

The COCDL in my tool is just the product of the other settings in the first section, perhaps adjusted by 'absolute diffraction blur', which wouldn't happen on the G16 until stopped down to f/7.1, which it probably can't. That's why it's the last field in the settings: it's more for informational purposes, but the other settings determine what it will turn out to be. It's more output of the exercise than input.

Cheers,
Bart


P.S. I've added the G16 to the camera database for my DOF planning tool, because I have a feeling it might be useful for future discussions ... ;)
 

Doug Kerr

Well-known member
Hi, Bart,


The COCDL in my tool is just the product of the other settings in the first section, perhaps adjusted by 'absolute diffraction blur', which wouldn't happen on the G16 until stopped down to f/7.1, which it probably can't.

Well, it can close down to f/8!

That's why it's the last field in the set: it's more for informational purposes, but the other settings determine what it will turn out to be. It's more output of the exercise than input.

I have to learn my way through the "flow diagram" of your planner!

Best regards,

Doug
 
Well, it can close down to f/8!

Hi Doug,

Nice, you can then give empirical proof of diffraction theory... ;)

I have to learn my way through the "flow diagram" of your planner!

I understand. I'm sometimes a bit of a lateral (out-of-the-box) thinker. My flow for input starts with the output criteria for print and display, then uses the camera settings in the order in which they mostly determine the values of later settings, but with the possibility to jump forward and back again.

Cheers,
Bart
 

Doug Kerr

Well-known member
Hi, Bart,

The COCDL in my tool is just the product of the other settings in the first section, perhaps adjusted by 'absolute diffraction blur', which wouldn't happen on the G16 until stopped down to f/7.1, which it probably can't. That's why it's the last field in the settings: it's more for informational purposes, but the other settings determine what it will turn out to be. It's more output of the exercise than input.

Well, the user may not (in general) set it directly, but it is not only a "for your information" output; it must be a parameter to the calculation of the field limits.

Could you give me a concise summary of how the COCDL is determined? That will make many other things clear.

Based on the $25.00 reverse engineering job (7 minutes), it smells like it is basically just the horizontal sensel pitch.

Your note says:

The Circle of Confusion diameter will be automatically updated based on your output quality requirements, as specified in section 1, and by changing aperture settings in section 2 when diffraction will limit resolution.
But I can't ever see that happening. If the horizontal sensel pitch is 0.01 mm, then regardless of the "quality" setting, and regardless of the aperture setting, the COCDL shows to be 0.010 mm.

I note that for an aperture that produces a diffraction blur diameter of about 1.67 times the COCDL, we receive this note:

(diffraction blur MTF@Nyquist = xx.x%)

I think this means that the resolution implied by the expected diffraction blur is xx.x% of the resolution corresponding to the Nyquist frequency. (I think the wording of the note is not the best, but . . .) When that declines to 60% (perhaps a good arbitrary criterion for when diffraction has intruded significantly) we get the note.
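For orientation, the textbook diffraction-limited MTF of an ideal circular aperture can be evaluated at the sensor's Nyquist frequency; whether the tool uses exactly this formula is not confirmed, so treat the sketch below as illustrative only:

```python
import math

def diffraction_mtf(nu_cy_per_mm, N, wavelength_mm=0.000564):
    """Diffraction-limited MTF of an ideal circular aperture at spatial
    frequency nu, for f-number N; zero at the cutoff 1 / (wavelength * N)."""
    s = nu_cy_per_mm * wavelength_mm * N     # frequency normalized to cutoff
    if s >= 1.0:
        return 0.0
    return (2 / math.pi) * (math.acos(s) - s * math.sqrt(1 - s * s))

nyquist = 1 / (2 * 0.00186)                  # ~269 cy/mm for a 1.86 um pitch
for N in (2.8, 4.0, 5.6, 8.0):
    print(f"f/{N}: diffraction MTF at Nyquist = {diffraction_mtf(nyquist, N):.1%}")
```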

Best regards,

Doug
 

Doug Kerr

Well-known member
Hi, Bart,

I do note that if we change our "quality" aspiration, it will change the recommendation as to maximum output size for a given viewing distance. That of course makes sense.

The recommendation of maximum output size (for "Normal" vision) results in one sensel's worth of the image subtending 1/60° (one arc minute) for the viewer. (One pixel's worth, if we are working with the maximum-resolution camera output.)

(I hate to keep mentioning that, but such is not automatically the case.)
That fits in precisely with what we have been discussing.

By the way, when I change the viewing distance, the maximum output size does not always update. (I can force it by changing the "quality" and then changing it back.)

Of course the maximum size recommendation has nothing to do with depth of field matters, which is a separate branch of the planning.

Your outlook is that visual acuity is not involved in the choice of a COCDL, so indeed (notwithstanding the note) we should not expect a change in "quality" to affect the COCDL. So there is nothing inconsistent about the COCDL remaining at the sensel pitch. (The note is just wrong.)

On the other hand, it would be reasonable to not choose a COCDL that was smaller than the projected diameter of the blur figure from diffraction (and the note suggests that). But in fact that is not enforced by the program upon the COCDL. (Here the note is probably "right", and the program wrong.)

In the other arena in which I am these days doing a lot of programming (fixing an application that is full of bugs), we say that the refrain to our marching song is, "It's always something".

Best regards,

Doug
 
Hi, Bart,



Well, the user may not (in general) set it directly, but it is not only a "for your information" output; it must be a parameter to the calculation of the field limits.

Could you give me a concise summary of how the COCDL is determined? That will make many other things clear.

Based on the $25.00 reverse engineering job (7 minutes), it smells like it is basically just the horizontal sensel pitch.

Sure Doug, here it is:

It is initially set to:

Sensor Width in mm / (2 * Output Cymm * Output Width in mm)
or
Sensor Height in mm / (2 * Output Cymm * Output Height in mm)

Depending on context, the smaller of the two is used, e.g. to trigger the diffraction-only induced MTF reduction comment, since the smaller one is affected more. This is done to allow better judgement of the effects on non-square sensel arrangements.
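In code form, that initial setting could be sketched as follows (the function name is illustrative, not the tool's; the example numbers are the 143.2 PPI requirement and the 10.67 x 8 in output goal from earlier in the thread):

```python
MM_PER_INCH = 25.4

def initial_cocdl_mm(sensor_mm, output_mm, output_cy_per_mm):
    """Half a cycle of the required output resolution, projected back from
    the output dimension onto the corresponding sensor dimension."""
    return sensor_mm / (2 * output_cy_per_mm * output_mm)

cy_per_mm = (143.2 / 2) / MM_PER_INCH    # 143.2 PPI of lines = ~2.82 cy/mm
c_w = initial_cocdl_mm(7.44, 10.67 * MM_PER_INCH, cy_per_mm)   # width-based
c_h = initial_cocdl_mm(5.58, 8.00 * MM_PER_INCH, cy_per_mm)    # height-based
print(min(c_w, c_h))   # ~0.00487 mm, matching the 0.004869 mm shown above
```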

Your note says:

The Circle of Confusion diameter will be automatically updated based on your output quality requirements, as specified in section 1, and by changing aperture settings in section 2 when diffraction will limit resolution.
But I can't ever see that happening. If the horizontal sensel pitch is 0.01 mm, then regardless of the "quality" setting, and regardless of the aperture setting, the COCDL shows to be 0.010 mm.

And then, try increasing the output size at item 1.4), when you want to produce that billboard-sized output (the G16 can produce that too; it's just a matter of viewing distance). Currently it will drive the COCDL to smaller than the sensel pitch, but the red warning is an indicator of erroneous input. I'll have to verify whether that also means it is ignored as input; I'm not sure how I handled that, but it's on my list of todo items anyway.

Decreasing the output size will increase the COC diameter, since we can now relax the sensel sized restrictions because the image will be downsampled to achieve the output size.

I note that for an aperture that produces a diffraction blur diameter of about 1.67 times the COCDL, we receive this note:

(diffraction blur MTF@Nyquist = xx.x%)

I think this means that the resolution implied by the expected diffraction blur is xx.x% of the resolution corresponding to the Nyquist frequency. (I think the wording of the note is not the best, but . . .) When that declines to 60% (perhaps a good arbitrary criterion for when diffraction has intruded significantly) we get the note.

The logic is the following. From my experience, when the diffraction pattern diameter becomes 1.5x the sensel pitch (or larger) I start seeing a visible degradation of micro-detail contrast. This is purely empirical observation, not assumed from the following.

Intuitively that makes sense, because at 1.4142 x the sensel pitch the diffraction pattern diameter only affects at most the 4 hor/ver neighboring sensels, but above that more (also diagonal = Green luminance) neighbors get contrast-reducing blur added. Of course the diffraction pattern has an Airy 'disk' pattern distribution, which is not of uniform signal strength across its diameter, but it also doesn't stop at the first zero-signal diameter.

We actually also have to consider different wavelengths, but the common Luminosity weighting could be used to determine a weighted average of 564nm for full spectrum image content. Luminance is also used by Raw converters to compensate for the lower sampling density of Red and Blue.

This hit on MTF is not handled as a reduction of resolution per se, but a warning comment is shown because it will visibly reduce micro-detail contrast. Yet, very high contrast input can still somewhat survive for later deconvolution repair.

When we go for even narrower apertures, e.g. as was/is used by large format film shooters (going by the creed of: f/64 and be there), then there will be a point where no possible resolution is left above a certain spatial frequency. This is where I also start increasing the COCDL because of the de facto zero resolution by absolute diffraction limiting (MTF=0).
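Numerically, the thresholds involved can be sketched as follows (a reconstruction under stated assumptions; the tool may use different constants). The first-zero diameter of the Airy pattern is 2.44 λN, and diffraction removes all contrast (MTF = 0) above the cutoff frequency 1/(λN); absolute diffraction limiting reaches the pixel level when that cutoff falls to the sensor's Nyquist frequency 1/(2p):

```python
WAVELENGTH_MM = 0.000564   # luminosity-weighted average, per the text
PITCH_MM = 0.00186         # G16 sensel pitch

def airy_diameter_mm(N):
    """First-zero diameter of the Airy pattern for f-number N."""
    return 2.44 * WAVELENGTH_MM * N

print(airy_diameter_mm(8.0))   # ~0.011 mm at f/8, roughly 6 sensel pitches

# f-number at which the diffraction cutoff 1/(lambda*N) reaches the Nyquist
# frequency 1/(2p); the exact value depends on the wavelength assumed, so
# this lands in the ballpark of the f/7.1 mentioned earlier in the thread:
N_limit = 2 * PITCH_MM / WAVELENGTH_MM
print(f"cutoff reaches Nyquist near f/{N_limit:.1f}")   # ~f/6.6
```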

This paper explains some of these concepts a bit better.

In addition to these diffraction related issues, I also scale the effects by image magnification due to closer than infinity focusing, and the Pupil Factor where applicable.

These things will be handled in the tool's thread as they are required for further explanation.

I am working on implementing an even more restrictive COCDL limiting model that also adds the interaction between diffraction and defocus blur MTFs, which unfortunately is not a simple multiplication of the two MTFs.

Cheers,
Bart
 

Doug Kerr

Well-known member
Hi, Bart,

Sure Doug, here it is:

It is initially set to:

Sensor Width in mm / (2 * Output Cymm * Output Width in mm)
or
Sensor Height in mm / (2 * Output Cymm * Output Height in mm)

Depending on context, the smaller of the two is used, e.g. to trigger the diffraction-only induced MTF reduction comment, since the smaller one is affected more. This is done to allow better judgement of the effects on non-square sensel arrangements.

Ah, yes. That will turn out to be one of those sensel pitches!


And then, try increasing the output size at item 1.4), when you want to produce that billboard-sized output (the G16 can produce that too; it's just a matter of viewing distance).

Ah, yes, I see that at work. And then the "quality" enters the picture.

So essentially here we recognize that visual acuity may become controlling. Very clever!

Currently it will drive the COCDL to smaller than the sensel pitch, but the red warning is an indicator of erroneous input.

Ah, yes, now I see that.

I'll have to verify whether that also means it is ignored as input; I'm not sure how I handled that, but it's on my list of todo items anyway.

Decreasing the output size will increase the COC diameter, since we can now relax the sensel sized restrictions because the image will be downsampled to achieve the output size.

Ah, I see. Makes sense.

The logic is the following. From my experience, when the diffraction pattern diameter becomes 1.5x the sensel pitch (or larger) I start seeing a visible degradation of micro-detail contrast. This is purely empirical observation, not assumed from the following.

Sure.

Intuitively that makes sense, because at 1.4142 x the sensel pitch the diffraction pattern diameter only affects at most the 4 hor/ver neighboring sensels, but above that more (also diagonal = Green luminance) neighbors get contrast-reducing blur added. Of course the diffraction pattern has an Airy 'disk' pattern distribution, which is not of uniform signal strength across its diameter, but it also doesn't stop at the first zero-signal diameter.

Makes sense.

We actually also have to consider different wavelengths, but the common Luminosity weighting could be used to determine a weighted average of 564nm for full spectrum image content. Luminance is also used by Raw converters to compensate for the lower sampling density of Red and Blue.

This hit on MTF is not handled as a reduction of resolution per se, but a warning comment is shown because it will visibly reduce micro-detail contrast. Yet, very high contrast input can still somewhat survive for later deconvolution repair.

Makes sense.

When we go for even narrower apertures, e.g. as was/is used by large format film shooters (going by the creed of: f/64 and be there), then there will be a point where no possible resolution is left above a certain spatial frequency. This is where I also start increasing the COCDL because of the de facto zero resolution by absolute diffraction limiting (MTF=0).

Sure

This paper explains some of these concepts a bit better.

I'll take a look. Thanks.

In addition to these diffraction related issues, I also scale the effects by image magnification due to closer than infinity focusing, and the Pupil Factor where applicable.

These things will be handled in the tool's thread as they are required for further explanation.

I am working on implementing an even more restrictive COCDL limiting model that also adds the interaction between diffraction and defocus blur MTFs, which unfortunately is not a simple multiplication of the two MTFs.

Indeed.

Bart, this is a very sophisticated, well-thought-out tool. My congratulations.

Thanks for helping me grasp what is going on here.

Best regards,

Doug
 

Doug Kerr

Well-known member
Hi, Bart,

This paper explains some of these concepts a bit better.

Very nice paper. Thanks.

Where the author says:

The magnitude squared of the diffracted electric-field amplitude E in V/cm gives the irradiance profile of the impulse response in W/cm². . .
he has actually omitted the constant 754 ohms (the so-called "impedance of free space", 377 ohms, times 2 to take care of the square of the ratio between amplitude and RMS of the electric field), but that doesn't really affect where he is going.

(He may have not meant amplitude of E, but rather the RMS of E, in which case all that is missing is 377 ohms.)
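In symbols (a standard plane-wave result, stated here for completeness):

$$I \;=\; \frac{E_0^{2}}{2 Z_0} \;=\; \frac{E_{\mathrm{rms}}^{2}}{Z_0}, \qquad Z_0 \approx 377\ \Omega, \quad 2 Z_0 \approx 754\ \Omega,$$

where \(E_0\) is the field amplitude: hence 754 ohms is the missing constant if E is an amplitude, and 377 ohms if it is an RMS value.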

Thanks.

Best regards,

Doug
 
Bart, this is a very sophisticated, well-thought-out tool. My congratulations.

Thanks for helping me grasp what is going on here.

Hi Doug,

You're welcome. I knew you'd appreciate some of the refinements involved. Of course, that can for many scenarios amount to minuscule differences, lost in rounding for display, but I attempted to have a universal tool for all sorts of applications, from recording macroscopic detail to gigantic output sizes, and anything in between.

Thanks for your kind words,

Bart
 