
A mystery about lens AF calibration and micro AF adjustment

Doug Kerr

Well-known member
My understanding of the basic principles of the Canon AF system (and of course it changes in many unknown details all the time) is this:

1. The focus error is measured when the process begins and the amount the lens focusing mechanism should have to move to attain ideal focus is reckoned, based on parameters provided by the lens.

2. The lens focusing mechanism is commanded to move that far, and the error at that point is measured.

3. If the error is within some small band (the "tolerance" on final focus error), the job is considered done, and the shutter is allowed to fire.

4. If the error is not within that small band, but within a certain slightly larger band, the amount the lens should have to move to attain ideal focus is reckoned, based on parameters provided by the lens. The lens focusing mechanism is commanded to move that amount, and when that movement is done, the shutter is allowed to fire (the assumption being that, from "not very far from ideal focus", that amount of movement will surely put focus within the acceptable tolerance).

5. If the error is not within this larger error band, the process repeats from step 1.
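The stepwise behavior above can be sketched roughly in code. This is only an illustrative model of the five steps as described, not Canon's actual firmware; the function names and band values are made-up assumptions.

```python
# Hypothetical sketch of the stepwise AF loop described above (steps 1-5).
# The tolerance values and helper names are illustrative assumptions,
# not Canon's actual firmware.

SMALL_BAND = 1.0    # "tolerance" on final focus error (arbitrary units)
LARGER_BAND = 5.0   # the slightly larger band of step 4

def autofocus(measure_error, move_lens):
    """measure_error() returns the current focus error from the AF detector;
    move_lens(amount) commands the lens focusing mechanism."""
    while True:
        error = measure_error()          # step 1: measure the error and
        move_lens(error)                 # step 2: command the reckoned move,
        error = measure_error()          #         then re-measure
        if abs(error) <= SMALL_BAND:     # step 3: within tolerance -> done
            return True
        if abs(error) <= LARGER_BAND:    # step 4: one final open-loop move,
            move_lens(error)             #         then fire without re-checking
            return True
        # step 5: otherwise repeat from step 1
```

Note that step 4 is the "open loop" escape hatch: the final move is commanded but never verified, which matters for the closed-loop question raised below.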

An important complication is that the "ideal focus" as indicated by the focus detector is not necessarily the "best focus" in terms of the image. The reason is the impact of spherical aberration, the lens aberration in which rays from a point on the subject, passing through different parts of the lens aperture, do not converge at the same spot. Thus no lens focus setting will produce "perfect" focus.

Thus we wish to set the lens at the place that will make the image of that point seem "sharpest" to the ultimate viewer (and there are empirical rules that guide us in determining that).

But that "best focus" position is not the same position that would be adjudged "perfect" by the focus detector (which is after all operating on essentially a "rangefinder" basis, and cannot take into account this matter).

Thus, the finding of the AF detector of "how far, if at all, is focus away from 'perfect' " is adjusted by a value given by the lens (the "best focus adjustment", or BFA, value) before being used to judge, for example, whether focus is now ideal.

Now we know by examination of the service manuals for some very old EF lenses that this value may differ from copy to copy of the same lens model. We know this because there are instructions for setting that value on the specific lens copy (on those old lenses, it is set as a two-bit value by placing solder bridges across two pairs of pads on the circuit board in the lens!).
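For illustration, two solder bridges give a two-bit value, i.e. one of four possible BFA settings. The mapping of bridge states to a numeric value below is purely an assumption; the service manuals presumably specify the actual encoding.

```python
# Hypothetical decoding of the two solder-bridge pads described above.
# The bit ordering and value mapping are assumptions for illustration.
def decode_bfa(bridge_a: bool, bridge_b: bool) -> int:
    """Two solder bridges form a 2-bit value: one of four BFA steps."""
    return (int(bridge_a) << 1) | int(bridge_b)   # 0..3
```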

I have no idea what variation in the physical construction of the lens can lead to a significant variation in the BFA value.

Now, a question is:

Considering that the overall system is essentially "closed loop"* (that is, the body ultimately manipulates the lens until the focus error is "negligible"), what property(ies) of an individual copy of the lens is there that can cause the focus result to not be ideal?

* Note that this is not really true when the scenario for an AF job turns out to be (in terms of the steps above) 1-2-4.

This question is related to:

• What property(ies) of an individual lens copy that cause autofocus with that copy to be "not ideal" are adjusted at a service center when we send a lens in for calibration?

• What property(ies) of an individual lens copy that cause autofocus with that copy to be "not ideal" are compensated for by the micro focus adjustment set into a body for that lens?

It might be as simple as the matter of the BFA not being "proper" (at the present time) in the copy of the lens we have in hand.

But other than that, I have trouble coming up with a credible (to me) scenario for this matter.

I would appreciate any input on this matter.

Thanks.

Best regards,

Doug
 

Jerome Marot

Well-known member
The answer must be that the system is not really a closed loop (and it isn't if one does not measure sharpness directly at the detector). On a similar note, some cameras (like my older A900) have a focus adjust. They allowed storing a different value for each lens and, indeed, some lenses would front-focus a bit, others would back-focus a bit, etc. For zooms, the correction often depended on the zoom position (focal length). So basically, it was a different offset for each lens, and even for each focal length as far as zooms were concerned.
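The storage scheme described here (one offset per lens, and per focal length for zooms) can be sketched as a simple lookup table. The lens identifiers, focal lengths, and offset values below are invented examples, not data from any real camera.

```python
# Sketch of per-lens focus micro-adjustment storage, as described above:
# one offset per lens, with a per-focal-length entry for zooms.
# All identifiers and values are made-up examples.

focus_adjust = {
    # (lens_id, focal_length_mm): offset in adjustment steps
    ("50mm_f14", None): +3,      # prime: a single offset
    ("24-70_f28", 24): -2,       # zoom: offset varies with focal length
    ("24-70_f28", 70): +1,
}

def offset_for(lens_id, focal_length=None):
    """Look up the stored offset; fall back to 0 if none was calibrated."""
    return focus_adjust.get((lens_id, focal_length), 0)
```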

As to spherical aberration, some AF systems adjust focus depending on aperture to compensate for it, most notably the Hasselblad H6D discussed in another thread.
 

Doug Kerr

Well-known member
Hi, Jerome,

The answer must be that the system is not really a closed loop (and it isn't if one does not measure sharpness directly at the detector).

An excellent point.

But, if we want to be strict, there is perhaps no physical control system that is truly "closed loop". Often we must rely, for practical purposes, on a sensor that does not truly "measure" the quantity we seek to control, but rather a quantity relatable to it to a better or worse degree.

On a similar note, some cameras (like my older A900) have a focus adjust. They allowed storing a different value for each lens and, indeed, some lenses would front-focus a bit, others would back-focus a bit, etc. For zooms, the correction often depended on the zoom position (focal length). So basically, it was a different offset for each lens, and even for each focal length as far as zooms were concerned.

Sure. Many quite modern cameras have such.

But what property(ies) of the individual lenses, properties that prevent them from inherently participating in an AF system leading to "ideal" focus (within some tolerance we find acceptable), do those "particularization" settings compensate for?

Is it in fact the discrepancy between the observation of the AF detector and the reality of "best focus" on the image? And what lens-copy-dependent phenomenon is the cause of that? We know of the implications of spherical aberration, but are there other phenomena?

Thanks for your inputs.

Best regards,

Doug
 

Jerome Marot

Well-known member
I should confess that I don't really know all details of phase AF systems, so please read the following theories with the necessary disclaimers.

I first thought that the problem, whatever it is, comes from the fact that the sensor is separate from the quantity to be measured: contrast AF systems, which measure at the sensor plane, suffer no miscalibration effects. But then, it occurred to me that focussing on the ground glass does not need lens calibration either. If the ground glass is placed correctly, and the mirror is accurately positioned, one can focus quite precisely on the ground glass (or a Dodin split-image device, etc.). All lenses will work, whatever their construction and tolerances (with the exception of spherical aberration and stopping down, but that is a different thing entirely).
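The point about contrast AF being immune to miscalibration can be made concrete: it maximizes a sharpness metric measured at the sensor plane itself, so the loop is closed around the very quantity we care about. The hill-climbing routine below is a generic sketch of that idea, not any particular camera's algorithm; the step sizes are arbitrary.

```python
# Sketch of why contrast AF is inherently closed-loop: it hill-climbs a
# sharpness metric measured at the image sensor itself, so no per-lens
# calibration offset is needed. Step sizes are illustrative.

def contrast_af(sharpness_at, start, step=1.0, min_step=0.05):
    """Hill-climb the focus position until the sharpness peak is bracketed."""
    pos = start
    while step > min_step:
        if sharpness_at(pos + step) > sharpness_at(pos):
            pos += step                 # keep climbing in this direction
        elif sharpness_at(pos - step) > sharpness_at(pos):
            pos -= step
        else:
            step /= 2                   # peak bracketed: refine the search
    return pos
```

Whatever offsets an individual lens copy may have, they cancel out here because the measurement and the goal coincide, which is exactly what a phase-detect system lacks.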

Then I realised the following: if you look at an explanation of phase AF, you will see images like this one: https://en.wikipedia.org/wiki/Autofocus#/media/File:Autofocus_phase_detection.svg.

On this image, focus is attained when the two signals are coincident. However, in a real AF system, they probably don't do that. If they did, they would not be able to know in which direction to focus when close to the focus point, because they could not then tell the two signals apart (one cannot distinguish case 1 from case 3; in reality the signals are not green and red as in the schematic).

Therefore, probably (conjecture on my part), the system is designed so the two curves never overlap. One simply needs to put the detector largely in front of or behind the locus of focus, and consider that AF is correct when the two signals are at a known distance. One may possibly use relay lenses for doing so.

Now, if you do that, that known distance will depend on lens design, and that is probably where the calibration is necessary. By analogy with the ground glass: if the ground glass were shifted in its position, you could still get perfect focus by observing the image of a point source and making sure it is a disc of a known size, but you would need to know the design of the lens to know what size is correct.

Would that make sense?

Unfortunately, I was not able to find better info on AF systems. I suppose relatively little has been published except in the form of patents. We could of course try to search Google Patents for the original designs, but I am lacking the time at present.
 

Doug Kerr

Well-known member
Hi, Jerome,

I should confess that I don't really know all details of phase AF systems, so please read the following theories with the necessary disclaimers.

Well, there is so much we all don't know, so we all have to proceed on such a basis!

I first thought that the problem, whatever it is, comes from the fact that the sensor is separate from the quantity to be measured: contrast AF systems, which measure at the sensor plane, suffer no miscalibration effects.

Yes, and in that regard one phenomenon that I fairly well understand is that which arises from the matter of spherical aberration (see more about that below).

But then, it occurred to me that focussing on the ground glass does not need lens calibration either. If the ground glass is placed correctly, and the mirror is accurately positioned, one can focus quite precisely on the ground glass (or a Dodin split-image device, etc...).

Well, the story is different between focusing on the ground glass itself and using the "little rangefinder" of a split-image focusing aid.

When focusing on the ground glass proper, we are adjusting the lens until we see the "best image" in much the same way the "best image" would be created on the film or sensor. So, for example, the matter of spherical aberration would perhaps equally visit both situations. And if so, there, I don't know what could cause a lens-dependent discrepancy. (And maybe there isn't any.)

But when using the split-image focusing aid, the situation is quite analogous to the case for an AF detector. The split image helps us get the ideal "geometric" focus adjustment, but spherical aberration effects mean that this is not necessarily what leads to the "best image" on the film or sensor.

So, why in the days before serious AF did we not hear about this problem and hear about the need to have lenses calibrated? Maybe our sensitivity to the matter was just not then so well developed!

All lenses will work, whatever their construction and tolerances (with the exception of spherical aberration and stopping down, but that is a different thing entirely).

Then I realised the following: if you look at an explanation of phase AF, you will see images like this one: https://en.wikipedia.org/wiki/Autofocus#/media/File:Autofocus_phase_detection.svg.

On this image, focus is attained when the two signals are coincident. However, in a real AF system, they probably don't do that. If they did, they would not be able to know in which direction to focus when close to the focus point, because they could not then tell the two signals apart (one cannot distinguish case 1 from case 3; in reality the signals are not green and red as in the schematic).

No, not so. The image represented by the red trace and that represented by the green trace are observed by separate imagers (there are two in an AF detector, fed by separate relay lenses), and so the situations of case 1 and case 3 are easily distinguished.

Therefore, probably (conjecture on my part), the system is designed so the two curves never overlap. One simply needs to put the detector largely in front of or behind the locus of focus, and consider that AF is correct when the two signals are at a known distance. One may possibly use relay lenses for doing so.

No, the detector is at a proxy for the focal plane, and focus is adjudged correct when the situation (as observed on its two imagers) is as in case 2.

Except for the complication introduced by the matter of spherical aberration.

Let me review that here.

Of course the phenomenon of spherical aberration means that the rays from a point on the object do not ever, regardless of the setting of the lens focusing organ, all converge to a point on the film or sensor.

So (putting aside for the moment the whole matter of AF), we cannot say that the best image is that in which the rays from a point on the object converge to a point on the film or sensor. They never do.

So we must adjust the focus until some other criterion is met. It might seem that this would be the situation in which the cross-section of the "beam" is the least. But that does not turn out to be the "best image" to the observer (because of the radial distribution of illuminance - we do not have a sharp-edged "blur figure").

But there are empirical rules that guide us as to what focus situation can be expected to produce the "visually best image".

Now the "geometrically-ideal" focus situation resulting from coincidence of the two sub-images in the AF detector does not yield that "best image" focus situation. So we empirically offset the implication of the AF detector, essentially striving for a focus situation in which the two sub-images are out of coincidence by an empirically-established amount (called, in Canon systems, the "best focus adjustment" (BFA) amount).

Now this value varies with the focus state of the lens (the nominal distance at which the camera is focused), and for a zoom lens, for its focal length setting. And so we can imagine a two-dimensional table in the lens that, for values of those two parameters, reports to the camera the BFA value to be used in adjusting the indication of the AF detector.
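That two-dimensional table can be sketched as follows. Every value here is invented for illustration; a real lens would carry factory-calibrated data, re-populated when the lens is re-calibrated at a service center.

```python
# Sketch of the two-dimensional BFA table described above: the lens
# reports a best-focus-adjustment value indexed by focus distance and
# (for a zoom) focal length. All values are invented for illustration.

FOCAL_LENGTHS = [24, 50, 70]          # mm (zoom positions)
FOCUS_DISTANCES = [0.5, 1.5, 5.0]     # m (nominal focus distances)

BFA_TABLE = [  # rows: focal length; columns: focus distance; units: AF steps
    [+2, +1,  0],
    [+1,  0, -1],
    [ 0, -1, -2],
]

def bfa(focal_length, distance):
    """Nearest-entry lookup; a real system might interpolate instead."""
    i = min(range(len(FOCAL_LENGTHS)),
            key=lambda k: abs(FOCAL_LENGTHS[k] - focal_length))
    j = min(range(len(FOCUS_DISTANCES)),
            key=lambda k: abs(FOCUS_DISTANCES[k] - distance))
    return BFA_TABLE[i][j]
```

The camera would then apply `bfa(...)` as an offset to the AF detector's reading before judging whether focus is "ideal".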

But there is considerable evidence that in fact that table must vary with the lens copy. (I have no idea what real variations in lens construction lead to that need.) And hopefully that table is populated when the individual lens is being manufactured, and no doubt re-populated as the lens is "re-calibrated" at a service center. That may well be the totality of what such lens "re-calibration" does.

Unfortunately, I was not able to find better info on AF systems. I suppose relatively little has been published except in the form of patents. We could of course try to search Google Patents for the original designs, but I am lacking the time at present.

Yes, it is very tough to find any real scoop.

Thanks.

Best regards,

Doug
 

Jerome Marot

Well-known member
No, not so. The image represented by the red trace and that represented by the green trace are observed by separate imagers (there are two in an AF detector, fed by separate relay lenses), and so the situations of case 1 and case 3 are easily distinguished.

No, the detector is at a proxy for the focal plane, and focus is adjudged correct when the situation (as observed on its two imagers) is as in case 2.

I see. So my explanation was in error.


So (putting aside for the moment the whole matter of AF), we cannot say that the best image is that in which the rays from a point on the object converge to a point on the film or sensor. They never do.

So we must adjust the focus until some other criterion is met. It might seem that this would be the situation in which the cross-section of the "beam" is the least. But that does not turn out to be the "best image" to the observer (because of the radial distribution of illuminance - we do not have a sharp-edged "blur figure").

....

Your theory is quite convincing. Further, if one looks at the way the AF system is constructed, it appears to use rays going around the periphery of the lens. Those rays are the ones most affected by spherical aberration.
 

Doug Kerr

Well-known member
Hi, Jerome,

I see. So my explanation was in error.

Your theory is quite convincing. Further, if one looks at the way the AF system is constructed, it appears to use rays going around the periphery of the lens.

Yes. In the classical AF system, the two "sub-images" are formed through small aperture stops that are located near the periphery of the lens' actual aperture (opposite each other). The distance between them is equivalent to the baseline distance of a rangefinder.
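A toy similar-triangles model makes the "baseline" point concrete: the two sub-images shift apart in proportion to the defocus, and a wider baseline gives a larger shift per unit of defocus, hence better precision, just as with a rangefinder. The linear model and its geometry constant below are illustrative assumptions, not any real detector's calibration.

```python
# Toy similar-triangles model of the "rangefinder baseline": the two
# sub-images, formed through stops near opposite edges of the aperture,
# shift apart in proportion to the defocus. The linear model and the
# default geometry constant are illustrative assumptions.

def subimage_shift(defocus_mm, baseline_mm, pupil_distance_mm=50.0):
    """Shift between the two sub-images for a given defocus (toy model)."""
    return defocus_mm * baseline_mm / pupil_distance_mm

def defocus_from_shift(shift_mm, baseline_mm, pupil_distance_mm=50.0):
    """Invert the model: reckon the required lens move from the shift."""
    return shift_mm * pupil_distance_mm / baseline_mm
```

This is the "reckoning" of step 1 of the AF procedure: one measurement of the shift yields, in one go, both the direction and the amount of the commanded lens movement.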

Those rays are the ones most affected by spherical aberration.

A good point.

By the way, the basic reason why a certain AF detector will not work when the lens in place has a maximum aperture smaller than a certain size (AF is of course done with the lens at maximum aperture) is that the "small aperture stops" through which the AF sub-images are formed then lie outside the lens' overall aperture, and so of course are blocked.
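That gate reduces to a simple comparison of f-numbers (smaller f-number means a wider aperture). The f/5.6 rating below is just an example figure for a typical detector, not a statement about any particular body.

```python
# Sketch of the aperture gate described above: an AF detector whose
# sampling stops sit at, say, the f/5.6 zone of the pupil only works if
# the lens's maximum aperture is at least that wide; otherwise the rays
# it needs are vignetted by the lens's own aperture. The f/5.6 rating
# is an example figure.

def af_point_usable(lens_max_fnumber, detector_rated_fnumber=5.6):
    """Smaller f-number = wider aperture: the lens must be at least as
    fast as the detector's rated f-number."""
    return lens_max_fnumber <= detector_rated_fnumber
```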

Another interesting point (related to some thoughts you expressed earlier in this thread) is this.

The two imagers in a given AF detector "regard" the same small part of the scene (different parts for the different AF detectors, those for different "AF points").

Thus, conceptually, at the focal plane for the AF detectors (a proxy for the main focal plane) the two imagers for a given AF detector should be coincident. But that is of course not possible.

So in fact the two imagers for a given AF detector are separated by a substantial distance on the chip that carries all the imagers for all the AF detectors.

But the relay lenses that are part of the optical systems for the two imagers (each imager is in effect part of a tiny camera) in effect have prisms built in, so the two imagers, although in different physical locations, actually occupy the same places in the image spaces of the two "little cameras".

In most designs, a single relay lens may serve one of the two imagers for several AF detectors. Their relative physical locations on the AF imager chip correspond to the relative parts of the scene their AF detectors are intended to observe.

Then a second relay lens serves the "mates" of all those imagers. Again, their relative physical locations on the chip correspond to the relative parts of the scene their AF detectors observe. But, because of the "prisms" in the relay lenses, their absolute locations are offset from their "mates" (since the two imagers for one AF detector cannot be in the same physical locations).

The whole arrangement is very tricky!

Best regards,

Doug
 