Doug Kerr
The New York Times reports that Lytro, Inc. has announced the imminent emergence of a revolutionary new type of camera, the "light field camera".
In our familiar cameras, the suite of rays from the points on the scene that enter via the lens' entrance pupil is manipulated by the lens, based on the focus setting, to produce an in-focus image of the objects at a certain distance, and a "misfocused" image of all the objects at other distances.
Clearly, the information to produce an in-focus image of any object resides in that suite of rays, else we could not focus the camera at different distances while the entrance pupil took in the same suite of rays.
In the light field camera, the suite of rays is (purportedly) captured verbatim in digital form. Then, in postprocessing, we can manipulate that data as the lens would have, at our choice of focus settings, producing an in-focus image for objects at any distance we choose. In fact, it seems almost certain that we could choose to have different portions of the field of view treated as if focused at different distances, producing a composite delivered image with main subject, background objects, and foreground objects all in perfect focus. And we could rework this source data at a later time with different aspirations.
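To make the "manipulate the data as the lens would have" idea concrete, here is a minimal sketch of one standard technique for this kind of synthetic refocusing: treat the captured light field as a grid of sub-aperture images (one image per sample position across the pupil), shift each one in proportion to its offset from the pupil center, and average. This is an illustrative toy, not Lytro's actual processing; the function name, the 4-D array layout, and the `alpha` focus parameter are my own assumptions, and real implementations would use sub-pixel interpolation rather than whole-pixel shifts.

```python
import numpy as np

def refocus(light_field, alpha):
    """Toy shift-and-sum synthetic refocusing (a sketch, not Lytro's code).

    light_field: array of shape (U, V, H, W) -- one H x W sub-aperture
        image for each of U x V sample positions across the entrance pupil.
    alpha: ratio of the virtual focal-plane depth to the captured one;
        alpha = 1.0 reproduces the as-captured focus.
    """
    U, V, H, W = light_field.shape
    cu, cv = (U - 1) / 2.0, (V - 1) / 2.0  # pupil-grid center
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            # Shift each sub-aperture view in proportion to its offset
            # from the pupil center. Integer-pixel shifts for simplicity;
            # real code would interpolate.
            du = int(round((u - cu) * (1.0 - 1.0 / alpha)))
            dv = int(round((v - cv) * (1.0 - 1.0 / alpha)))
            out += np.roll(light_field[u, v], shift=(du, dv), axis=(0, 1))
    return out / (U * V)
```

With alpha = 1 every shift is zero and the views simply average, reproducing the captured focus; other values of alpha slide the views against one another so that objects at a different depth come into registration, which is the postprocessing refocus described above.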
Here is a link to an online piece on the announcement:
http://allthingsd.com/20110621/meet...s-to-sharpen-focus-of-entire-camera-industry/
Here is a link to the doctoral dissertation (Stanford) of Ren Ng, inventor of the scheme and CEO of Lytro, which I expect will give considerable insight into the scheme.
http://www.lytro.com/renng-thesis.pdf
I plan to examine it at length as soon as pressure of other business allows.
It is an exciting prospect.
Best regards,
Doug