
The cutting edge in Digital Sensors: Adobe side-swiped?

Asher Kelman

OPF Owner/Editor-in-Chief
Adobe and others have long provided software solutions to the parameters of simple digital photography. Now we are seeing hardware that might cause programmers to be repurposed!

We routinely correct limitations in perspective, focus, resolution, and dynamic range (DR), and even change backgrounds. Often, each need has spawned its own specialized software.

The number of competing hardware solutions has increased: dynamic range, resolution, and better color have been obvious problems to tackle. Fuji, with a special array including luminance-only sensels, aimed to deal with DR; one gains a stop or so, which is great for wedding photographers especially.

dynamic_range.gif


Credit-Fuji. The increased dynamic range from the Super CCD SR Pro double-pixel technology (6.17 million S-pixels and 6.17 million R-pixels) became popular with portrait and wedding photographers in the FinePix S3 Pro. The new FinePix S5 Pro continues these advancements by coupling the proven Super CCD SR Pro with the company's new RP (Real Photo) Processor Pro for even more outstanding film-like results. The two technologies are married together in a fully digital and durable metal-alloy body frame.

Now, however, there's practical competition for Fuji's offerings. The first to appear above the horizon comes from a well-known name: the electronics giant Panasonic has shown a new sensor that takes several images and so resolves the shadows and highlights as never before.

Panasonic_hdr_sensor_2.5.2008.PNG

Credit-Panasonic. This image, from a Panasonic paper at the ISSCC chip conference, shows a sensor with a better ability to span bright and dark areas in a photo. It works by combining three shots into a single high-dynamic-range image using a new Panasonic image sensor.
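
For the technically curious, here's a minimal sketch of the underlying idea: merge bracketed exposures, trusting mid-tone pixels most. This is my own illustrative Python, assuming linear frames and known relative exposure times; it is not Panasonic's actual on-chip method.

```python
import numpy as np

def merge_exposures(frames, exposure_times):
    """Merge bracketed shots into one high-dynamic-range radiance map.

    frames: list of float images scaled to [0, 1], identical shape.
    exposure_times: relative exposure time of each frame.
    Clipped shadows and highlights get low weight, so each region of the
    final image is dominated by the exposure that captured it best.
    """
    acc = np.zeros_like(frames[0])
    total = np.zeros_like(frames[0])
    for img, t in zip(frames, exposure_times):
        w = 1.0 - np.abs(2.0 * img - 1.0)   # hat weight: trust mid-tones most
        acc += w * (img / t)                # rescale to scene radiance
        total += w
    return acc / np.maximum(total, 1e-6)

# Example: three shots one stop apart, as in the Panasonic demo
# hdr = merge_exposures([dark, mid, bright], [0.25, 0.5, 1.0])
```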

Recently Adobe surprised us with a bold solution: a multicomponent lens, with software to do this kind of mapping in one camera. This must have been a massive investment.
"Why don't we have a 3D healing brush and, say, get rid of everything behind his head?" says Story. Which Story?

adobe_3D_camera_lens_10.8.2007_270x214.JPG

Dave Story showing the multicomponent lens (Credit: Audioblog.fr)

Dave Story, none other than the vice president of digital imaging product development at Adobe, showed off aspects of how the technology works. First comes a lens which, like an insect's compound eye, transmits several smaller images to the camera. The result is a photograph with multiple sub-views, each taken from a slightly different vantage point at exactly the same time.

He didn't demonstrate that idea, but he showed another application of the 3D technology. "If we know the 3D nature of every pixel, what if we could make a focus brush? What if I had a three-dimensional brush where I could reach into the scene and adjust the focus?"

He then showed what he said this focus brush--along with a corresponding defocus brush--might look like. (To my jaundiced eye he could have just been copying from one focus layer to another, but creating the multiple focal planes from a single image is impressive.)
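To make the focus brush idea concrete, here's a minimal shift-and-add sketch of refocusing from multiple sub-views, the standard light-field trick. The grid layout, the integer-pixel shifts, and the alpha parameter are all simplifying assumptions of mine, not Adobe's actual pipeline:

```python
import numpy as np

def refocus(subviews, alpha):
    """Synthetic refocus by shift-and-add over an array of sub-views.

    subviews: dict mapping aperture grid position (u, v) -> 2D float image,
              all the same shape.
    alpha:    refocus parameter; each sub-view is shifted in proportion to
              its offset from the central viewpoint, then all are averaged.
              Different alpha values bring different depth planes into focus.
    Integer-pixel shifts via np.roll keep the sketch short; a real
    implementation would interpolate sub-pixel shifts.
    """
    us = [u for u, _ in subviews]
    vs = [v for _, v in subviews]
    u0, v0 = sum(us) / len(us), sum(vs) / len(vs)
    acc = np.zeros_like(next(iter(subviews.values())))
    for (u, v), img in subviews.items():
        shift = (int(round(alpha * (v - v0))), int(round(alpha * (u - u0))))
        acc += np.roll(img, shift=shift, axis=(0, 1))
    return acc / len(subviews)

# A "focus brush" could apply this per region, varying alpha locally.
```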


Source


Surprise, surprise! Now a Stanford University group has announced a novel sensor which divides the chip surface into overlapping sets of sensels that in effect form cameras looking at the scene from slightly different angles. With special software, all the component image structures and even textural components are resolved.

multiarray_sensor_350x189.png

Credit-Keith Fife/Stanford University
Source

Such a chip (named "the MA Imager"), unlike the Adobe solution, has an obvious advantage for mass production. With each image component potentially separated into file layers, it becomes easy to automatically replace backgrounds or alter the sharpness or color of each layer.
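
Once each pixel carries a depth value, background replacement really is that mechanical. A minimal sketch, assuming we already have a per-pixel depth map; the threshold and the hard-edged mask are my simplifications:

```python
import numpy as np

def replace_background(image, depth_map, new_background, threshold):
    """Swap everything beyond a depth threshold for a new background.

    image, new_background: (H, W, 3) float arrays of the same shape.
    depth_map: (H, W) array; larger values mean farther from the camera.
    A real pipeline would feather the mask edge; a hard cut keeps it short.
    """
    far = depth_map > threshold          # True where the old background is
    result = image.copy()
    result[far] = new_background[far]    # paste in the replacement scene
    return result
```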

We are now seeing such a rapid rate of advances that what we took for granted as needing significant Photoshop resources and hard-won expertise will soon simply be part of mass-produced digicams from HP and Kodak, in the hands of the kids who already rule the game market. Built-in capabilities will increasingly force software companies to push their own capabilities even further.

Asher
 

Asher Kelman

OPF Owner/Editor-in-Chief
Hi Nikolai,

In principle, yes. Each object is assigned a depth. A host of new applications will be spawned. This will also repurpose the people doing Photoshop and similar software toward optimizing texture, since one could look at the different surfaces from slightly different angles and cheat, so to speak, getting the best look (and the least reflectivity in the highlights).

One can simply add a shot of a forest or mountain and replace the background. Or else one can use this as evidence at a crime scene, since real measurements will be easy with just one shot. I've been itching to build such a camera for some time and never got past drawings and reading research papers. It's such an obvious concept. The full implications and uses, however, are not. My mind can imagine some of the interesting applications with profound implications beyond art. It will, I believe, impact quality control and machine vision, aids for the blind, as well as surgery and more!

Asher
 
Thanks Asher,

this is so.....WOW....

Besides being able to reassemble the 3D image into a normal 2D photograph that we all know and love, the researchers can also figure out distances of objects in the photo.
The chip works by overlapping small 16×16-pixel patches, called subarrays, each hiding under its own lens. After a photo is taken, special software analyzes it, looking for small location differences for the same element in different patches. Based on this information the software is able to extrapolate the distance from one object to another at the time the photo was taken.

The result is a regular photo accompanied by a “depth map” which records how far away each pixel is. Right now there is no specific file format for such an image, but the researchers said that the information can be embedded into a regular JPEG as metadata.

The researchers claim that the technology can also be used to reduce noise in digital photos. Since the same views are captured on different subarrays, they can all be compared to eliminate any unnatural color.
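The geometry behind that distance extrapolation is the classic stereo relation: depth is focal length times baseline divided by disparity. A tiny sketch with made-up numbers; the Stanford group's actual calibration will differ:

```python
def depth_from_disparity(disparity_px, focal_px, baseline_mm):
    """Classic stereo relation: depth = focal_length * baseline / disparity.

    disparity_px: how far (in pixels) the same feature shifts between
                  two subarrays.
    focal_px:     focal length expressed in pixel units.
    baseline_mm:  distance between the two subarray lens centres.
    Returns depth in millimetres.
    """
    if disparity_px <= 0:
        return float("inf")  # zero disparity means the object is at infinity
    return focal_px * baseline_mm / disparity_px

# Example with made-up numbers: a 2 px shift, 500 px focal length, 1 mm baseline
# depth_from_disparity(2, 500, 1.0) -> 250.0 (i.e. 25 cm away)
```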
 

Asher Kelman

OPF Owner/Editor-in-Chief
Thanks Asher,

this is so.....WOW....
Hi Georg,

When you think about it, knocking down a satellite to protect our planet and detecting the minute angles subtended by objects in the image field are all the same thing; just pretty simple geometry.

I've been waiting for smarter sensors, and we're only just beginning.

Granny's camera will have these before Canon! The camera market is interesting in that digicams, because of their massive market, can incorporate the latest technology with the cost spread widely. Assuming that current pro cameras will always boast the latest technology and stay ahead may be intuitive, but that assumption will be tested in the near future.

Asher
 

Asher Kelman

OPF Owner/Editor-in-Chief
Tom,

It might surprise you, or let me remind you, and unwind you! Many older folk rise 3-4 hours before the rest of us.

"Early to be early to rise, makes a man healthy, wealthy and wise"

One man I know:

"Stalks sunsets, and the tides
Saunters sands afore sunrise,
He's beau, wise and sage,
And, unlike the others of his age
He's still got dynamic range!"

Asher
 

Michael Seltzer

New member
Putting aside the issue of early risers (my granny used to always get up before me), isn't this similar to what you can do now with software like PhotoAcute? Being able to do it with a quick single shot has some advantages, but being able to control the number and the variance of the shots also has some advantage.
 

Asher Kelman

OPF Owner/Editor-in-Chief
Putting aside the issue of early risers (my granny used to always get up before me), isn't this similar to what you can do now with software like PhotoAcute? Being able to do it with a quick single shot has some advantages, but being able to control the number and the variance of the shots also has some advantage.
Michael,

That's a very good question. PhotoAcute relies on the fact that the images, because of vibrations and involuntary movements, are in fact taken from slightly different positions and angles, so the pixels will not always align exactly. The software is able to translate these small differences into geometrical parameters and hence assign every point in each frame to a position in a three-dimensional construct.

With PhotoAcute, using a 10MP sensor and 6 frames, there's a huge amount of data to be correlated, but in the end we get greater definition of detail and some three-dimensional mapping. Thus, for example, if a person was walking past a building being photographed, one could wipe them away with PhotoAcute, as they are on a near plane. This works only if there are enough frames to get that information, since the person covers different parts of the building in each successive frame.

With widely separated cameras, each view can "see" behind objects that are not in front of that lens. With enough separation between two lenses, even stationary objects in any plane can be removed, since one of the cameras holds all the data needed for that position in the picture.
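
As a concrete illustration of wiping away the passer-by: once the frames are registered, a per-pixel median across the stack rejects anything that occupies a given pixel in only a minority of frames. A minimal sketch, and of course only my guess at one way to do it, not PhotoAcute's actual algorithm:

```python
import numpy as np

def remove_transients(aligned_frames):
    """Per-pixel median across aligned frames.

    aligned_frames: list of (H, W, 3) float arrays already registered to a
    common reference. A person walking through covers any given pixel in
    only a minority of frames, so the median restores the static scene
    behind them.
    """
    stack = np.stack(aligned_frames, axis=0)
    return np.median(stack, axis=0)
```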

With the new smart chips from Stanford, we are merely reassigning (perhaps only for several shots) enough pixels to make several overlapping arrays, as if there were separate chips, each looking from a different angle. This setup has the advantage of being very, very accurate, but the resolution is less than using the entire output from a sequence of shots with a current digital camera.

I think the future belongs to hardware with reliable geometry and simple interfaces. I'd like to see what the PhotoAcute programmers could do with the new chip!

For now, I'd like to see PhotoAcute capability loaded into the camera with an open-source firmware standard. That would really take full advantage of the advances we are seeing. The idea of closed camera platforms is so archaic, and to me it's a block to the rapid application of new ideas.

Although companies do great things after the fact with software, cameras should be the platforms where these brainiacs make their marks.
 

John_Nevill

New member
Focus your image after the fact!

Californian company Refocus Imaging is claiming to offer new technology to cameras that enables focusing after the picture is taken!

For more info check out their website
 

Asher Kelman

OPF Owner/Editor-in-Chief
Californian company Refocus Imaging is claiming to offer new technology to cameras that enables focusing after the picture is taken!

For more info check out their website

John,

You can watch a well-explained video reporting on Refocus Imaging here. Ren Ng, the inventor, shows pretty impressive pictures to illustrate his work with the new chip!

Unfortunately, dividing the incoming rays among many 16x16 virtual sensors reduces the effective resolution by a factor of 10! Still, it's a very accurate method of determining depth, and I'd imagine new ways to overcome its limitations. For industrial machine vision and military work, where cost is often not such a barrier, a 60 MP sensor would give 6 MP images, which is pretty good. The other likely development is that further software advances would resynthesize a partial 3D surface and then compute one plane from it using better math.
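
The back-of-envelope arithmetic, taking the factor of 10 at face value (a naive 16x16 division would cost far more, so presumably the virtual sensors overlap heavily; that reading is mine, not something from the paper):

```python
# Resolution trade-off using the figures quoted above.
sensor_mp = 60                  # raw sensor resolution, in megapixels
penalty = 10                    # effective cost of the 16x16 virtual split
effective_mp = sensor_mp / penalty
print(f"{sensor_mp} MP sensor -> ~{effective_mp:.0f} MP usable image")  # ~6 MP
```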

Asher
 