The resulting wave of technical innovation has put cameras everywhere, from satellites to cellphones. But bigger changes in the technology are yet to come.

Thanks to continued advances in software and processing power, research labs are exploring new ideas about what cameras and photographs can do.

Freezing motion

Cameras use short exposures, software or even moving sensors to correct for camera shake. But they can't remove motion blur – the fuzzy impressions created by objects moving during the exposure.

Prevention: During the exposure, the camera moves quickly to the left before gradually coming to a halt and then accelerating in the opposite direction. The movement ensures that, regardless of its speed, any object moving to either the left or the right is perfectly captured by the camera for a fraction of the exposure.
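
The motion profile is a simple parabola: the camera's velocity sweeps linearly from one extreme to the other, so its position over time is quadratic. Here is a minimal sketch of such a sweep (the exposure length and acceleration are illustrative assumptions, not the researchers' values):

```python
def sensor_position(t, exposure=1.0, accel=4.0):
    """Parabolic sweep for motion-invariant capture.

    Velocity runs linearly from -accel * exposure / 2 (moving fast one
    way) through zero at mid-exposure to +accel * exposure / 2 (moving
    fast the other way), so every object speed within that range is
    matched exactly once during the exposure.
    """
    return 0.5 * accel * (t - exposure / 2.0) ** 2
```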

At other times during the exposure an object is blurred by its motion, and so, in the final photograph, everything appears as a blurry mess. Crucially, though, all objects – whether moving or static during the exposure – are blurred to the same degree, so the final image can be "de-blurred" quickly and easily.
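
Because every object is smeared by the same known kernel, recovering a sharp picture reduces to a single global deconvolution. A minimal sketch using a standard Wiener filter (the kernel handling and noise level here are assumptions for illustration, not the researchers' actual pipeline):

```python
import numpy as np

def wiener_deblur(blurred, kernel, noise_power=1e-3):
    """Deconvolve an image blurred by a known, spatially invariant kernel.

    Assumes the kernel is aligned to the origin for circular
    convolution; real pipelines also handle boundary effects.
    """
    # Zero-pad the kernel to the image size and move to frequency space.
    K = np.fft.fft2(kernel, s=blurred.shape)
    B = np.fft.fft2(blurred)
    # Wiener filter: invert the kernel, damped where its response is weak.
    H = np.conj(K) / (np.abs(K) ** 2 + noise_power)
    return np.real(np.fft.ifft2(B * H))
```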

Goodbye to glare

Although glare caused by bright light sources can be used artistically, it ruins many more photos than it enhances. Glare happens because not all the light that hits a lens is focused onto the camera's sensor: a small fraction is reflected inside the lens, emerging in unpredictable places. If a light source is bright enough, this effect can bleach out parts of an image.

Although rogue light can emerge from any part of a lens, it is always tightly confined to one direction, rather like a laser beam. If light from that direction can be filtered out, the glare disappears.

Prevention: A mask peppered with rows of small holes, each acting like a pinhole camera, is designed to fit between a camera's lens and its image sensor. Each hole captures a tiny circular chunk of the lens's output and focuses it on the sensor.

The image on the camera sensor is composed of hundreds of small dots, a bit like a Pointillist painting.

Because glare emerging from the lens is confined to a well-defined beam, it only shows up in a few of those pinholes, appearing as a tiny bright spot within the corresponding dots and producing a "salt and pepper" effect across the area that would normally be bleached out entirely.

Software can fill in these bright spots using colour from the rest of the pinhole's dot, producing a glare-free image. This method currently limits the final resolution to the number of holes in the mask, but the researchers hope to improve that.
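
In effect the fix is classic outlier rejection: within the cluster of sensor pixels behind one mask hole, a glare-struck sample stands out as abnormally bright and can be replaced by what its neighbours recorded. A rough sketch of that idea (the median-based test and threshold are assumptions, not the published algorithm):

```python
import numpy as np

def remove_glare_spots(pinhole_samples, threshold=3.0):
    """Suppress glare within one pinhole's cluster of sensor pixels.

    pinhole_samples: 1-D array of intensities recorded under a single
    mask hole. Glare shows up as a few abnormally bright samples,
    while the rest of the cluster still sees the true scene colour.
    """
    median = np.median(pinhole_samples)
    spread = np.median(np.abs(pinhole_samples - median)) + 1e-9
    # Flag samples far brighter than the cluster's typical value...
    glare = (pinhole_samples - median) / spread > threshold
    cleaned = pinhole_samples.copy()
    # ...and fill them in from the unaffected samples.
    cleaned[glare] = median
    return cleaned
```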

Pared-down pixels

Digital cameras have been marketed for years using the number of pixels or megapixels on the sensor as a gauge of how much detail they can capture. Researchers at Rice University, though, think megapixel sensors are wasteful, and have developed a one-pixel camera that produces surprisingly good results. The pair points out that most of the millions of pixels a camera records are discarded when the final compressed image is produced. But a mathematical technique known as compressed sensing makes it possible to work backwards, starting with a small sample of information and essentially expanding it into a higher-quality image.
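
That waste is easy to demonstrate: JPEG-style compression keeps only the strongest frequency components of a picture and throws the rest away. A quick illustration (the smooth test image and the 5% figure are illustrative assumptions):

```python
import numpy as np
from scipy.fft import dctn, idctn

# A smooth stand-in for a photographed scene.
t = np.linspace(0.0, np.pi, 64)
image = np.outer(np.sin(3 * t), np.cos(2 * t))

# Keep only the largest 5% of frequency coefficients (roughly what
# JPEG-style compression does) and discard the other 95%.
coeffs = dctn(image, norm="ortho")
cutoff = np.quantile(np.abs(coeffs), 0.95)
coeffs[np.abs(coeffs) < cutoff] = 0.0
approx = idctn(coeffs, norm="ortho")

# Most of the recorded data is gone, yet the picture barely changes.
print("relative error:", np.linalg.norm(approx - image) / np.linalg.norm(image))
```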

The one-pixel camera contains an array of tiny mirrors where the sensor would usually be, each capable of directing light onto the single-pixel sensor. At any one time, the camera tilts a randomly selected half of the mirrors towards the sensor, which records the combined light from that half of the scene as a single pixel value. That process is repeated up to 200,000 times in just a few seconds to produce a dataset that can be extrapolated into a final image with more pixels than were actually captured.

The mathematics behind the technique has improved enough over recent years to make the single-pixel camera "practical and not just a mathematical curiosity", the researchers say. That could also help improve battery performance, because standard image compression is very power-intensive, they add, although the process still needs to become faster.
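
A toy version of the measure-then-reconstruct loop fits in a few lines. The sketch below simulates random half-on mirror patterns and recovers a sparse scene with iterative soft-thresholding, a basic sparse solver standing in for the more advanced mathematics the researchers use (the sizes, step and threshold are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy scene: a 64-sample "image" that is sparse (mostly dark).
n, k, m = 64, 4, 24          # scene size, bright points, measurements
x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = rng.uniform(1.0, 2.0, size=k)

# Each measurement: a random half of the mirrors direct light onto the
# single pixel, which records the sum of those scene values.
Phi = (rng.random((m, n)) < 0.5).astype(float)
y = Phi @ x

# Reconstruct with iterative soft-thresholding (ISTA), working
# backwards from m = 24 readings to all n = 64 scene values.
step = 1.0 / np.linalg.norm(Phi, 2) ** 2
x_hat = np.zeros(n)
for _ in range(5000):
    x_hat = x_hat - step * Phi.T @ (Phi @ x_hat - y)   # gradient step
    x_hat = np.sign(x_hat) * np.maximum(np.abs(x_hat) - step * 0.05, 0.0)

print("relative recovery error:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))
```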

Shooting the invisible

Satellite views of the Earth are often obscured by cloud, but a camera that exploits quantum physics to photograph objects it can't directly see would have no such problems. Researchers at the University of Maryland have built a prototype of just such a camera, relying on quantum effects that can link pairs of photons.

A 'splitter' divides photons from a single light source into two beams, one headed towards the camera sensor and the other towards the object to be photographed.

When a photon bounces off the object, it is recorded by a photon detector beside it. Occasionally the photon detector and the camera record a photon at exactly the same time. Those two photons are linked by a quantum effect called "two-photon interference", and both occupy a similar position in their respective beams.

Whenever a photon reaches the camera at exactly the same time that its linked partner is detected bouncing off the object and onto the photon detector's surface, a point is recorded in the image at the corresponding position. After 1000 or more linked photons reach the camera sensor, an image of the object becomes clear, even though the camera itself has no view of it.
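
The image-forming step amounts to coincidence counting. A minimal sketch (the event format is a hypothetical simplification of the prototype's detection electronics):

```python
import numpy as np

def accumulate_ghost_image(events, shape=(32, 32)):
    """Build up the image from photon coincidence events.

    events: iterable of ((x, y), coincident) pairs giving the beam
    position at which a photon hit the camera sensor, and whether its
    partner was detected bouncing off the object at the same instant.
    """
    image = np.zeros(shape)
    for (x, y), coincident in events:
        if coincident:            # keep only linked photon pairs
            image[y, x] += 1.0    # brighten the corresponding point
    return image
```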

For further details go to the page Future of Photography.
