Researchers at Stanford University have developed an innovative camera sensor that captures depth information with each shot. The multi-aperture system divides a 3-megapixel sensor into 16×16-pixel squares called subarrays, each capturing a slightly overlapping view of the scene; image processing then analyses the differences in pixel positions between subarrays to work out the relative distances of objects in the photo. At present the 3D information is stored as metadata in a normal JPEG.
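The researchers haven't published their matching code, but the underlying idea is classic stereo disparity: the same object lands at slightly different pixel positions in neighbouring subarrays, and that offset shrinks with distance. Here's a minimal Python sketch of that idea, assuming grayscale subarrays and a brute-force sum-of-absolute-differences search; the function name and search range are illustrative, not taken from the Stanford system.

```python
import numpy as np

def estimate_disparity(sub_a, sub_b, max_shift=4):
    """Find the horizontal pixel shift that best aligns two overlapping
    16x16 subarrays; larger shifts correspond to closer objects."""
    best_shift, best_score = 0, float("inf")
    for shift in range(-max_shift, max_shift + 1):
        # Compare only the columns the two subarrays share at this shift.
        if shift >= 0:
            a, b = sub_a[:, shift:], sub_b[:, :sub_b.shape[1] - shift]
        else:
            a, b = sub_a[:, :shift], sub_b[:, -shift:]
        score = np.abs(a.astype(float) - b.astype(float)).mean()
        if score < best_score:
            best_shift, best_score = shift, score
    return best_shift
```

Run over every pair of neighbouring subarrays, this would give a coarse disparity map; relative depth then falls out because disparity scales roughly with one over distance.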
The system could also reduce noise, which particularly plagues the small sensors found in cellphones, and streamline design and manufacture. Each chip could potentially be smaller than existing models, thanks to fast-advancing semiconductor fabrication:
“There is opportunity for most of the complexity of the lens design to sit at the semiconductor rather than at the objective lens. Although the local optics [on the sensor] may be challenging, it is possible that the optics can be better controlled with lithography and semiconductor processes than with the injection molding and grinding that is used in the conventional camera lenses,” says Keith Fife of Stanford University.
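Fife's quote focuses on the optics, but the noise claim above is also worth unpacking. The write-up doesn't say why noise would improve; the plausible mechanism is redundancy, since each scene point is imaged by several overlapping subarrays, and averaging N independent noisy measurements cuts the noise standard deviation by roughly √N. A quick numerical illustration (the numbers are made up):

```python
import numpy as np

rng = np.random.default_rng(0)
# Four overlapping subarrays each observe the same scene point,
# with independent sensor noise of sigma = 10 on a true value of 100.
samples = 100.0 + rng.normal(0, 10, size=(4, 100_000))
print(round(samples[0].std(), 1))            # ~10.0: one subarray alone
print(round(samples.mean(axis=0).std(), 1))  # ~5.0: averaging four halves it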
At present the multi-aperture camera draws ten times the power of a conventional camera, due to the extra processing needed to fathom out the depth data; it also reduces the overall megapixel count, because of the overlapping subarrays. I’m still excited to see this sort of technology hit cellphones, though; it’s almost like micro-geotagging, adding position data not just to the image as a whole (i.e. where you took it) but to where individual things were within the shot.
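To make that micro-geotagging idea concrete: since the Stanford system already stashes its depth data as metadata inside an ordinary JPEG, a downstream app could in principle record where each object sat in the frame. Here's a toy sketch of what such a record might look like, written as a JSON sidecar file rather than real EXIF; the field names and helper function are hypothetical, not part of the Stanford work.

```python
import json

def tag_objects(image_path, objects):
    """Write per-object position data to a JSON sidecar next to the JPEG:
    where each object sits in the frame (x, y in pixels) plus the relative
    depth recovered from the subarray disparity analysis."""
    sidecar = image_path.rsplit(".", 1)[0] + ".depth.json"
    with open(sidecar, "w") as f:
        json.dump({"image": image_path, "objects": objects}, f, indent=2)
    return sidecar

# Hypothetical example: two objects at different relative depths.
tag_objects("holiday.jpg", [
    {"label": "person", "x": 1024, "y": 768, "relative_depth": 0.3},
    {"label": "tree",   "x": 2210, "y": 410, "relative_depth": 0.9},
])
```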