Seeing depth through a single lens
9 Sep 2013 by Evoluted New Media
Microscopists may soon be able to create a 3D image through a single lens, without moving the camera, thanks to researchers at the Harvard School of Engineering and Applied Sciences (SEAS).
Because it relies only on computation and mathematics, with no specialised hardware or extra lenses, the technique could offer a new and accessible way to create 3D images of translucent biological materials.
Principal investigator Kenneth B. Crozier, John L. Loeb Associate Professor of the Natural Sciences, said: “If you close one eye, depth perception becomes difficult. Your eye can focus on one thing or another, but unless you also move your head from side to side, it’s difficult to gain much sense of objects’ relative distances. If your viewpoint is fixed in one position, as a microscope would be, it’s a challenging problem.”
Crozier and graduate student Antony Orth essentially compute how the image would look if it were taken from a different angle, by relying on clues encoded within the rays of light entering the camera.
“Arriving at each pixel, the light’s coming at a certain angle, and that contains important information. Cameras have been developed with all kinds of new hardware – microlens arrays and absorbing masks – that can record the direction of light, and that allow you to do some very interesting things, such as take a picture and focus it later, or change the perspective view. That’s great, but the question we asked was, can we get some of that functionality with a regular camera, without adding any extra hardware?” explained Crozier.
The team discovered that the key lies in inferring the angle of the light at each pixel, rather than directly measuring it. They took two images from the same camera position but focused at different depths. The tiny differences between the two images provide enough information for a computer to mathematically create a brand-new image as if the camera had moved to one side.
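The two-shot idea can be sketched numerically. The toy code below is not the authors' published algorithm, only an illustrative sketch under simplifying assumptions: it assumes the axial intensity change obeys a continuity relation of the form ∂I/∂z + ∇·(I·M) = 0, that the angular-moment field M is the gradient of a potential U, and that the intensity is smooth enough for a single FFT-based Poisson solve. The function names and the first-order view-synthesis step are invented for this sketch.

```python
import numpy as np

def light_field_moments(i1, i2, dz, eps=1e-6):
    """Toy estimate of the normalised angular-moment field from two images
    of the same scene focused at depths dz apart.

    Assumption: dI/dz + div(I * M) = 0 with M = grad(U), and div(I grad U)
    approximated by mean(I) * laplacian(U), so U comes from one Poisson solve.
    """
    didz = (i2 - i1) / dz                 # finite-difference axial derivative
    ibar = 0.5 * (i1 + i2)                # mid-plane intensity estimate

    h, w = didz.shape
    fy = np.fft.fftfreq(h)                # cycles per pixel along rows
    fx = np.fft.fftfreq(w)                # cycles per pixel along columns
    kx, ky = np.meshgrid(2 * np.pi * fx, 2 * np.pi * fy)
    k2 = kx**2 + ky**2
    k2[0, 0] = 1.0                        # avoid division by zero at DC

    rhs = -didz / (ibar.mean() + eps)     # laplacian(U) = rhs
    u_hat = np.fft.fft2(rhs) / (-k2)      # Fourier-space Poisson solve
    u_hat[0, 0] = 0.0                     # fix the free constant (zero mean)
    u = np.real(np.fft.ifft2(u_hat))

    my, mx = np.gradient(u)               # M = grad(U); axis 0 is y, axis 1 is x
    return mx, my

def synthesize_view(i, mx, my, theta_x, theta_y):
    """First-order re-rendered image for a small virtual viewpoint tilt.

    A small tilt displaces each pixel by roughly theta * M, so to first
    order I_view(x) ~ I(x) - theta * (M . grad I)(x).
    """
    iy, ix = np.gradient(i)
    return i - theta_x * mx * ix - theta_y * my * iy
```

As a usage sketch, one might feed in two focal slices of the same specimen and sweep `theta_x` over a small range to produce the left/right frames of a wobble animation:

```python
y, x = np.mgrid[0:64, 0:64]
i1 = np.exp(-((x - 32)**2 + (y - 30)**2) / 50.0) + 0.1
i2 = np.exp(-((x - 32)**2 + (y - 34)**2) / 50.0) + 0.1
mx, my = light_field_moments(i1, i2, dz=1.0)
frames = [synthesize_view(0.5 * (i1 + i2), mx, my, t, 0.0)
          for t in np.linspace(-0.1, 0.1, 5)]
```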
The team then stitched the two images together into an animation, providing a way for microscopists to create the impression of a stereo image without the need for expensive hardware. Crozier and Orth have named their method “light-field moment imaging.”
Conor L. Evans, an assistant professor at Harvard Medical School and an expert in biomedical imaging who was not involved in the research, said: “As the method can be applied to any image pair, microscopists can readily add this approach to our toolkit. Moreover, as the computational method is relatively straightforward on modern computer hardware, the potential exists for real-time rendering of depth-resolved information, which will be a boon to microscopists who currently have to comb through large data sets to generate similar 3D renders. I look forward to using their method in the future.”