Stanford Team Develops 4D Camera for Use in Robots, VR, Autonomous Cars
Stanford researchers have designed a new 4D camera for use in robotics, autonomous vehicles and virtual and augmented reality technologies.
"We want to consider what would be the right camera for a robot that drives or delivers packages by air," said Donald Dansereau, a postdoctoral fellow in electrical engineering, in a report about the new camera. "We're great at making cameras for humans but do robots need to see the way humans do? Probably not."
The camera uses a technique called light field photography, first described by Stanford professors Marc Levoy and Pat Hanrahan in 1996, that captures information about the direction and distance of the light hitting the sensor. Light field photography allows images to be refocused after they are taken and could let robots see through obstructions like rain that might otherwise obscure their vision.
"A 2D photo is like a peephole because you can't move your head around to gain more information about depth, translucency or light scattering," added Dansereau. "Looking through a window, you can move and, as a result, identify features like shape, transparency and shininess."
The camera also takes in an extremely wide field of view, 138 degrees, capturing more than a third of the circle around it with each shot, thanks to a spherical lens. That lens also posed a challenge: channeling the light it gathers onto a flat sensor. Drawing on optics and fabrication expertise from team members at the University of California, San Diego and the algorithmic expertise of Stanford Assistant Professor of Engineering Gordon Wetzstein's lab, the team was able to overcome that problem.
"It's at the core of our field of computational photography," said Wetzstein in a Stanford release about the camera. "It's a convergence of algorithms and optics that's facilitating unprecedented imaging systems."
From a distance, the camera works more or less like a conventional camera, but it is designed to excel at close-up imaging and would be particularly useful, according to the team, in situations such as robots navigating small spaces, self-driving cars or landing drones. It could also improve the rendering of real scenes in virtual and augmented reality systems and help blend real scenes with computer-generated components.
So far the team has created a proof-of-concept camera. Next, they plan to produce a prototype small and light enough for use in a robot, with a version humans can wear to follow shortly after.
"Many research groups are looking at what we can do with light fields but no one has great cameras. We have off-the-shelf cameras that are designed for consumer photography," said Dansereau in a Stanford release. "This is the first example I know of a light field camera built specifically for robotics and augmented reality. I'm stoked to put it into peoples' hands and to see what they can do with it."
For more information about the camera, visit computationalimaging.org.
About the Author
Joshua Bolkan is contributing editor for Campus Technology, THE Journal and STEAM Universe. He can be reached at [email protected].