
Research Enables iPhone Videos to be Merged into 4D Movies

Researchers developed a smartphone-enabled 4D visualization model that enables viewers to see "behind" figures by removing them from the scene altogether.

Photo: Carnegie Mellon University

A team of researchers at Carnegie Mellon University has come up with a way to combine iPhone videos to create 4D visualizations. The results allow the viewer to watch the action from myriad angles, erase people from a scene to see what's behind them, and remove objects that block lines of sight; people or objects can also be added to a scene. To gain that fourth dimension (time), the viewer can freeze time and change the view, freeze the view and move through time, or vary both time and view at once.

The videos can be shot from different vantage points, as might occur at a party or a sporting event, and then merged into a comprehensive model that can reconstruct a dynamic or static 3D scene. The process uses convolutional neural nets (CNNs), a type of deep learning program that has proven useful in analyzing visual data.
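The paper's actual pipeline is learned with CNNs, but the geometric idea behind merging footage from different vantage points can be illustrated with classical multi-view triangulation: once each camera's pose is known, the same point seen in two or more videos pins down a 3D location. The sketch below is a minimal, self-contained illustration of that principle (the camera matrices and the point are synthetic assumptions, not data from the study):

```python
import numpy as np

def triangulate(points_2d, cams):
    """Linear (DLT) triangulation of one 3D point from >= 2 views.

    points_2d: list of (x, y) pixel observations, one per camera
    cams:      list of 3x4 camera projection matrices
    """
    rows = []
    for (x, y), P in zip(points_2d, cams):
        # Each observation contributes two linear constraints on X.
        rows.append(x * P[2] - P[0])
        rows.append(y * P[2] - P[1])
    A = np.asarray(rows)
    # The homogeneous 3D point is the null-space direction of A.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # dehomogenize

# Two synthetic pinhole cameras a meter apart, both looking down +Z.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

# Project a known 3D point into both views, then recover it.
X_true = np.array([0.2, -0.1, 4.0])
obs = []
for P in (P1, P2):
    p = P @ np.append(X_true, 1.0)
    obs.append((p[0] / p[2], p[1] / p[2]))

X_est = triangulate(obs, [P1, P2])
print(np.round(X_est, 6))  # recovers [ 0.2 -0.1  4. ]
```

Dense, dynamic reconstruction of the kind described here goes far beyond this two-camera toy, but every additional smartphone adds more such constraints, which is why the method scales with the number of cameras.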

Although the method can't necessarily capture scenes in full 3D detail, the system can limit playback angles so that incompletely reconstructed areas don't appear and shatter the illusion of 3D imagery.

The 4D visualization method was presented at the Computer Vision and Pattern Recognition (CVPR) virtual conference last month. According to a paper on the project, the approach was validated using 15 smartphone cameras filming dances, martial arts demonstrations and other scenes.

"We are only limited by the number of cameras," with no upper limit on how many video feeds can be used, said Aayush Bansal, a Ph.D. student in CMU's Robotics Institute, in a statement.

"Virtualized reality" is nothing new, Bansal noted. But in the past it has been restricted to studio setups, such as the university's own Panoptic Studio, which has 500-plus video cameras embedded in its geodesic walls. "The point of using iPhones was to show that anyone can use this system," he said. "The world is our studio."

The paper suggested that the functionality had "many potential applications," particularly in the movie industry and the consumer segment, as virtual reality headsets become more common.

The work was supported by the National Science Foundation, Office of Naval Research and Qualcomm.

About the Author

Dian Schaffhauser is a former senior contributing editor for 1105 Media's education publications THE Journal, Campus Technology and Spaces4Learning.
