Purdue Researchers Develop AI to Create 3D Shapes From 2D Images

A new technique that uses the artificial intelligence methods of machine learning and deep learning can create 3-D shapes from 2-D images, such as photographs, and can even generate new, never-before-seen shapes.

Karthik Ramani, Purdue's Donald W. Feddersen Professor of Mechanical Engineering, says that the "magical" capability of AI deep learning is that it is able to learn abstractly.

"If you show it hundreds of thousands of shapes of something such as a car, if you then show it a 2-D image of a car, it can reconstruct that model in 3-D," he says. "It can even take two 2-D images and create a 3-D shape between the two, which we call 'hallucination.'"

When fully developed, this method, called SurfNet, could have significant applications in 3-D search on the Internet, as well as in helping robots and autonomous vehicles better understand their surroundings.

Perhaps most exciting, however, is that the technique could be used to create 3-D content for virtual reality and augmented reality by simply using standard 2-D photos.

"You can imagine a movie camera that is taking pictures in 2-D, but in the virtual reality world everything is appearing magically in 3-D," Ramani says. "Inch-by-inch we are going there, and in the next five years something like this is going to happen.

"Pretty soon we will be at a stage where humans will not be able to differentiate between reality and virtual reality."

"This is very similar to how a camera or scanner uses just three colors, red, green and blue—known as RGB—to create a color image, except we use the XYZ coordinates," he says.

Ramani says this technique also allows for greater accuracy and precision than current 3-D deep learning methods, which operate on volumetric pixels, or voxels.

"We use the surfaces instead since it fully defines the shape. It's kind of an interesting offshoot of this method. Because we are working in the 2-D domain to reconstruct the 3-D structure, instead of doing 1,000 data points like you would otherwise with other emerging methods, we can do 10,000 points. We are more efficient and compact."

One significant outcome of the research concerns robotics, object recognition, and even future self-driving cars: such systems would need to be fitted only with standard 2-D cameras, yet could still understand the 3-D environment around them.

Ramani says that for this research to be developed, more basic research in AI will be needed.

"There's not a box of machine learning algorithms where we can take those and apply them and things work magically," he says. "To move from the flatland to the 3-D world we will need much more basic research. We are pushing, but the mathematics and computational techniques of deep learning are still being invented and largely an unknown area in 3-D."


Copyright © 2017 Prototype Today ®. All rights reserved.
