Editing tool lets you manipulate 3D objects in 2D photos (Wired UK)

3D Object Manipulation in a Single Photograph using Stock 3D Models, by Natasha Kholgade

Ever since snaps of the Loch Ness Monster emerged in the 1930s,
the capabilities of photo editing have boggled people's minds.
Things are about to get even more trippy, however, thanks to a new
tool developed by researchers at Carnegie Mellon University, which
lets editors manipulate individual objects within a shot in 3D.

It is common for objects to be resized or shifted slightly
within the 2D image plane, but the new tool lets people turn or
flip objects, revealing parts of them that were never captured by
the camera. The secret is that the software uses publicly available
3D models of objects to work out how to complete the geometry of
the parts not on show. By studying the structure and symmetry of an
object, the software can fill in the blanks to recreate the object
in its entirety, or at least make a best guess at what it would
look like.
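As a rough illustration of the symmetry idea only, and not the authors' actual pipeline, the hidden side of a bilaterally symmetric object can be guessed by mirroring its visible surface points across a symmetry plane. The short NumPy sketch below assumes the plane is already known; the function name and the toy data are hypothetical.

import numpy as np

def mirror_across_plane(points, plane_point, plane_normal):
    """Reflect visible 3D surface points across a symmetry plane.

    points       : (N, 3) array of points recovered from the visible side
    plane_point  : any point lying on the symmetry plane
    plane_normal : normal vector of the symmetry plane
    Returns the mirrored points, a crude stand-in for the hidden geometry.
    """
    n = plane_normal / np.linalg.norm(plane_normal)
    # Signed distance of each point from the plane ...
    d = (points - plane_point) @ n
    # ... and reflection: move each point twice that distance back through the plane.
    return points - 2.0 * d[:, None] * n

# Toy usage: the visible right half of an object, mirrored to guess the left half.
visible = np.array([[0.2, 0.0, 1.0], [0.3, 0.5, 1.1], [0.25, 1.0, 0.9]])
hidden_guess = mirror_across_plane(visible, plane_point=np.zeros(3),
                                   plane_normal=np.array([1.0, 0.0, 0.0]))
print(hidden_guess)

In the actual system, the stock 3D model supplies far richer structure than a single mirror plane, but the principle of inferring unseen geometry from what is visible is the same.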

“In the real world, we’re used to handling objects — lifting
them, turning them around or knocking them over,” said Natasha
Kholgade of Carnegie Mellon University’s Robotics Institute, lead
author of the paper describing the tool. “We’ve created an
environment that gives you that same freedom when editing a
photo.”

Originally the system was designed to work with digital
imagery, but the researchers have since found that it can also be
used to manipulate objects within historical photos and even
paintings. They have also discovered that the technique can be used
to animate photos, and have created an example in which an origami
bird takes off, turns around and flies off down a corridor.

The main weakness of relying on publicly available models is
that most of them inevitably don’t match the photographed object
exactly; differences often arise from ageing, weathering or
lighting. To address this, the researchers developed a technique
that semi-automatically aligns the model with the geometry of the
object in the photo. This allows the software to estimate the
environmental illumination and reproduce it on the parts of the
object that cannot be seen in the photo.
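The paper's alignment is user-guided; as a loose sketch of just the pose-fitting step, and not the authors' actual algorithm, the Python snippet below uses OpenCV's standard perspective-n-point solver to recover the rotation and translation that line a stock model up with a handful of marked points in the photo. The camera intrinsics, feature points and "true" pose used to synthesise those marks are all hypothetical.

import numpy as np
import cv2

# Six hypothetical 3D feature points on the stock model (box corners here).
model_points = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0],
                         [0, 1, 0], [0, 0, 1], [1, 1, 1]], dtype=np.float64)

# Assumed pinhole camera; the focal length and principal point are guesses.
K = np.array([[800.0, 0.0, 400.0],
              [0.0, 800.0, 300.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)  # no lens distortion assumed

# Stand-in for the user's annotations: synthesised here by projecting the
# model from a "true" pose, which in practice would come from manual marking.
true_rvec = np.array([[0.1], [0.4], [0.05]])
true_tvec = np.array([[0.2], [-0.1], [4.0]])
image_points, _ = cv2.projectPoints(model_points, true_rvec, true_tvec, K, dist)

# Recover the pose that best aligns the model with the marked points.
ok, rvec, tvec = cv2.solvePnP(model_points, image_points, K, dist)
if ok:
    R, _ = cv2.Rodrigues(rvec)  # 3x3 rotation from model to camera frame
    print("Recovered rotation:\n", R)
    print("Recovered translation:", tvec.ravel())

The researchers' system then goes further, using the fitted model to estimate hidden appearance and scene lighting, which is well beyond this toy example.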

While models are not yet available online for every object, the
selection is steadily improving as 3D scanning becomes more
widespread.

The software will be shown off at the Siggraph graphics
conference in Vancouver next week.


5 August 2014 | 5:16 pm – Source: wired.co.uk
