Microsoft turns single-camera smartphone into 3D depth sensor (Wired UK)

Learning to be a depth camera for close-range human capture and interaction

The Kinect may be out of a job now that a Microsoft team has turned
standard smartphone and webcam devices into 3D depth-sensing
cameras with only a few minor hardware adjustments.

Presenting their work at the SIGGRAPH 2014 conference in Vancouver,
the team led by Sean Fanello described a method of removing a
camera's infrared filter (normally used to keep infrared light from
distorting photographs) and attaching a ring of near-infrared LEDs
to a Galaxy Nexus, allowing the phone to record depth information.
Ordinarily, depth sensing requires multiple visual inputs, such as
those used in the Kinect or, more relevantly, the six cameras dotted
around Amazon's in-development smartphone, or indeed Google's own
Project Tango. But the team, working with the Italian Institute of
Technology, has successfully combined cheap LEDs and machine
learning to approximate depth.

“We present a machine learning technique for estimating
absolute, per-pixel depth using any conventional monocular 2D
camera, with minor hardware modifications,” the team wrote in the
white paper accompanying their findings. “Our approach
targets close-range human capture and interaction where dense 3D
estimation of hands and faces is desired.” The team’s focus was
almost entirely on hands and faces, to limit the amount of training
data they had to accrue. After developing the basic framework of the
system, the researchers found they could sense depth and
motion at a rate of 220 frames per second.

Speaking in Vancouver, paper co-author Shahram Izadi said:
“We kind of turned the camera on its head.” Without the benefit of
stereo perspective, the team had to rely on the relative intensity of
reflected infrared light to determine the distance to a point. The
apparent size of the object also fed into the model when teaching the
machine to judge distance from a flat image, as (much like
Father Ted
) the researchers had to train it to distinguish
between a large hand far away and one that is
genuinely up close, but small.
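The intensity cue can be illustrated with a toy inverse-square falloff model. This is a simplification for intuition only: the actual system learns the intensity-to-depth mapping with machine-learned regressors rather than inverting a formula, and the emitter power and surface albedo values below are placeholder assumptions.

```python
import math

def reflected_intensity(depth_m, power=1.0, albedo=0.5):
    """Simulated near-infrared intensity reflected by a surface at
    depth_m metres, under an idealised inverse-square falloff."""
    return power * albedo / (depth_m ** 2)

def estimate_depth(intensity, power=1.0, albedo=0.5):
    """Invert the toy falloff model to recover depth from observed
    per-pixel intensity (assumes known power and albedo)."""
    return math.sqrt(power * albedo / intensity)

# The scale-distance ambiguity the article describes: a hand twice as
# far away reflects only a quarter of the light, and brightness alone
# cannot tell a small near hand from a large far one, which is why the
# learned model also needs shape and size cues.
near = reflected_intensity(0.2)  # hand at 20 cm
far = reflected_intensity(0.4)   # hand at 40 cm
```

In this idealised setting, `estimate_depth` exactly recovers the distance that produced a given intensity; the research problem is that real albedo and geometry are unknown, which the machine-learned regressors absorb from training data.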

The team hopes the technique will at least enable prototyping of
depth-sensing applications in a variety of new ways.
“Whilst this method cannot replace commodity depth sensors
for general use, our hope is that it will enable 3D face and hand
sensing and interactive systems in novel contexts.”


15 August 2014 | 9:00 am – Source: wired.co.uk
