This app lets you control Google Glass with your thoughts (Wired UK)

MindRDR (Image: This Place Ltd)

A London user experience company has launched
the first app that allows people to control Google Glass using only
their thoughts.  

MindRDR
has been launched on GitHub in the hope that the open source tool
will be further investigated and developed, potentially to aid
those who cannot use hand or voice commands to control Google
Glass — Stephen Hawking is apparently “interested” in its
progress.

“The possibilities of Google Glass telekinesis are
vast,” said Chloe Kirton, creative director at This Place, the company behind
it. “In the future, MindRDR could give those with conditions like
locked-in syndrome, severe multiple sclerosis or quadriplegia the
opportunity to interact with the wider world through wearable
technology like Google Glass.”

The system works by combining Neurosky’s EEG biosensor with Glass. Anyone
Anyone who wants to use the app must be wearing the Neurosky headband, as
well as Google Glass — so it’s a bit of a cumbersome and pricey
affair, for now. Electrical signals in the brain picked up by the
sensors are first sent to Glass via Bluetooth. Then the app’s
algorithm seeks out any peaks or troughs in activity. For instance,
when a wearer concentrates in a very focussed way on something,
that might be considered a peak and signify a command for yes. On
the flipside, when the mind relaxes and thinks of nothing (or at
least, relatively less) that might be seen by the app’s algorithm
as a trough, and translated into a negative command. So far, that
has meant This Place has managed to program in functionality for
taking a photo, and then sharing it on social media.
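The peak-and-trough logic described above can be sketched as simple thresholding on an attention signal. This is a hypothetical illustration, not MindRDR's actual code: the function names, the 0-100 attention scale, and the threshold values are all assumptions.

```python
# Hypothetical sketch of classifying EEG "attention" readings into
# yes/no commands. Thresholds and names are assumptions, not MindRDR's
# actual implementation.

PEAK_THRESHOLD = 80    # assumed: focused concentration -> "yes"
TROUGH_THRESHOLD = 20  # assumed: relaxed, mind-at-rest -> "no"


def classify_reading(attention: int) -> str:
    """Map a single 0-100 attention value to a command.

    A peak (high focus) signifies "yes"; a trough (a relaxed mind)
    signifies "no"; anything in between is ignored.
    """
    if attention >= PEAK_THRESHOLD:
        return "yes"
    if attention <= TROUGH_THRESHOLD:
        return "no"
    return "neutral"


def classify_stream(readings):
    """Classify every sample in a stream of attention readings."""
    return [classify_reading(r) for r in readings]
```

In practice a real system would smooth the signal and require the peak or trough to be sustained for some time before firing a command, rather than reacting to a single noisy sample.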

The company, which counts Mandarin Oriental and
Dunhill as clients, began developing the tool 11 months ago,
inspired by its work consulting for the medical sector. “We’d
finished working with Touch Surgery, and had
done a few things in those spaces,” founder
and CEO Dusan Hamlin told Wired.co.uk. The team has high hopes for
the tool, in terms of how it can potentially transform
communications for individuals who cannot use hand and voice
commands. However for now, the aim has been to lay the groundwork.
“With EEG we have to find some focus — it’s such a huge industry.
We asked ourselves, how can we create something that can be
meaningful in terms of the medical sector,” said Hamlin.
Essentially, the team does not want to be in the position of
over-promising. Hamlin tells us he suffers from an autoimmune
disease, and is constantly seeing articles about new technologies
that could drastically help him one day. But he says This Place is
treading carefully so as not to give “false hopes”. “We want to be
careful to be genuine.”

Kirton tells us the team does not have the
expertise to know for sure whether it can help someone with
locked-in syndrome, or other disorders that mar communication. In
Hawking’s case, they admit the tech is far behind what he uses to
communicate now, in terms of functionality. But the team is
incredibly excited about MindRDR’s potential and wants to find a
way to explore these options “sensibly and sensitively”.

That’s partially why today will be the first
time anyone outside of This Place has had access to it (bar a
handful of journalists). The group is not signing deals or
carrying out studies with clients. It is opening the tech up to the
floor, to see what the world can do with it.

This Place has also designed the app to be
technology-agnostic — it’s not at all reliant on Neurosky per
se, and this keeps the prospects wide open.

Right now, they have identified 18 different
kinds of brain waves that could be used as commands. But for the
purposes of the initial demo, just two commands have been focussed
on — ones that essentially signify “yes” and “no”, for the
purposes of the app. Anyone wearing it needs to concentrate in
order to command Glass to take a photo. A display line appears, and
it will move upwards as the wearer concentrates. When it reaches a
bar at the top, you know the photo has been taken. Keep focussing,
and the line will raise up to a second bar to publish the photo on
social media.
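The fixed photo-then-share sequence above can be pictured as a small state machine: concentration raises the display line, and crossing each bar advances a stage. The class name, bar levels, and fill rate below are illustrative assumptions, not details from This Place.

```python
# Illustrative state machine for the two-stage flow described above:
# concentrate to raise the line; the first bar takes the photo, the
# second shares it. All constants here are assumptions.

class MindRdrFlow:
    FIRST_BAR = 50    # assumed level at which the photo is taken
    SECOND_BAR = 100  # assumed level at which the photo is shared

    def __init__(self):
        self.level = 0        # height of the display line
        self.stage = "idle"   # "idle" -> "photo_taken" -> "shared"

    def update(self, concentrating: bool) -> str:
        """Advance one sample: the line rises while the wearer
        concentrates and falls back when they relax."""
        if concentrating:
            self.level += 10  # assumed fill rate per sample
        else:
            self.level = max(0, self.level - 10)

        # The sequence is fixed: photo first, then share.
        if self.stage == "idle" and self.level >= self.FIRST_BAR:
            self.stage = "photo_taken"
        elif self.stage == "photo_taken" and self.level >= self.SECOND_BAR:
            self.stage = "shared"
        return self.stage
```

Note that the stages only ever advance in one order, which mirrors the rigid sequence the article goes on to flag as an obstacle for more nuanced controls.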

People have so far used all kinds of techniques
in order to prompt the command. Some imagine writing, others
driving, or working through mathematical sums — “eventually you find the
sweet spot where it feels so hard,” says Kirton. For Hamlin, it’s
counting back from 20 — “by the time I get to 11, I’m done”.
Having an average conversation with someone won’t be enough to push
the command, he explains. It’s this tipping point, where full
concentration takes over, that drives it.

The team believes it’s achieved its goal — to
deliver a minimum viable product, but a robust one, to the
developer community. But of course, taking “mind-controlled” Glass
from a photo snap, to some more nuanced and complex controls will
be a steep challenge. Right now, it works in a sequence.
Concentrate hard, and you will take a photo, concentrate some more
and you will move on to the next stage — sharing it. There is no
room for deviation, and this will surely be a tricky obstacle in
development.

“Absolutely,” said Kirton. “But when people first
looked at touchscreens, there were new paradigms that had to be
developed. Now we’re looking at how it works with the mind, and
that has to be investigated. Yes right now we’re forcing people to
go through that order, and that’s part of many questions brought
up. In the future of navigating this, is there even an order? Will
there even be a yes and a no? How do you create an interface that
responds to maybe? We’re on the edge of looking at those questions.
But even looking at them is quite exciting.”


Source: wired.co.uk