The interfaces of the future are almost always depicted as being devoid of peripherals, leaving nothing between humans and the screens they're interacting with. The most iconic of these is from Minority Report, where we see Tom Cruise wave his hands around to manipulate data, the motions so intuitive that many people left the theater wondering how long it would take for that technology to reach their homes. It didn't take long to be developed, but despite people's excitement about the potential future of interface technology you'd be hard pressed to find it anywhere, let alone in anyone's house.
There's a company out there trying to change that, called Leap Motion, and their new product has a distinctly Minority Report feel to it:
Now the Leap Motion controller isn't anything revolutionary from a technological point of view. It's fundamentally the same as a Kinect (which is itself based on PrimeSense technology); however, rather than doing whole-body detection over a wide area, the Leap Motion controller has been designed to recognize finer-grained motion in a much smaller one. So instead of being aimed at the gaming market, Leap Motion is positioning itself as an alternative interface to the traditional desktop PC, one that has the potential to replace many of the capabilities of the current standard interface peripherals (and even some of the non-standard ones). However there are some fundamental issues with it that will likely impede its adoption, and they're not exactly unique to the Leap Motion idea.
The Gorilla Arm effect is a well known phenomenon in ergonomics whereby any interface that requires someone to hold their arm out and make fine motions leaves the user's arm feeling tired and sore in short order. It was first encountered when touchscreens were developed which, at the time, were thought to be the next big revolution in interface design. Now whilst touchscreens are a big part of the world today they're used much more like traditional peripherals (i.e. they don't require you to hold your arms up) and not in the way in which the Leap Motion demonstrates much of its functionality.
Now the argument can be made that the Leap Motion controller can provide a lot of additional functionality without invoking the Gorilla Arm effect, as there have been musings that it could replace your keyboard and, by logical extension, your mouse as well. The trouble with that, however, is that such interfaces lack any kind of tactile feedback, something which plagued the similarly cool but useless idea of the laser keyboard. Indeed, as I mentioned in my review of the Surface and its atrocious touch keyboard, the lack of feedback makes using them quite a chore, and unfortunately I can't see how Leap Motion would be able to get around that particular issue.
Where it might become useful is in gestures that could be tied to shortcuts in your application of choice. Personally I wouldn't find much use for it as my muscle memory for all the required shortcuts is already etched into my nervous system, but it would essentially be an alternative to something like a multi-touch trackpad. Whether or not one is better than the other is an exercise that I'll leave up to the reader, but suffice to say that whilst the Leap Motion controller looks cool, its applicability in the real world seems rather limited.
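The gesture-to-shortcut idea above boils down to a simple dispatch table. A minimal sketch, assuming hypothetical gesture names (the real Leap Motion SDK's event model and gesture types aren't shown here):

```python
# Hypothetical sketch: binding recognized gestures to application shortcuts.
# The gesture names and actions below are invented for illustration; any real
# integration would use whatever gesture events the Leap Motion SDK reports.

SHORTCUTS = {
    "swipe_left":  "undo",   # e.g. what Ctrl+Z would do
    "swipe_right": "redo",   # e.g. what Ctrl+Y would do
    "circle":      "save",   # e.g. what Ctrl+S would do
}

def dispatch(gesture: str) -> str:
    """Return the action bound to a gesture, or 'ignored' if unbound."""
    return SHORTCUTS.get(gesture, "ignored")
```

The point of the table is that it mirrors how a multi-touch trackpad maps gestures to commands, which is why the two feel like direct alternatives.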
It could make a rather awesome little augment for robotics projects, however.
We're on the cusp of a new technological era, thanks in no small part to the ubiquity of smartphones. They've already begun to augment us in ways we didn't expect, usurped industries that failed to adapt and created a fledgling industry that's already worth billions of dollars. The really interesting part, for me at least, is the breaking down of the barriers between us and said technology, as whilst it's all well and good that we can tap, swipe and type our way through things, it does feel like there should be a better solution. Whilst we're still a ways off from being able to control things with our brains (although there's a lot of promising research in this direction), there's a new product available that I think is going to be the bridge between our current interface standards and more direct control methods.
Shown above is a product called the MYO from Thalmic Labs, a Y-Combinator backed company that's just started taking pre-orders for it. The concept for the device is simple: once you slip this band over your arm it can track the electrical activity in your muscles, which it then sends back to another device via Bluetooth. This allows it to track all sorts of gestures, and since it doesn't rely on a camera it'll work in far more situations than devices that do. It's also incredibly sensitive, being able to pick up movement right down to your fingers, something which I wasn't sure would be possible based on other similar prototype devices I had seen in the past. Needless to say I was very intrigued when I saw it, as it struck me as a perfect companion to Google's Glass.
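The pipeline the MYO implies is worth spelling out: raw electrical (EMG) readings from sensors around the forearm get classified into a gesture label, and only that label is sent over Bluetooth. A minimal sketch of the classification step, with the sensor count, thresholds and gesture names all invented for illustration (Thalmic Labs' actual signal processing is not public):

```python
# Hypothetical sketch of EMG-to-gesture classification. A window of per-sensor
# muscle-activity amplitudes (normalized 0..1) is reduced to a coarse label.
# Sensor layout, thresholds and labels are assumptions for illustration only.

def classify(samples: list[float]) -> str:
    """Map one window of per-sensor EMG amplitudes to a gesture label."""
    energy = sum(samples) / len(samples)
    if energy < 0.1:          # barely any muscle activity
        return "rest"
    # Which half of the band fires hardest hints at which muscles moved.
    strongest = samples.index(max(samples))
    return "fist" if strongest < len(samples) // 2 else "finger_spread"
```

Because the classification happens on (or near) the band and only a small label crosses the radio link, the approach sidesteps the line-of-sight problem that camera-based systems like the Kinect or Leap Motion have.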
All the demonstration videos for Google Glass show it being commanded by a pretty powerful voice interface, with some functions (like basic menu navigation) handled through eye tracking. As a technology demo it's pretty impressive, but I'm not the biggest fan of voice interfaces, especially in a public space. I then started thinking about alternative input methods, and whilst something like a laser keyboard works in certain situations, I wanted something that would be as discreet as typing on a smartphone but also a bit more elegant than carting around that (admittedly small) device. The MYO could provide the answer to this.
Now the great thing about the MYO is that Thalmic Labs are opening it up to developers from the get-go, allowing people like me to create all sorts of interesting applications for the device. For me there's really only a single killer application required to justify the entry cost: a virtual keyboard driven by your muscles. I've read about similar things being in development for a while now, but nothing seems to have made it past the high-concept stage. The MYO, on the other hand, has real potential to bring this to fruition within the next year or two, and whilst I probably won't have the required augmented reality device to take advantage of it I'll probably end up with one of these anyway, just for experimentation.
With this missing piece of the puzzle I feel like Glass has gone from being a technical curiosity to a device that I could see myself using routinely. The 1.0 MYO might be a little cumbersome to keep around, but I'm sure further iterations will make it nigh on unnoticeable. This is just my narrow view of the technology, too; I'm sure there are going to be hundreds of other applications where a MYO device will unlock some seriously awesome potential. I'm very excited about this and can't wait to get my hands on one of them.