There’s little doubt that Google’s Project Glass is going to be a disruptive technology, although whether that comes from revolutionizing the way we interface with technology or more from its social implications remains to be seen. Considering that the device has been limited to the technically elite and the few who got in on the #ifihadglass competition (disappointingly restricted to US citizens only) we still don’t have much to go on as to how Glass will function as an everyday technology. Sure, we’ve got lots of impressions of it, but the device is still very much in the nascent stages of adoption and third party development on the platform is only just starting to occur.
We do have a much better idea of what’s actually behind Google Glass though, thanks to the device reaching more people outside the walls of the Googleplex. From what I’ve read it’s comparable to a mid-range smartphone in terms of features, with 16GB of storage, a 5MP camera capable of shooting 720p video and a big enough battery to get you through the day with typical usage. This was pretty much expected given Glass’ size and recent development schedule, but what’s really interesting isn’t so much the hardware that’s powering everything, it’s the terms on which Google is letting you interface with it.
Third party applications, which make use of the Mirror API, are forbidden from inserting ads into their applications. Not only that, they are also forbidden from sending API data, which can be anything from feature usage to device information like location, to third party advertisers. This does not preclude Google from doing so (indeed the language hinges on the term “third party”) however it does firmly put the kibosh on any application that attempts to recoup development costs through ads or by on-selling user data. Whether or not you’ll be able to recoup costs by using Google’s AdSense platform remains to be seen, but it does seem that Google wants total control of the platform, and any revenue generated on it, from day 1, which may or may not be a bad thing depending on how you view Google.
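For those who haven’t looked at it, the Mirror API is a plain REST interface: your web application pushes “timeline cards” to the wearer’s Glass as JSON over HTTPS, rather than running any code on the device itself. Here’s a minimal sketch of what inserting a card might look like; the OAuth token dance is elided, the endpoint URL is taken from Google’s published documentation as I remember it, and `insert_card` assumes an already-authorised `requests` session, so treat it as illustrative rather than gospel:

```python
import json

# Mirror API timeline endpoint (per Google's developer docs; assumed here)
TIMELINE_URL = "https://www.googleapis.com/mirror/v1/timeline"

def make_timeline_card(text, speakable=True):
    """Build the JSON body for a simple text card shown on Glass."""
    card = {"text": text}
    if speakable:
        # Lets the wearer have the card read aloud to them
        card["speakableText"] = text
    return card

def insert_card(session, card):
    """POST the card; `session` is an OAuth2-authorised requests.Session."""
    return session.post(
        TIMELINE_URL,
        data=json.dumps(card),
        headers={"Content-Type": "application/json"},
    )

# Building the payload needs no network access at all:
print(json.dumps(make_timeline_card("Hello from a web app"), indent=2))
```

Note that everything here lives server-side: the application never touches the device directly, which is exactly the limitation I go on about below.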
What got me though was the strict limitation of Glass only talking to web applications. Whilst this still allows Glass to be extended in many ways that we’re only beginning to think of, it drastically limits the potential of the platform. For instance my idea of pairing it with a MYO to create a gesture interface (for us anti-social types who’d rather not speak at it constantly) is essentially impossible thanks to this limitation, even though the hardware is perfectly capable of syncing with Bluetooth devices. Theoretically it’d still be possible to accomplish some of that whilst using a web app, but it’d be very cumbersome and not at all what I had envisioned when I first thought of pairing the two together.
Of course that’s just a current limitation set by Google and with exploits already winding their way around the Internet it’s not unreasonable to expect that such functionality could be unlocked should you want it. There’s also the real possibility that this limitation is only temporary and once Glass hits general availability later this year it’ll become a much more open platform. Honestly I hope Google does open up Glass to native applications as whilst Glass has enormous amounts of potential in its current form the limitations put a hard upper barrier on what can be accomplished, something which competitors could rapidly capitalize on.
Google aren’t a company to ignore the demands of developers and consumers at large though so should native apps become the missing “killer app” for the platform I can’t imagine they’d stave off enabling them for long. Still the current limitations are a little worrying and I hope that they’re only an artefact of Glass being in its nascent form. Time will tell if this is the case however and the day of reckoning will come later this year when Glass finally becomes generally available.
I’ll probably still pick one up regardless, however.
We’re on the cusp of a new technological era thanks in no small part to the ubiquity of smartphones. They’ve already begun to augment us in ways we didn’t expect, usurped industries that failed to adapt and created a fledgling industry that’s already worth billions of dollars. The really interesting part, for me at least, is the breaking down of the barriers between us and said technology: whilst it’s all well and good that we can tap, swipe and type our way through things, it does feel like there should be a better solution. Whilst we’re still a ways off from being able to control things with our brains (although there’s a lot of promising research in this direction) there’s a new product available that I think is going to be the bridge between our current interface standards and those of more direct control methods.
Shown above is a product called the MYO from Thalmic Labs, a Y-Combinator backed company that’s just started taking pre-orders for it. The concept for the device is simple: once you slip this band over your arm it can track the electrical activity in your muscles, which it then sends back to another device via Bluetooth. This allows it to track all sorts of gestures and, since it doesn’t rely on a camera, it’ll work in far more situations than devices that do. It’s also incredibly sensitive, able to pick up movement right down to your individual fingers, something I wasn’t sure would be possible based on similar prototype devices I’d seen in the past. Needless to say I was very intrigued, as I instantly saw it as a perfect companion to Google’s Glass.
All the demonstration videos for Google Glass show it being commanded by a pretty powerful voice interface, with some functions (like basic menu navigation) handled through eye tracking. As a technology demo it’s pretty impressive, but I’m not the biggest fan of voice interfaces, especially in a public space. I then started thinking about alternative input methods, and whilst something like a laser keyboard works in certain situations I wanted something that would be as discreet as typing on a smartphone but a bit more elegant than carting around that (admittedly small) device. The MYO could provide the answer to this.
Now the great thing about the MYO is that they’re opening it up to developers from the get-go, allowing people like me to create all sorts of interesting applications for the device. For me there’s really only a single killer application required to justify the entry cost: a simple virtual keyboard driven by your muscles. I’ve read about similar things being in development for a while now but nothing seems to have made it past the high concept stage. The MYO, on the other hand, has real potential to bring this to fruition within the next year or two, and whilst I probably won’t have the required augmented reality device to take advantage of it I’ll probably end up with one of these anyway, just for experimentation.
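To make the keyboard idea a little more concrete: since the MYO can apparently distinguish individual finger movements, one obvious approach is a chorded keyboard, where each “keystroke” is a combination of finger flexions rather than a key press. The sketch below is entirely hypothetical (there’s no MYO SDK yet, and the gesture names and chord table are made up) but it shows how little software sits between classified finger gestures and typed text:

```python
# A toy chorded-keyboard decoder. Each incoming event is the set of
# fingers the armband classified as flexed; the chord table maps those
# combinations to characters. All names here are invented for illustration.
CHORDS = {
    frozenset(["index"]):           "e",
    frozenset(["middle"]):          "t",
    frozenset(["index", "middle"]): "a",
    frozenset(["ring"]):            "o",
    frozenset(["index", "ring"]):   " ",
}

def decode(chord_stream):
    """Turn a stream of finger-chord events into text, '?' for unknowns."""
    return "".join(CHORDS.get(frozenset(chord), "?") for chord in chord_stream)

print(decode([["index"], ["middle"], ["index", "ring"], ["index", "middle"]]))
# → "et a"
```

The hard part, of course, is the classification step the armband itself would have to do; the decoding on the receiving end is trivial by comparison.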
With this missing piece of the puzzle I feel like Glass has gone from being a technical curiosity to a device that I could see myself using routinely. The 1.0 MYO might be a little cumbersome to keep around but I’m sure further iterations will make it nigh on unnoticeable. And this is just my narrow view of the technology; I’m sure there are going to be hundreds of other applications where a MYO device will unlock some seriously awesome potential. I’m very excited about this and can’t wait to get my hands on one of them.
My group of friends is undeniably tech-oriented but that doesn’t mean all of us share the same views on how technology should be used, especially in social situations. If you were to see us out at a restaurant it’s pretty much guaranteed that at least one of us is on our phone, probably Googling an answer to something or sifting through our social networking platform of choice. For most of us this is par for the course, all of us being members of Gen Y, however some of my friends absolutely abhor the intrusion that smartphones have made on normal social situations, and if the direction of technology is anything to go by that intrusion is only going to get worse, not better.
Late last year I came across the Memento Kickstarter project, a novel device that takes a picture every 30 seconds and tags it with your GPS location. It’s designed to be worn all the time so that you end up with a visual log of your life, something that’s obviously of interest to a lot of people, as the project ended up being funded 11 times over. Indeed just as a device it’s pretty intriguing, and I had caught the campaign early enough that I could have got one at a hefty discount. However something I didn’t expect changed my mind on it completely: my technically inclined friends’ reactions to the device.
Upon linking my friends to the Kickstarter page I wasn’t met with the usual reactions. Now we’re not rabid privacy advocates (indeed many of us engage in multiple social networks and lead relatively open online lives) but the Memento was met with a great deal of concern over its presence in everyone’s private lives. It wasn’t a universal reaction but it was enough to give me pause, and in the end I didn’t back the project because of it. With Google Glass gearing up to increase its presence in the world these same privacy questions are starting to crop up again, and the social implications of Google’s flagship augmented reality device are becoming apparent.
Google Glass is the next step up from the Memento as, whilst it has the same capability to take photos (without the express knowledge or consent of the people in them), its ability to run applications and communicate directly with the Internet poses even more privacy issues. Sure, the capability isn’t too much different from what’s available now with your garden variety smartphone, however it is ever-present, attached to the side of someone’s head and can be commanded at the will of the user. That small step of taking your phone out of your pocket is enough of a social cue to let people know what your intentions are and to let them make their concerns known well beforehand.
What I feel is really happening here is that societal norms are being challenged by technology. Realistically such devices are simply better versions of things we have natively as humans (i.e. imaging devices with attached storage) but their potential for disseminating their contents is much greater. Just as social norms developed around ubiquitous smartphones, so too must they develop around the use of augmented reality devices like Google Glass. What these norms will end up being, however, is something we can’t really predict until such devices reach critical mass which, from what I can tell, is at least a couple of years off, possibly even longer.
For my close-knit circle of tech friends, however, I can predict a few things. Most of them wouldn’t have any issue with me wearing and using it whilst we were doing things together, but I can see them wanting me to take it off if we were sitting down to dinner or at someone’s private residence. It could conceivably be seen as somewhat rude to wear it whilst deep in conversation, although I feel that might change over time as people realise it’s not something that’s being used 100% of the time. Things will start to get murky as Glass-like devices become smaller and less obtrusive, although current battery technology already puts Glass at the slimmest end of the spectrum possible, so I doubt they’ll be getting smaller any time soon.
Essentially I see these kinds of augmented reality devices as an organic progression of smartphones, extending our innate human abilities with those of the Internet. The groundwork has already been laid for a future that is ever-increasingly intertwined with technology, and whilst this next transition poses its own set of challenges I have no doubt that we’ll rapidly adapt, just like we have done in the past. What these adaptations are and how they function in the real world will be an incredibly interesting thing to bear witness to and I, for one, can’t wait to see it.
As someone who’s been deep in high technology for the better part of two decades it’s been interesting to watch the dissemination of technology from the hands of my brethren down to the everyday consumer. For the most part it’s a slow process, as many of the technological revolutions unleashed onto the mass market have usually been available for quite some time to those with the inclination to live on the cutting edge. Companies like Apple are prime examples of this, releasing products that are often technically inferior but offered in such a way as to be accessible to anyone. Undoubtedly the best example of this is the iPhone, which arguably spawned the smartphone revolution that is still thundering along.
When it was first released the iPhone wasn’t really anything special. It didn’t support third party applications, couldn’t send or receive MMS and even lacked some of the most critical functionality of a smartphone, like cut and paste. For those brandishing their Windows Mobile 6.5 devices the idea of switching to it was laughable, but they weren’t the target consumer. No, Apple had their eye on the same market that Nintendo did when they released the Wii console: the people who traditionally didn’t buy their product. This transformed the product into a mass market success and was the first step for Apple in developing their iOS ecosystem.
With the beachhead firmly established, the way was paved for other players like Google to branch out into the smartphone world. Whilst they played catch-up to Apple for a good three years or so, Google was finally crowned king early last year and hasn’t shown any signs of slowing down since. Of course in that same time Apple created an entirely new market in the form of tablet computers, a market in which Android has yet to make any significant inroads. However whilst Google might be making a token appearance in that market currently, I don’t think they’re that interested in trying to follow Apple’s lead on this one.
Their sights are set firmly on the idea of creating another market all of their own.
For products that really bring something new to the table you can’t beat Project Glass. Back when I first posted about Google’s augmented reality device it seemed like a cool piece of technology that the technical elite would love, but if I’m honest I didn’t really know how the wider world would react to it. As more and more people have got to use Glass the reaction has been overwhelmingly positive, to the point where comparisons to the early revisions of the iPhone seem apt, even though Glass is on the cutting edge all of its own. The question then is whether Google can ride Glass to iPhone-level success in creating another market in the world of augmented reality devices.
There are few companies in the world that can create a new market with high profit potential, but Google is one of the few with a track record of doing so. Whilst the initial reviews of Glass are positive, it’s still far from being a mass market device, with the scarce few units available going only to the technical elite: those who went to Google I/O and ponied up the requisite $1,500 for a prototype device. No doubt this will help in creating a positive image of the device prior to its retail release, but getting tech heads to buy cutting edge tech is like shooting fish in a barrel. The real test will come when Joe Public gets his hands on the device and we see how it integrates into our everyday activities.
It was just over a decade ago now, but I can still vividly remember walking around the streets of Akihabara in Tokyo. It’s a technical wonderland, and back then, when Internet shopping was something only crazy people did (for fear of losing your credit card details), it was filled with the kind of technology you couldn’t find anywhere else. I was there on a mission, looking for a pocket translator similar to the one my Japanese teacher had lent me. While my quest went unfulfilled I did manage to see all sorts of technology there that wouldn’t reach Australian shores for years to come, and one piece in particular stuck in my mind.
There was a row of chunky-looking headsets, each hooked up to what looked like a portable CD player. Picking one up and looking inside the headset, I saw two tiny displays, one for each eye. Putting on the headset I was greeted by a picture that seemed massive in comparison to the actual size of the device, playing some kind of demo on a loop. It wasn’t perfect, but it was enough to leave me fascinated with the concept, and I thought it wouldn’t be long before everyone had some kind of wearable display. Here we are just over a decade later and the future I envisioned hasn’t yet come to pass, but it seems we’re not far off.
Today Google announced Project Glass, a brainchild of the secretive Google[x] lab. There have been rumours floating around for quite a while now that they were working on something of this nature, but no one could offer much beyond the general idea that it would be a head mounted display with Android powering it. Looking over what Google released today, as well as the comments from other news outlets, makes it clear that Google is quite serious about this idea and that it could be something quite revolutionary.
The initial headset designs I saw back when I heard the original rumours were the kind of clunky, overly large glasses we’ve come to expect when anyone mentions a wearable display. Google’s current design (pictured above) seems rather elegant in comparison. It’ll still draw a lot of attention thanks to the chunky white bar at the side, but it’s a far cry from what we’ve come to expect from wearable displays. What’s even more impressive is the concept demo they included alongside it, showcasing what the headset is capable of:
The possibilities for something like this are huge. Just imagine extending the capabilities to recognise the faces of people you’ve met before, neatly sidestepping that awkward moment when you forget someone’s name. You could even work a barcode scanner into it, allowing you to scan food to see its nutritional value (and whether it fits in with your diet) before you purchase it. I could go on forever about the possibilities of a device like Project Glass, but suffice to say it’s quite an exciting prospect.
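That barcode idea is really just a lookup and a comparison once the scanning is done. As a purely hypothetical sketch (the product table, barcodes and kilojoule budget below are all invented for illustration), the application logic behind the heads-up verdict could be as simple as:

```python
# Hypothetical app logic: a scanned barcode is looked up in a nutrition
# database and checked against the wearer's remaining daily kJ budget.
# All barcodes and figures here are made up for illustration.
NUTRITION_DB = {
    "9300601234567": {"name": "Muesli bar", "kilojoules": 800},
    "9300607654321": {"name": "Soft drink", "kilojoules": 1600},
}

def check_scan(barcode, kj_remaining):
    """Return a short verdict string suitable for a heads-up display."""
    product = NUTRITION_DB.get(barcode)
    if product is None:
        return "Unknown product"
    verdict = "fits" if product["kilojoules"] <= kj_remaining else "blows"
    return "{}: {} kJ ({} your budget)".format(
        product["name"], product["kilojoules"], verdict)

print(check_scan("9300601234567", 1000))
```

The interesting engineering would all be in the scanning and in sourcing a decent nutrition database, not in this last step.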
What will be really interesting to see is how these kinds of devices blend into everyday social interactions. The smartphone and tablet managed to work their way into social norms rather quickly, but a device like this is a whole other ball game. The sleek and unobtrusive design will help ease its transition somewhat, but I can still see a long adaptation period where people wonder why the heck you’re wearing it. That won’t deter me from doing so though, as it’s this kind of device that makes me feel like I’m living in the future. That’s all it takes for me to overcome any social anxiety I might have about wearing one of these 😉