I can remember my first encounter with virtual reality way back in the 90s. It was a curiosity more than anything else, something that was available at this one arcade/pizza place in the middle of town. You’d go in and there it would be, two giant platforms containing people with their heads strapped into oversized head gear. On the screens behind them you could see what they were seeing, a crude polygonal world inhabited by the other player and a pterodactyl. I didn’t really think much of it at the time, mostly since I couldn’t play it anywhere but there (and that was an hour drive away) but as I grew older I always wondered what had become of that technology. Today VR is on the cusp of becoming mainstream and it looks like Google wants to thrust it into the limelight.
Meet Google Cardboard, the ultra low cost virtual reality headset that Google gave out to every attendee at I/O this year. It's an incredibly simple idea, using your smartphone's screen and a pair of lenses to send a different image to each eye. Indeed if you were so inclined a similar system could be used to turn any screen into a VR headset, although the lenses would need to be crafted for the right dimensions. With that in mind the range of handsets that Google Cardboard supports is a little limited, mostly to Google Nexus handsets and some of their closely related cousins, but I'm sure future incarnations that support a wider range of devices won't be too far off. Indeed if the idea has piqued your interest enough you can get an unofficial version of it for the low cost of $25, a bargain if you're looking to dabble with VR.
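For the technically inclined, the core trick can be sketched in a few lines: split the screen into two side-by-side viewports and offset each eye's virtual camera by half the interpupillary distance. The numbers below (a 1080p landscape screen, a 64mm IPD) are illustrative assumptions on my part, not Cardboard's actual specifications:

```python
# Illustrative sketch of Cardboard-style side-by-side stereo rendering.
# All numbers here are assumptions for illustration only.

def stereo_viewports(screen_w, screen_h):
    """Split a landscape screen into left/right eye viewports (x, y, w, h)."""
    half = screen_w // 2
    return (0, 0, half, screen_h), (half, 0, half, screen_h)

def eye_positions(head_pos, ipd=0.064):
    """Offset each eye's camera from the head position by half the
    interpupillary distance (in metres)."""
    x, y, z = head_pos
    return (x - ipd / 2, y, z), (x + ipd / 2, y, z)

left_vp, right_vp = stereo_viewports(1920, 1080)
left_eye, right_eye = eye_positions((0.0, 1.7, 0.0))
```

A real implementation also applies a barrel distortion to each viewport to counteract the lenses, but the split-and-offset step above is the heart of it.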
Compared to the original Oculus Rift's specs most smartphones are more than capable of driving Google Cardboard with an acceptable level of performance. My current phone, the Sony Xperia Z, has a full 1080p display and enough grunt to run some pretty decent 3D applications. That, combined with the bevy of sensors in most modern smartphones, makes Google Cardboard a pretty brilliant little platform for testing out what you can do with VR. Of course that also means the experience will vary wildly depending on what handset you have, but for those looking for a cheap platform to validate ideas on it's hard to argue against it.
Of course this raises the question of what Google's larger plan is for introducing this concept to the world. Ever since the breakaway success of the Oculus Rift it's been obvious that there's consumer demand for VR, and it only seems to be increasing as time goes on. However most applications are contained solely within the games industry, with only a few interesting experiments (like Living with Lag) breaking outside that mould. There are a ton of augmented reality applications on Android which could potentially benefit from widespread adoption of something like Cardboard, however beyond that I'm not so sure.
I think it’s probably a gamble on Google’s part as history has proven that throwing out a concept to the masses is a great way to root out innovative ideas. Google might not have any solid plans for developing VR of this nature themselves but the community that arises around the idea could prove a fruitful place for applications that no one has thought of before. I had already committed myself to a retail version of an Oculus when it came out however so whilst Cardboard might be a curiosity my heart is unfortunately promised to another.
With an abundance of space and not much else the rural parts of Australia don't really give a kid much to entertain themselves with. From the age of about 12 however my parents let us kids bash our way around the property in all manner of vehicles, which fed into a lifelong obsession with cars. This has been in direct competition with my financially sensible side however, as cars are a depreciating asset, one that no amount of money invested in them can ever recoup. However I still enjoy the act of driving itself, especially if it's through some of Australia's more picturesque landscapes. You'd think then that the idea of a self driving car would be abhorrent to a person like myself but in reality it's anything but.
We’re fast approaching the time when cars that can drive themselves to and from any location are not only technically feasible, they’re a few short steps away from being a commercial reality. Google’s self driving car, whilst it has only left its home town a couple times, has demonstrated that it’s quite possible to arm a car with a bevy of sensors and have it react better than a human would in many situations. Indeed the accidents their car has been involved in have not been the fault of the software, but of the humans either controlling the self driving car or those ramming into the back of it. Whilst there’s still many regulatory hurdles to go before these things are seen en-masse on our roads it would seem like having them there would be a huge boon to everyone, especially those travelling as its passengers.
For me whilst driving isn't an unpleasant experience it's still time during which I'm unable to do anything else but drive the car. Now I'm not exactly your stereotypical workaholic (I keep to a standard working day and attempt to automate most of my work instead) but having an extra hour or so a day where I can complete a few tasks, or even just catch up on interesting articles, would be pretty handy. Indeed this is the reason why I still fly most places when travelling for business, even when the flight from Canberra to the other capitals is below an hour total. It's not me doing the driving, which allows me to get things done rather than spending multiple hours watching the odometer.
There’s also those numerous times when neither the wife nor I feel like driving and we could simply hand over to the car for the trip. I can even imagine it reducing our need to have separate cars as I could simply have the car drop my wife off and return to me if I needed it. That’s a pretty huge benefit and one that’s well worth paying a bit of a premium for.
This would also have the unintentional benefit of making those times when I want to drive that much more enjoyable. Nothing takes the fun out of something you enjoy like being forced to do it all the time for another purpose, something which driving to work every day certainly did for me. If I was only driving when I wanted to however I feel I'd enjoy it far more than I otherwise would. I think a lot of car enthusiasts will feel the same way as few drive their pride and joy to work every day, instead having a daily driver that they run on the cheap. Of course some will abhor the experience in its entirety but you get that with any kind of new technology.
For me this technology cannot come quickly enough as the benefits are huge, with the only downside being the likely high cost of acquisition. I've only been speaking from a personal viewpoint here too as there's far more to be gained once self driving cars reach a decent level of penetration among the wider community.
That’s a blog post for another day, however.
Twitch.tv started out as the bastard child of Justin.tv, a streaming website that wanted to make it easy for anyone to stream content to a wider audience. Indeed for a long time Twitch felt like something of an afterthought, as divesting part of an already niche site into another niche didn't seem like a sound business manoeuvre. However since then Twitch has vastly outgrown its parent company, becoming the default platform for content streamers around the world. The sponsorship model it has used for users' channels has proven successful enough that thousands of people now make their living streaming games, giving Twitch a sustainable revenue stream. This hasn't gone unnoticed of course and rumours are starting to circulate that Google will be looking to purchase them.
The agreement is reported to be a $1 billion all-cash deal, an amazing outcome for the founders and employees of Twitch. The acquisition makes sense for Google as they've been struggling to crack the streaming market for a long time now, with many of their attempts drawing only mild success. For the Twitch community though there don't appear to be any direct benefits to speak of, especially considering that Google isn't a company to let its acquisitions just do their own thing. Indeed if this rumour has any truth to it the way in which Google integrates Twitch into its larger platform will be the determining factor in whether the brand grows or ultimately fails.
At the top of the list of concerns for Twitch streamers is the potential integration between YouTube's ContentID system and Twitch streams. Whilst some of the most popular games on Twitch are readily endorsed by their creators (like League of Legends, DOTA2, World of Warcraft, etc.) many others aren't, something which has seen content producers and game developers butt heads multiple times over on YouTube. With the Twitch platform integrated into YouTube there's potential for game creators to flag content they don't want streamed, something which is at odds with the current Twitch community ethos. If not handled correctly it could see much of Twitch's value evaporate after the transition across to YouTube, as arguably most of that value comes from its wide community, not the technology or infrastructure powering it.
On the flip side though Twitch has been known to suffer from growing pains every time a popular event happens to grace its platform, something which Google could go a long way to fixing. Indeed that would likely be the only thing that Twitch has to gain from this: a global presence without the need to invest in costly additional infrastructure. If Google maintains Twitch as a separate, wholly owned brand then this could be of benefit to both of them as a more stable and available platform is likely to drive user numbers much quicker than Twitch has been able to do previously.
We’ll have to see if this rumour turns out to be true as whilst I wouldn’t begrudge Twitch taking the cash the question of what Google will do with them is what will determine their future. Whilst the combination of Twitch chat and YouTube comments sounds like the most unholy creation on the Internet since /b/ there is potential for both Twitch and Google to gain something from this. Whether that’s to the benefit of the community though remains to be seen.
Google isn’t a company that’s known for curtailing its ambitions; starting off with its humble beginnings as the best search engine on the web to the massive conglomerate that it is today, encompassing everything from smartphones to robotic cars. In the past many of the ideas were the result of acquisitions where Google made strategic purchases in order to acquire the talent required to dominate the space they were in. More recently however they’ve started developing their own moonshot style ideas through their Project X labs, a research and development section that has many of the hallmarks of previous idea incubators. Their most recent acquisition trend however seems to be a mix of both with Google picking up a lot of talent to fuel a potential project that they’re being incredibly tight lipped about.
Now I’ll be honest, I really had no idea that Google was looking to enter in the robotics industry until just recently when it was announced that they had acquired Boston Dynamics. For the uninitiated Boston Dynamics is a robotics company that’s been behind some of the most impressive technology demonstrations in the industry, notably the Big Dog robot which displayed stability which few robots have been able to match. Most recently they started shipping out their Atlas platform to select universities for the DARPA robotics challenge program which hopes to push the envelope of what robots are capable of achieving.
Boston Dynamics is the 8th acquisition that Google has made in the robotics space in the past 6 months, signalling that they've got some kind of project on the boil that needs an incredible amount of robotics expertise. The acquisitions seem to fall into a few categories, with the primary focus being humanoid robots. Companies in this area include the Japanese firm Schaft, which has created a robot similar to Atlas, and several more industrially focused robotics companies like Industrial Perception, Meka and Redwood Robotics. They also snapped up Bot and Dolly, the robotics company behind the incredible Box video, whose technology provided some of the special effects for the recent movie Gravity. Two design firms, Autofuss and Holomni, were also picked up in Google's most recent spending spree.
At the head of all of this is Andy Rubin, who came to Google as the head of Android. It's likely he's been working on this ever since he left the Android division back in March this year, although it was only recently announced that he would be heading up the robotics projects. As to what the project actually is Google isn't saying, however they have said that they consider it a moonshot, right alongside their other ideas like Project Loon, Google Glass and the self driving car. Whilst it seems clear that their intention with all these acquisitions is to create some kind of humanoid robot, what purpose it will serve remains to be seen, but that won't stop me from speculating.
I think in the beginning they'll use much of this expertise to bolster the self driving car initiative, as whilst they've made an incredible amount of progress of late I'm sure the added expertise in computer vision that these companies bring will prove invaluable. From there the direction they'll take is less clear as whilst it'd be amazing for them to create the in-home robots of the future it's unlikely we'll see anything of that project for at least a couple of years. Heck, just incorporating all these disparate companies into the Google fold is going to take the better part of a couple of months and it's unlikely they'll produce anything of note for some time after.
Whatever Google ends up doing with these companies we can be assured it's going to be something revolutionary, especially now that they've added the incredible talent of Boston Dynamics to their pool. Hopefully this will allow them to deliver their self driving car technology sooner and then use that as a basis for delivering more robotics technology to end users. It will be a while before this starts to pay dividends for Google, however the benefits for both them and the world at large have the potential to be quite great and that should make us all incredibly excited.
The tech world was all abuzz about Phonebloks just over a month ago, with many hailing it as the next logical step in the smartphone revolution. Whilst I liked the idea, since it spoke to the PC builder in me, it was hard to overlook the larger issues that plagued it, namely the numerous technical problems as well as the lack of buy in from component manufacturers. Since then I hadn't heard anything further on it and figured that the Thunderclap campaign they ran had ended without too much fuss, but it appears the idea might have caught the attention of people who could make it happen.
Those people are Motorola.
As it turns out Motorola has been working on their own version of the Phonebloks idea for quite some time now, over a year in fact. It's called Project Ara and came about as a result of the work they did during Sticky, essentially trucking around the USA with unlocked handsets and 3D printers and holding a series of makeathons. The idea is apparently quite well developed, with a ton of technical work already done and some conceptual pieces shown above. Probably the most exciting thing for Phonebloks followers will be the fact that Motorola has since reached out to Dave Hakkens and is hoping to use his community to further the idea. By their powers combined it might just be possible for a modular handset to make its way into the real world.
Motorola’s handset division, if you recall, was acquired by Google some 2 years ago mostly due to their wide portfolio of patents that Google wanted to get its hands on. At the same time it was thought that Google would then begin using Motorola for their first party Nexus handsets however that part hasn’t seemed to eventuate with Google leaving them to do their own thing. However such a close tie with Google might provide Project Ara the resources it needs to actually be successful as there’s really no other operating system they could use (and no, the Ubuntu and Firefox alternatives aren’t ready for prime time yet).
Of course the technical issues that were present in the Phonebloks idea don't go away just because some engineers from Motorola are working on them. Whilst Motorola's design is quite a bit less modular than what Phonebloks was proposing, it does look like it has a bit more connectivity available per module. Whether that will be enough to support the bandwidth required for things like quad core ARM CPUs or high resolution cameras remains to be seen, however.
So whilst the Phonebloks idea in its original form might never see the light of day it does appear that at least one manufacturer is willing to put some effort into developing a modular handset. There’s still a lot of challenges for it to overcome before the idea can be made viable but the fact that real engineers are working on it with the backing of their company gives a lot of credence to it. I wouldn’t expect to see any working prototypes for a while to come though, even with Motorola’s full backing, but potentially in a year or so we might start to see some make their way to trade shows and I’ll be very interested to see their capabilities.
The public cloud is a great solution to a wide selection of problems, however there are times when its use is simply not appropriate. This is typical of organisations that have specific requirements around how their data is handled, usually due to data sovereignty or regulatory compliance. However whilst the public cloud is a great way to bolster your infrastructure on the cheap (although that's debatable when you start ramping up your VM sizes) it doesn't take advantage of the investments in infrastructure that you've already made. For large, established organisations those investments are not insignificant, which is why many of them have been reluctant to transition fully to public cloud based services. This is why I believe the future of the cloud will be paved with hybrid solutions, something I've been saying for years now.
Microsoft has finally shown that they've understood this with the release of the Windows Azure Pack for Server 2012 R2. Sure there were the beginnings of it with SCVMM 2012, which allowed you to add in your Azure account and move VMs up there, but that kind of thing has been available for ages through hosting partners. The Azure Pack on the other hand brings features that were hidden behind the public cloud wall down to the private level, allowing you to make full use of them without having to rely on Azure. If I'm honest I thought that Microsoft would probably be the only ones to try this given their presence in both the cloud and enterprise spaces, but it seems other companies have begun to notice the hybrid trend.
Google has been working with the engineers at Red Hat to produce the Test Compatibility Kit for Google App Engine. Essentially this kit provides the framework for verifying the API level functionality of a private Google App Engine implementation, something which is achievable through an application called CapeDwarf. The vast majority of the App Engine functionality is contained within that application, enough so that current developers on the platform could conceivably run their code on on-premises infrastructure if they so wished. There doesn't appear to be a bridge between the two currently, like there is with Azure, as CapeDwarf utilizes its own administrative console.
They’ve done the right thing by partnering with RedHat as otherwise they’d lack the penetration in the enterprise market to make this a worthwhile endeavour. I don’t know how much presence JBoss/OpenShift has though so it might be less of using current infrastructure and more about getting Google’s platform into more places than it currently is. I can’t seem to find any solid¹ market share figures to see how Google currently rates compared to the other primary providers but I’d hazard a guess they’re similar to Azure, I.E. far behind Rackspace and Amazon. The argument could be made that such software would hurt their public cloud product but I feel these kinds of solutions are the foot in the door needed to get organisations thinking about using these services.
Whilst my preferred cloud is still Azure I'm a firm believer that the more options we have to realise the hybrid dream the better. We're still a long way from having truly portable applications that can move freely between private and public platforms but the roots are starting to take hold. Given the rapid pace of IT innovation I'm confident that the next couple of years will see the hybrid dream fully realised and then I'll finally be able to stop pining for it.
¹This article suggests that Microsoft has 20% of the market which, since Microsoft has raked in $1 billion, would peg the total market at some $5 billion total which is way out of line with what Gartner says. If you know of some cloud platform figures I’d like to see them as apart from AWS being number 1 I can’t find much else.
Just outside the Googleplex in Mountain View California there's a small facility that was the birthplace of many of the revolutionary technologies that Google is known for today. It's called Google [x] and is akin to the giant research and development labs of corporations in ages past where no idea is off limits. It's spawned some of the most amazing projects that Google has made public, including the Driverless Car and Project Glass. These are only a handful of the projects that are currently under development at this lab however, with the vast majority of them remaining secret until they're ready for release into the world. One more of their projects has just reached that milestone and it's called Project Loon.
The idea is incredibly simple: provide Internet access to everyone regardless of their location. How they’re going about that however is the genius part: they’re going to use a system of high altitude balloons and base relay stations with each of them being able to cover a 40KM area. For countries that don’t have the resources to lay the cables required to provide Internet this provides a really easy solution to covering large areas and even makes providing Internet possible to regions that would otherwise be inaccessible.
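Some napkin maths shows why the approach is so appealing. Assuming the 40KM figure is a coverage diameter, and generously ignoring any overlap between circles or the balloons' constant drift, blanketing a country the size of New Zealand (roughly 268,000 square kilometres of land) needs surprisingly few balloons:

```python
import math

def balloons_needed(region_km2, coverage_diameter_km):
    """Naive lower bound on balloon count: treats each balloon's
    footprint as a non-overlapping circle, which real coverage
    planning certainly can't do."""
    coverage_km2 = math.pi * (coverage_diameter_km / 2) ** 2
    return math.ceil(region_km2 / coverage_km2)

# New Zealand's land area is roughly 268,000 square kilometres.
print(balloons_needed(268_000, 40))  # 214
```

The real number would be much higher once overlap and redundancy are accounted for, but even an order-of-magnitude estimate in the hundreds compares very favourably to trenching cable across remote terrain.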
What’s really amazing however is how they’re going about solving some of the issues you run into when you’re using balloons as your transportation system:
The height they fly at is around the bottom end of the range for your typical weather balloon (they can be found from 18KM all the way up to 38KM) and is about half the height from which Felix Baumgartner made his high altitude jump last year. I wasn't aware that different layers of the stratosphere had different wind directions, and making use of them to keep the balloons in position is just an awesome piece of engineering. Of course this would all be for naught if the Internet service they delivered wasn't anything above what's available now with satellite broadband, but it seems they've got that covered too.
The Loon stations use the 2.4GHz and 5.8GHz frequencies for communications with ground receivers and base stations and are capable of delivering speeds comparable to 3G (~2Mbps or so). Now if I'm honest the choice to use these public signal spaces seems like a bit of a gamble as whilst they're free to use they're also already quite congested. I guess this is less of a problem in the places Loon is primarily aimed at, namely regional and remote areas, but even those places have microwaves and personal wifi networks. It's not an insurmountable problem of course, and I'm sure the way-smarter-than-me people at Google[x] have already thought of it, it's just an issue for anything that tries to use that same frequency space.
I might never end up being a user of this particular project but as someone who lived on the end of a 56K line for the majority of his life I can tell you how exciting this is for people living outside broadband enabled areas. According to Google it’s launching this month in New Zealand to a bunch of pilot users so it won’t be long before we see how this technology works in the real world. From there I’m keen to see where they take it next as there’s a lot of developing countries where this technology could make some really big waves.
My introduction to RSS readers came around the same time as when I started to blog daily, as after a little while I found myself running dry on general topics to cover and needed to start finding other material for inspiration. It's all well and good to have a bunch of bookmarked sites to trawl through but visiting each one is a very laborious task, one that I wasn't keen to do every day just to crank out a post. Thus I discovered the joys of RSS feeds, which allowed me to distill dozens of sites down to a single page, dramatically cutting down the effort required to trawl through them all. After cycling through many, many desktop based readers I, like many others, eventually settled on Google Reader, and all was well.
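Under the hood every RSS reader is doing something conceptually very simple: fetch each feed's XML, pull out the items and merge them into one reverse-chronological river. The sketch below uses inline sample feeds rather than real HTTP fetches, and skips the Atom support and error handling any real reader would need:

```python
# Minimal sketch of RSS aggregation: parse each feed, extract the
# items, merge newest-first. Feeds are inline samples for illustration.
import xml.etree.ElementTree as ET
from email.utils import parsedate_to_datetime

def parse_items(rss_xml):
    """Yield (published, title) pairs from an RSS 2.0 document."""
    root = ET.fromstring(rss_xml)
    for item in root.iter("item"):
        yield (parsedate_to_datetime(item.findtext("pubDate")),
               item.findtext("title"))

def merged_river(feeds):
    """Combine many feeds into one list of titles, newest first."""
    items = [entry for feed in feeds for entry in parse_items(feed)]
    return [title for _, title in sorted(items, reverse=True)]

feed_a = """<rss><channel><item><title>Post A</title>
<pubDate>Mon, 17 Jun 2013 09:00:00 +1000</pubDate></item></channel></rss>"""
feed_b = """<rss><channel><item><title>Post B</title>
<pubDate>Tue, 18 Jun 2013 09:00:00 +1000</pubDate></item></channel></rss>"""

print(merged_river([feed_a, feed_b]))  # ['Post B', 'Post A']
```

The hard part of a Reader-scale service isn't this merge step, it's polling millions of feeds efficiently and deduplicating them for everyone, which is exactly the back end so many third party apps leaned on.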
That was until last week when Google announced that Reader was going away on July 1st this year.
Google has been doing a lot of slimming down recently as part of its larger strategy to focus more strongly on its core business. This has led to many useful, albeit niche, products being shut down over the course of the past couple of years. Whilst the vast majority of the closures were expected there have been quite a few notable cases where they've closed down things that still had a very active user base whilst other things (like Orkut, yeah, remember that?) which you'd figure would be closed down aren't. If there's one service that no one expected them to close down it would be Reader, but apparently they've decided to do so due to dwindling user numbers.
Whilst I won’t argue that RSS is the defacto standard for content consumption these days it’s still proven to be a solid performer for anyone who provides it and Google Reader was the RSS reader to use. Even if you didn’t use the reader directly there are hundreds of other products which utilize Google Reader’s back end in order to power their interfaces and whilst they will likely continue on in spite of Reader going away it’s highly unlikely that any of them will have the same penetration that Reader did. Even from my meagre RSS stats it’s easy to tell that Reader has at least 50% of the market, if not more.
If you doubt just how popular Reader was consider that Feedly, shown above syncing with my feeds, managed to gain a whopping 500,000 users in the short time since Google made the announcement. They were actually so popular that right after the announcement their site was down for a good couple of hours and their applications on iOS and Android quickly became the number 1 free apps on their respective stores. For what it's worth it's a very well polished application, especially if you like visual RSS readers, however there are a few quirks (like it not being in strict chronological order) which stopped me from making the total switch immediately. Still the guys behind it seem dedicated to improving it and filling the void left by Reader by replicating its API (and running it on Google's App Engine, for the lulz).
From a business point of view it’s easy to understand why Google is shutting down services like this as they’re a drain on resources that could be better used to further their core business. However it was usually these niche services that brought a lot of customers to Google in the first place and by removing them they burn a lot of goodwill that they generated by hosting them. I also can’t imagine that the engineers behind these products, many of which were products of Google’s famous 20% time, feel great about seeing them go away either. For something as big as Reader I would’ve expected them to try to innovate it rather than abandon it completely as looking over the alternatives there’s still a lot of interesting things that can be done in the world of RSS, especially with such a dedicated user base.
Unfortunately I don’t expect Google to do an about face on this one as there’s been public outcries before (iGoogle, anyone?) but nothing seems to dissuade them once their mind has been made up. It’s a real shame as I feel there’s still a lot of value in the Reader platform, even if it pales in comparison to Google’s core business products. Whilst the alternatives might not be 100% there yet I have no doubt they’ll get there in short order and, if the current trend is anything to go by, surpass Reader in terms of features and functionality.
We’re on the cusp of a new technological era thanks in no small part to the ubiquity of smart phones. They’ve already begun to augment us in ways we didn’t expect, usurp industries that failed to adapt and have created a fledgling industry that’s already worth billions of dollars. The really interesting part, for me at least, is the breaking down of the barriers between us and said technology as whilst it’s all well and good that we can tap, swipe and type our way through things it does feel like there should be a better solution. Whilst we’re still a ways off from being able to control things with our brains (although there’s a lot of promising research in this direction) there’s a new product available that I think is going to be the bridge between our current interface standards and that of more direct control methods.
Shown above is a product called the MYO from Thalmic Labs, a Y Combinator backed company that's just started taking pre-orders for it. The concept for the device is simple: once you slip this band over your arm it can track the electrical activity in your muscles, which it can then send back to another device via Bluetooth. This allows it to track all sorts of gestures and, since it doesn't rely on a camera, it'll work in far more situations than devices that do. It's also incredibly sensitive, being able to pick up movement right down to your fingers, something which I wasn't sure would be possible based on other similar prototype devices I had seen in the past. Needless to say I was very intrigued, instantly seeing it as a perfect companion to Google's Glass.
All the demonstration videos for Google Glass show it being commanded by a pretty powerful voice interface, with some functions (like basic menu navigation) handled through eye tracking. As a technology demo it's pretty impressive but I'm not the biggest fan of voice interfaces, especially in a public space. I then started thinking about alternative input methods and whilst something like a laser keyboard works in certain situations I wanted something that would be as discreet as typing on a smartphone but a bit more elegant than carting around that (admittedly small) device. The MYO could provide the answer to this.
Now the great thing about the MYO is that they’re opening it up to developers from the get go, allowing people like me to create all sorts of interesting applications for the device. For me there’s really only a single killer application required to justify the entry cost: a simple virtual keyboard that uses your muscles. I’ve read about similar things being in development for a while now but nothing seems to have made it past the high concept stage. MYO on the other hand has the real potential to bring this to fruition within the next year or two and whilst I probably won’t have the required augmented reality device to take advantage of it I’ll probably end up with one of these devices anyway, just for experimentation.
With this missing piece of the puzzle in place I feel like Glass has gone from being a technical curiosity to a device that I could see myself using routinely. The 1.0 MYO might be a little cumbersome to keep around but I’m sure further iterations will make it nigh on unnoticeable. This is just my narrow view of the technology, of course, and I’m sure there are going to be hundreds of other applications where a MYO device will unlock some seriously awesome potential. I’m very excited about this and can’t wait to get my hands on one of them.
My group of friends is undeniably tech-oriented but that doesn’t mean all of us share the same views on how technology should be used, especially in social situations. If you were to see us out at a restaurant it’s pretty much guaranteed that at least one of us is on our phone, probably Googling an answer to something or sifting through our social networking platform of choice. For most of us this is par for the course, all of us being members of Gen Y, however some of my friends absolutely abhor the intrusion that smartphones have made on normal social situations, and if the direction of technology is anything to go by that intrusion is only going to get worse, not better.
Late last year I came across the Memento Kickstarter project, a novel device that takes 1 picture every 30 seconds and tags each one with your GPS location. It’s designed to be worn all the time so that you end up with a visual log of your life, something that’s obviously of interest to a lot of people as the project ended up getting funded 11 times over. Indeed just as a device it’s pretty intriguing and I had caught them early enough that I could have got one at a hefty discount. However something I didn’t expect changed my mind on it completely: my technically inclined friends’ reactions to the device.
Upon linking my friends to the Kickstarter page I wasn’t met with the usual reactions. Now we’re not rabid privacy advocates, indeed many of us engage in multiple social networks and lead relatively open online lives, but the Memento was met with a great deal of concern over its presence in everyone’s private lives. It wasn’t a universal reaction but it was enough to give me pause about the idea and in the end I didn’t back it because of it. With Google Glass gearing up to increase its presence in the world these same privacy questions are starting to crop up again, and the social implications of Google’s flagship augmented reality device are starting to become apparent.
Google Glass is the next step up from the Memento as whilst it has the same capability to take photos (without the express knowledge or consent of the people in them) its ability to run applications and communicate directly with the Internet poses even more privacy issues. Sure, the capability isn’t too much different from what’s available now with your garden variety smartphone, however it is ever-present, attached to the side of someone’s head and able to be commanded at the will of the user. That small step of taking your phone out of your pocket is enough of a social cue to let people know what your intentions are and to let them make their concerns known well beforehand.
What I feel is really happening here is that the notion of societal norms is being challenged by technology. Realistically such devices are simply better versions of things we have natively as humans (i.e. imaging devices with attached storage) but their potential for disseminating their contents is much greater. Just as social norms developed around ubiquitous smartphones, so too must they develop around the use of augmented reality devices like Google Glass. What these norms will end up being, however, is something we can’t really predict until such devices reach critical mass which, from what I can tell, is at least a couple of years away, possibly even longer.
For my close-knit circle of tech friends, however, I can predict a few things. Most of them wouldn’t have any issue with me wearing and using Glass whilst we were doing things together, but I can see them wanting me to take it off if we were sitting down to dinner or at someone’s private residence. It could conceivably be seen as somewhat rude to wear it whilst deep in conversation, although I feel that might change over time as people realise it’s not something that’s being used 100% of the time. Things will start to get murky as Glass-like devices become smaller and less obtrusive, although current generations of battery technology already put Glass at the slimmest end of the spectrum possible, so I doubt they’ll be getting smaller any time soon.
Essentially I see these kinds of augmented reality devices as an organic progression from smartphones, extending our innate human abilities with those of the Internet. The groundwork has already been laid for a future that is ever more intertwined with technology and whilst this next transition poses its own set of challenges I have no doubt that we’ll rapidly adapt, just as we have done in the past. What these adaptations are and how they function in the real world will be an incredibly interesting thing to bear witness to and I, for one, can’t wait to see it.