Twitch.tv started out as the bastard child of Justin.tv, a streaming website that wanted to make it easy for anyone to stream content to a wider audience. Indeed for a long time Twitch felt like something of an afterthought, as divesting part of an already niche site into another niche didn’t seem like a sound business manoeuvre. However since then Twitch has vastly outgrown its parent company, becoming the default platform for content streamers around the world. The sponsorship model it has used for users’ channels has proven successful enough that thousands of people now make their living streaming games, giving Twitch a sustainable revenue stream. This hasn’t gone unnoticed of course and rumours are starting to circulate that Google will be looking to purchase them.
The agreement is reported to be a $1 billion all-cash deal, an amazing outcome for the founders and employees of Twitch. The acquisition makes sense for Google as they’ve been struggling to break into the streaming market for a long time now, with many of their attempts drawing only mild success. For the Twitch community though there don’t appear to be any direct benefits to speak of, especially considering that Google isn’t a company to let its acquisitions just do their own thing. Indeed, if this rumour has any truth to it, the way in which Google integrates Twitch into its larger platform will be the determining factor in whether the brand grows or ultimately fails.
At the top of the list of concerns for Twitch streamers is the potential integration between YouTube’s ContentID system and Twitch streams. Whilst many of the games that are popular on Twitch are readily endorsed by their creators (like League of Legends, DOTA 2, World of Warcraft, etc.) many others aren’t, something which has seen content producers and game developers butt heads multiple times over on YouTube. With the Twitch platform integrated into YouTube there’s potential for game creators to flag content they don’t want streamed, something which is at odds with the current Twitch community ethos. If not handled correctly it could see much of Twitch’s value evaporate after the transition across to YouTube as arguably most of that value comes from its wide community, not the technology or infrastructure powering it.
On the flip side though Twitch has been known to suffer from growing pains every time a popular event happens to grace its platform, something which Google could go a long way to fixing. Indeed that would likely be the only thing that Twitch has to gain from this: a global presence without the need to invest in costly additional infrastructure. If Google maintains Twitch as a separate, wholly owned brand then this could be of benefit to both of them as a more stable and available platform is likely to drive user numbers much quicker than Twitch has been able to do previously.
We’ll have to see if this rumour turns out to be true as, whilst I wouldn’t begrudge Twitch taking the cash, the question of what Google will do with them is what will determine their future. Whilst the combination of Twitch chat and YouTube comments sounds like the most unholy creation on the Internet since /b/ there is potential for both Twitch and Google to gain something from this. Whether that will be to the benefit of the community though remains to be seen.
Google isn’t a company that’s known for curtailing its ambitions, having grown from its humble beginnings as the best search engine on the web to the massive conglomerate it is today, encompassing everything from smartphones to robotic cars. In the past many of its ideas were the result of acquisitions, with Google making strategic purchases in order to acquire the talent required to dominate the space it was in. More recently however they’ve started developing their own moonshot style ideas through their Google X labs, a research and development section that has many of the hallmarks of previous idea incubators. Their most recent acquisition trend however seems to be a mix of both, with Google picking up a lot of talent to fuel a potential project that they’re being incredibly tight lipped about.
Now I’ll be honest, I really had no idea that Google was looking to enter the robotics industry until just recently when it was announced that they had acquired Boston Dynamics. For the uninitiated, Boston Dynamics is a robotics company that’s been behind some of the most impressive technology demonstrations in the industry, notably the BigDog robot which displayed a level of stability few robots have been able to match. Most recently they started shipping their Atlas platform to select universities for the DARPA Robotics Challenge program, which hopes to push the envelope of what robots are capable of achieving.
Boston Dynamics is the 8th acquisition that Google has made in the robotics space in the past 6 months, signalling that they’ve got some kind of project on the boil which needs an incredible amount of robotics expertise. The acquisitions seem to fall into a few categories with the primary focus being on humanoid robots. Companies in this area include the Japanese firm Schaft, who have created a robot similar to Atlas, and several more industrially focused robotics companies like Industrial Perception, Meka and Redwood Robotics. They also snapped up Bot & Dolly, the robotics company behind the incredible Box video, whose technology provided some of the special effects for the recent movie Gravity. Two design firms, Autofuss and Holomni, were also picked up in Google’s most recent spending spree.
At the head of all of this is Andy Rubin, who came to Google as the lead of Android. It’s likely that he’s been working on this ever since he left the Android division back in March this year, although it was only recently announced that he would be heading up the robotics projects. As to what the project actually is Google isn’t saying, however they have said that they consider it a moonshot, right alongside their other ideas like Project Loon, Google Glass and the self driving car. Whilst it seems clear that their intention with all these acquisitions is to create some kind of humanoid robot, what purpose that will serve remains to be seen, but that won’t stop me from speculating.
I think in the beginning they’ll use much of the expertise behind these systems to bolster the self driving car initiative as, whilst they’ve made an incredible amount of progress of late, I’m sure the added expertise in computer vision that these companies have will prove invaluable. From there the direction they’ll take is less clear as, whilst it’d be amazing for them to create the in-home robots of the future, it’s unlikely we’ll see anything of that project for at least a couple of years. Heck, just incorporating all these disparate companies into the Google fold is going to take the better part of a couple of months and it’s unlikely they’ll produce anything of note for some time after.
Whatever Google ends up doing with these companies we can be assured it’s going to be something revolutionary, especially now that they’ve added the incredible talent of Boston Dynamics to their pool. Hopefully this will allow them to deliver their self driving car technology sooner and then use that as a basis for delivering more robotics technology to end users. It will be a while before this starts to pay dividends for Google, however the benefits for both them and the world at large have the potential to be quite great and that should make us all incredibly excited.
The tech world was all abuzz about Phonebloks just over a month ago with many hailing it as the next logical step in the smartphone revolution. Whilst I liked the idea, since it spoke to the PC builder in me, it was hard to overlook the larger issues that plagued it, namely the numerous technical problems as well as the lack of buy in from component manufacturers. Since then I hadn’t heard anything further on it and figured that the Thunderclap campaign they ran had ended without too much fuss, but it appears that it might have caught the attention of people who could make the idea happen.
Those people are Motorola.
As it turns out Motorola has been working on their own version of the Phonebloks idea for quite some time now, over a year in fact. It’s called Project Ara and came about as a result of the work they did during Sticky, essentially trucking around the USA with unlocked handsets and 3D printers and holding a series of makeathons. The idea is apparently quite well developed, with a ton of technical work already done and some conceptual pieces shown above. Probably the most exciting thing for Phonebloks followers will be the fact that Motorola has since reached out to Dave Hakkens and is hoping to use his community in order to further the idea. By their powers combined it might just be possible for a modular handset to make its way into the real world.
Motorola’s handset division, if you recall, was acquired by Google some 2 years ago, mostly due to the wide portfolio of patents that Google wanted to get its hands on. At the same time it was thought that Google would then begin using Motorola for their first party Nexus handsets, however that never seemed to eventuate, with Google leaving them to do their own thing. However such a close tie with Google might provide Project Ara with the resources it needs to actually be successful as there’s really no other operating system they could use (and no, the Ubuntu and Firefox alternatives aren’t ready for prime time yet).
Of course the technical issues that were present in the Phonebloks idea don’t go away just because some engineers from Motorola are working on them. Whilst Motorola’s design is quite a bit less modular than what Phonebloks was purporting, it does look like it has a bit more connectivity available per module. Whether that will be enough to support the amount of bandwidth required for things like quad core ARM CPUs or high resolution cameras remains to be seen, however.
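To give a rough sense of scale, a quick back-of-envelope calculation shows why a camera module is the stress test here. The sensor size, bit depth and frame rate below are illustrative assumptions on my part, not anything Motorola has published:

```python
# Rough estimate of the raw bandwidth a modular camera interconnect
# would need to carry. All figures are illustrative assumptions,
# not Project Ara specifications.

def camera_bandwidth_mbps(megapixels, bits_per_pixel, frames_per_second):
    """Uncompressed sensor throughput in megabits per second."""
    bits_per_frame = megapixels * 1e6 * bits_per_pixel
    return bits_per_frame * frames_per_second / 1e6

# A hypothetical 13 MP sensor streaming 10-bit raw frames at 30 fps:
print(camera_bandwidth_mbps(13, 10, 30))  # 3900.0, i.e. ~3.9 Gbit/s
```

Even allowing for on-module compression that’s in the same class as USB 3.0’s 5 Gbit/s, which gives you an idea of just how capable those per-module connectors would need to be.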
So whilst the Phonebloks idea in its original form might never see the light of day, it does appear that at least one manufacturer is willing to put some effort into developing a modular handset. There are still a lot of challenges to overcome before the idea can be made viable but the fact that real engineers are working on it, with the backing of their company, gives it a lot of credence. I wouldn’t expect to see any working prototypes for a while to come, even with Motorola’s full backing, but in a year or so we might start to see some make their way to trade shows and I’ll be very interested to see their capabilities.
The public cloud is a great solution to a wide selection of problems, however there are times when its use is simply not appropriate. This is typical of organisations that have specific requirements around how their data is handled, usually due to data sovereignty or regulatory compliance. Additionally, whilst the public cloud is a great way to bolster your infrastructure on the cheap (although that’s debatable when you start ramping up your VM sizes), it doesn’t take advantage of the investments in infrastructure that you’ve already made. For large, established organisations this is not insignificant and is why many of them have been reluctant to transition fully to public cloud based services. This is why I believe the future of the cloud will be paved with hybrid solutions, something I’ve been saying for years now.
Microsoft has finally shown that they understand this with the release of the Windows Azure Pack for Server 2012 R2. Sure, there were the beginnings of it with SCVMM 2012 allowing you to add in your Azure account and move VMs up there, but that kind of thing has been available for ages through hosting partners. The Azure Pack on the other hand brings features that were hidden behind the public cloud wall down to the private level, allowing you to make full use of them without having to rely on Azure. If I’m honest I thought that Microsoft would probably be the only ones to try this given their presence in both the cloud and enterprise spaces, but it seems other companies have begun to notice the hybrid trend.
Google has been working with the engineers at Red Hat to produce the Test Compatibility Kit for Google App Engine. Essentially this kit provides the framework for verifying the API level functionality of a private Google App Engine implementation, something which is achievable through an application called CapeDwarf. The vast majority of the App Engine functionality is contained within that application, enough so that current developers on the platform could conceivably run their code on on-premises infrastructure if they so wished. There doesn’t appear to be a bridge between the two currently, like there is with Azure, as CapeDwarf utilizes its own administrative console.
They’ve done the right thing by partnering with Red Hat as otherwise they’d lack the penetration in the enterprise market to make this a worthwhile endeavour. I don’t know how much presence JBoss/OpenShift has though, so it might be less about using current infrastructure and more about getting Google’s platform into more places than it currently is. I can’t seem to find any solid¹ market share figures to see how Google currently rates compared to the other primary providers but I’d hazard a guess they’re similar to Azure, i.e. far behind Rackspace and Amazon. The argument could be made that such software would hurt their public cloud product but I feel these kinds of solutions are the foot in the door needed to get organisations thinking about using these services.
Whilst my preferred cloud is still Azure I’m also a firm believer that the more options we have to realise the hybrid dream the better. We’re still a long way from having truly portable applications that can move freely between private and public platforms but the roots are starting to take hold. Given the rapid pace of IT innovation I’m confident that the next couple of years will see the hybrid dream fully realised and then I’ll finally be able to stop pining for it.
¹This article suggests that Microsoft has 20% of the market which, since Microsoft has raked in $1 billion, would peg the total market at some $5 billion, way out of line with what Gartner says. If you know of some cloud platform figures I’d like to see them as, apart from AWS being number 1, I can’t find much else.
Just outside the Googleplex in Mountain View, California there’s a small facility that was the birthplace of many of the revolutionary technologies that Google is known for today. It’s called Google[x] and is akin to the giant research and development labs of corporations in ages past, where no idea is off limits. It’s spawned some of the most amazing projects that Google has made public, including the Driverless Car and Project Glass. These are only a handful of the projects currently under development at this lab, however, with the vast majority of them remaining secret until they’re ready for release into the world. One more of their projects has just reached that milestone and it’s called Project Loon.
The idea is incredibly simple: provide Internet access to everyone regardless of their location. How they’re going about it, however, is the genius part: they’re going to use a system of high altitude balloons and base relay stations, with each balloon able to cover a 40KM area. For countries that don’t have the resources to lay the cables required to provide Internet access this is a really easy solution for covering large areas and it even makes providing access possible in regions that would otherwise be inaccessible.
What’s really amazing however is how they’re going about solving some of the issues you run into when you’re using balloons as your transportation system.
The height they fly at is around the bottom end of the range for your typical weather balloon (they can be found anywhere from 18KM all the way up to 38KM) and is about half the height from which Felix Baumgartner made his high altitude jump last year. I wasn’t aware that different layers of the stratosphere have different wind directions and making use of them to keep the balloons in position is just an awesome piece of engineering. Of course this would all be for naught if the Internet service they delivered wasn’t anything above what’s available now with satellite broadband, but it seems they’ve got that covered too.
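To illustrate the steering idea: with no propulsion of its own, a balloon can only choose which wind layer to ride, so the controller picks the altitude whose wind carries it closest to where it needs to be. The altitudes and wind vectors below are numbers I’ve invented for illustration, not Loon data:

```python
# Toy sketch of altitude-based steering: choose the stratospheric
# layer whose wind vector most reduces the distance to the target
# position after one time step. All layer winds are invented.

def best_layer(position, target, layer_winds):
    """Return the altitude (km) whose wind best moves the balloon
    toward target. position/target/winds are (east, north) pairs."""
    def dist_after(wind):
        x = position[0] + wind[0]
        y = position[1] + wind[1]
        return ((target[0] - x) ** 2 + (target[1] - y) ** 2) ** 0.5
    return min(layer_winds, key=lambda alt: dist_after(layer_winds[alt]))

# Hypothetical wind layers: altitude (km) -> wind vector (km/h).
winds = {18: (30, 0), 20: (-10, 5), 22: (0, -25)}
# Balloon at the origin, coverage target 50 km to the east:
print(best_layer((0, 0), (50, 0), winds))  # 18 (ride the eastward wind)
```

The real control problem is far harder (winds shift, altitude changes cost gas and time) but the core trick is exactly this kind of layer selection.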
The Loon stations use the 2.4GHz and 5.8GHz frequencies for communications with ground receivers and base stations and are capable of delivering speeds comparable to 3G (~2Mbps or so). Now if I’m honest the choice to use these public signal spaces seems like a bit of a gamble as, whilst they’re free to use, they’re also already quite congested. I guess this is less of a problem in the places Loon is primarily aimed at, namely regional and remote areas, but even those places have microwaves and personal WiFi networks. It’s not an insurmountable problem of course, and I’m sure the way-smarter-than-me people at Google[x] have already thought of it, it’s just an issue for anything that tries to use that same frequency space.
I might never end up being a user of this particular project but as someone who lived on the end of a 56K line for the majority of his life I can tell you how exciting this is for people living outside broadband enabled areas. According to Google it’s launching this month in New Zealand to a bunch of pilot users so it won’t be long before we see how this technology works in the real world. From there I’m keen to see where they take it next as there’s a lot of developing countries where this technology could make some really big waves.
My introduction to RSS readers came around the same time I started to blog daily as, after a little while, I found myself running dry on general topics to cover and needed to start finding other material for inspiration. It’s all well and good to have a bunch of bookmarked sites to trawl through but visiting each one is a laborious task, one that I wasn’t keen to do every day just to crank out a post. Thus I discovered the joys of RSS feeds, which allowed me to distill dozens of sites down to a single page, dramatically cutting down the effort required to trawl through them all. After cycling through many, many desktop based readers I, like many others, eventually settled on Google Reader, and all was well.
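Under the hood the aggregation trick is straightforward: each site publishes an XML feed and the reader simply pulls the items out of every feed it’s subscribed to. A minimal sketch, using an inline example feed rather than a real HTTP fetch:

```python
# Minimal sketch of what an RSS aggregator does: parse a feed's XML
# and pull out each item's title and link, so dozens of sites can be
# collapsed into one list. FEED is an inline stand-in for a real feed.

import xml.etree.ElementTree as ET

FEED = """<rss version="2.0"><channel>
  <title>Example Blog</title>
  <item><title>Post one</title><link>http://example.com/1</link></item>
  <item><title>Post two</title><link>http://example.com/2</link></item>
</channel></rss>"""

def items(feed_xml):
    """Return (title, link) pairs for every item in an RSS 2.0 feed."""
    root = ET.fromstring(feed_xml)
    return [(i.findtext("title"), i.findtext("link"))
            for i in root.iter("item")]

print(items(FEED))
# [('Post one', 'http://example.com/1'), ('Post two', 'http://example.com/2')]
```

A real reader layers fetching, de-duplication and read/unread state on top of this, but the parsing core is about that simple.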
That was until last week when Google announced that Reader was going away on July 1st this year.
Google has been doing a lot of slimming down recently as part of its larger strategy to focus more strongly on its core business. This has led to many useful, albeit niche, products being shut down over the course of the past couple of years. Whilst the vast majority of these closures were expected, there have been quite a few notable cases where they’ve closed down things that still had a very active user base whilst other things (like Orkut, yeah remember that?) which you’d figure would be closed down weren’t. If there’s one service that no one expected them to close down it would be Reader, but apparently they’ve decided to do so due to dwindling user numbers.
Whilst I won’t argue that RSS is the de facto standard for content consumption these days, it’s still proven to be a solid performer for anyone who provides it and Google Reader was the RSS reader to use. Even if you didn’t use the reader directly there are hundreds of other products which utilize Google Reader’s back end to power their interfaces and, whilst they will likely continue on in spite of Reader going away, it’s highly unlikely that any of them will have the same penetration that Reader did. Even from my meagre RSS stats it’s easy to tell that Reader has at least 50% of the market, if not more.
If you doubt just how popular Reader was, consider that Feedly, shown above syncing with my feeds, managed to gain a whopping 500,000 users in the short time since Google made the announcement. They were actually so popular that right after the announcement their site was down for a good couple of hours and their applications quickly became the number 1 free apps on the iOS and Android stores. For what it’s worth it’s a very well polished application, especially if you like visual RSS readers, however there are a few quirks (like it not being in strict chronological order) which stopped me from making the total switch immediately. Still, the guys behind it seem dedicated to improving it and filling the void left behind by replicating the Reader API (and running it on Google’s AppEngine, for the lulz).
From a business point of view it’s easy to understand why Google is shutting down services like this as they’re a drain on resources that could be better used to further their core business. However it was usually these niche services that brought a lot of customers to Google in the first place and by removing them they burn a lot of goodwill that they generated by hosting them. I also can’t imagine that the engineers behind these products, many of which were products of Google’s famous 20% time, feel great about seeing them go away either. For something as big as Reader I would’ve expected them to try to innovate it rather than abandon it completely as looking over the alternatives there’s still a lot of interesting things that can be done in the world of RSS, especially with such a dedicated user base.
Unfortunately I don’t expect Google to do an about face on this one as there have been public outcries before (iGoogle, anyone?) but nothing seems to dissuade them once their mind has been made up. It’s a real shame as I feel there’s still a lot of value in the Reader platform, even if it pales in comparison to Google’s core business products. Whilst the alternatives might not be 100% there yet I have no doubt they’ll get there in short order and, if the current trend is anything to go by, surpass Reader in terms of features and functionality.
We’re on the cusp of a new technological era thanks in no small part to the ubiquity of smartphones. They’ve already begun to augment us in ways we didn’t expect, usurping industries that failed to adapt and creating a fledgling industry that’s already worth billions of dollars. The really interesting part, for me at least, is the breaking down of the barriers between us and said technology as, whilst it’s all well and good that we can tap, swipe and type our way through things, it does feel like there should be a better solution. Whilst we’re still a ways off from being able to control things with our brains (although there’s a lot of promising research in this direction) there’s a new product available that I think is going to be the bridge between our current interface standards and more direct control methods.
Shown above is a product called the MYO from Thalmic Labs, a Y Combinator backed company that’s just started taking pre-orders for it. The concept for the device is simple: once you slip this band over your arm it can track the electrical activity in your muscles, which it can then send to another device via Bluetooth. This allows it to track all sorts of gestures and, since it doesn’t rely on a camera, it’ll work in far more situations than devices that do. It’s also incredibly sensitive, being able to pick up movement right down to your fingers, something which I wasn’t sure would be possible based on other similar prototype devices I had seen in the past. Needless to say I was very intrigued, instantly seeing it as a perfect companion to Google’s Glass.
All the demonstration videos for Google Glass show it being commanded by a pretty powerful voice interface with some functions (like basic menu navigation) handled through eye tracking. As a technology demo it’s pretty impressive but I’m not the biggest fan of voice interfaces, especially in a public space. I then started thinking about alternative input methods and, whilst something like a laser keyboard works in certain situations, I wanted something that would be as discreet as typing on a smartphone but a bit more elegant than carting around that (admittedly small) device. The MYO could provide the answer to this.
Now the great thing about the MYO is that they’re opening it up to developers from the get go, allowing people like me to create all sorts of interesting applications for the device. For me there’s really only a single killer application required to justify the entry cost: a simple virtual keyboard that uses your muscles. I’ve read about similar things being in development for a while now but nothing seems to have made it past the high concept stage. MYO on the other hand has the real potential to bring this to fruition within the next year or two and whilst I probably won’t have the required augmented reality device to take advantage of it I’ll probably end up with one of these devices anyway, just for experimentation.
With this missing piece of the puzzle I feel like Glass has gone from being a technical curiosity to a device that I could see myself using routinely. The 1.0 MYO might be a little cumbersome to keep around but I’m sure further iterations of it will make it nigh on unnoticeable. This is just my narrow view of the technology as well and I’m sure there’s going to be hundreds of other applications where a MYO device will unlock some seriously awesome potential. I’m very excited about this and can’t wait to get my hands on one of them.
My group of friends is undeniably tech-oriented but that doesn’t mean all of us share the same views on how technology should be used, especially in social situations. If you were to see us out at a restaurant it’s pretty much guaranteed that at least one of us would be on our phone, probably Googling an answer to something or sifting through our social networking platform of choice. For most of us this is par for the course, all of us being members of Gen Y, however some of my friends absolutely abhor the intrusion that smartphones have made on normal social situations and, if the direction of technology is anything to go by, that intrusion is only going to get worse, not better.
Late last year I came across the Memento Kickstarter project, a novel device that takes 1 picture every 30 seconds and even tags it with your GPS location. It’s designed to be worn all the time so that you end up with a visual log of your life, something that’s obviously of interest to a lot of people as they ended up getting funded 11 times over. Indeed just as a device it’s pretty intriguing and I had caught them early enough that I could have got one at a hefty discount. However something that I didn’t expect to happen changed my mind on it completely: my technically inclined friends’ reactions to this device.
Upon linking my friends to the Kickstarter page I wasn’t met with the usual reactions. Now we’re not rabid privacy advocates, indeed many of us engage in multiple social networks and lead relatively open online lives, but the Memento was met with a great deal of concern over its presence in everyone’s private lives. It wasn’t a universal reaction but it was enough to give me pause about the idea and in the end I didn’t back it because of it. With Google Glass gearing up to increase its presence in the world these same privacy questions are starting to crop up again and the social implications of Google’s flagship augmented reality device are starting to become apparent.
Google Glass is the next step up from Memento as, whilst it has the same capability to take photos (without the express knowledge or consent of the people in them), its ability to run applications and communicate directly with the Internet poses even more privacy issues. Sure, the capability isn’t too much different from what’s available now with your garden variety smartphone, however it is ever-present, attached to the side of someone’s head and can be commanded at the will of the user. That small step of taking your phone out of your pocket is enough of a social cue to let people know what your intentions are and let them make their concerns known well beforehand.
What I feel is really happening here is that societal norms are being challenged by technology. Realistically such devices are simply better versions of capabilities we have natively as humans (i.e. imaging devices with attached storage) but their potential for disseminating their contents is much greater. Just as social norms developed around ubiquitous smartphones, so too must they develop around the use of augmented reality devices like Google Glass. What these norms will end up being however is something we can’t really predict until such devices reach critical mass which, from what I can tell, is at least a couple of years off in the future, possibly even longer.
For my close knit circle of tech friends however I can predict a few things. Most of them wouldn’t have any issue with me wearing and using it whilst we were doing things together but I can see them wanting me to take it off if we were sitting down to dinner or at someone’s private residence. It could conceivably be seen as somewhat rude to wear it if you’re deep in conversation, although I feel that might change over time as people realise it’s not something that’s being used 100% of the time. Things will start to get murky as Glass-like devices become smaller and less obtrusive, although the current generation of battery technology already puts Glass at the slimmest end of the possible spectrum so I doubt they’ll be getting smaller any time soon.
Essentially I see these kinds of augmented reality devices as an organic progression of smartphones, extending our innate human abilities with those of the Internet. The groundwork has already been laid for a future that is ever-increasingly intertwined with technology and, whilst this next transition poses its own set of challenges, I have no doubt that we’ll rapidly adapt, just like we have done in the past. What these adaptations are and how they function in the real world will be an incredibly interesting thing to bear witness to and I, for one, can’t wait to see it.
I’ve been using my Nokia Lumia 900 for some time now and, whilst it’s a solid handset, Windows Phone 7 is starting to feel pretty old hat at this point, especially with its Windows Phone 8 successor out in the Lumia 920. However I had made the decision to go back to Android due to the application ecosystem there. Don’t get me wrong, for most people Windows Phone has pretty much everything you need, but for someone like me who revels in doing all sorts of esoteric things with his phone (like replicating iCloud levels of functionality, but better) Android is just the platform for me. With that in mind I had been searching for a handset that would suit me and I, like many others, found it in the Nexus 4.
Spec-wise it’s a pretty comparable phone to everything else out there with the only glaring technical fault being the lack of a proper 4G modem. Still, its big screen, highly capable processor and, above all, stock Android experience with updates that come direct from Google make up for that in spades. The price too is pretty amazing, as I paid well over 50% more for my Galaxy S2 back in the day. So it was that many months ago I resigned myself to waiting for the eventual release of the Nexus 4 so I could make the transition back to the Android platform and all the goodness that would come along with it.
Unfortunately for me the phone went on sale at some ludicrous time for us Australians, so I wasn’t awake for the initial run of them and missed my chance at getting in on the first batch. I wasn’t particularly worried though, as they had a mailing list I could join for when stock became available again and I figured that after the initial rush it wouldn’t be too hard to get my hands on one. However the stock they got sold out so quickly that by the time I checked my email and found they were available they were already gone, leaving me without the opportunity to purchase one yet again. Thinking that there was no way Google would be out of stock for long (they never were for previous Nexus phones) I resigned myself to waiting until it became available again, or at least until a pre-order system came up.
Despite stories I’ve heard of handsets being available for brief windows, and tales of people being able to order one at various times, I have not once seen a screen that differs from the one shown above. Nearly every day for the past 2 months I’ve been checking the Nexus site in the hope that they’d become available, but not once have I had the chance to purchase one. Now Google and LG have been pointing fingers at each other over who is to blame for this, but in the end that doesn’t matter because both of them are losing more and more customers the longer these supply issues continue. It doesn’t help when they announce that AT&T will start stocking them this month, which has to mean a good portion of inventory was diverted from web sales to go to them instead. That doesn’t build any goodwill for Google in my mind, especially when I’ve been wanting to give them my money for well over 2 months now.
And with that in mind I think I’m done waiting for it.
For the price the Nexus 4 looked like a great device, but time hasn’t made the specifications look any better, especially considering the bevy of super powerful smartphones that debuted at CES not too long ago. I, along with many other potential Nexus 4 buyers, would have gladly snapped up one of their handsets long ago had it been available to us, and the next generation wouldn’t have got much of a look in. However due to the major delays I no longer consider the Nexus 4 viable when I might only be a month or two away from owning something like the ZTE Grand S, which boasts better specifications all round and is probably the thinnest handset you’ll find. Sure, I’ll lose the completely stock experience and direct updates from Google, but after waiting for so long the damage has been done and I need to find myself a better suitor.
You don’t have to read far on this blog to know that my relationship with Apple swings from wild amounts of hate to begrudging acceptance that they do make some impressive products. Indeed I’ve been called everything from an Apple fanboy to an Apple hater based on the opinions I’ve put forth on here, so I think that means I’m doing the right thing when it comes to being a technology critic. Of course that means taking them, and their fans, to task whenever they start getting out of line, and it appears the latest instalment of Apple fans going wild comes courtesy of the iOS 6 Maps application, which I’ve abstained from covering here previously.
For the uninitiated, Apple decided to give Google Maps the boot as the default mapping application on their handsets and tablets. The move was made primarily because their negotiated agreement with Google was scheduled to come to an end soon and Apple, for whatever reasons that I won’t bother speculating about, decided that instead of renewing it they’d go ahead and build their own maps application, including the massive back end cartography database. Now they’re no stranger to building a maps application, indeed whilst it used to say “Google Maps” it was in fact an Apple developed application that used the Google APIs, but their replacement was an unmitigated disaster. In fact it was so bad that Apple even had Tim Cook pen one of those “we’re admitting there’s a problem without admitting it” open letters pointing to the alternatives that were available.
I held off on commenting on the whole issue because, since I don’t use an iPhone any more, I didn’t want to start trashing the app without knowing what the reality was. Plus I’m not one to bandwagon (unless I’m really struggling for good material) and it felt like everything that needed to be said had been said. I almost caved when I started reading apologist garbage like this from MG Siegler, but others had done that work for me so reiterating those points wouldn’t have provided much value. However one bit of unabashed fanboyism caught my eye recently and it really needs to be taken to task:
Situation: Apple cannot get Google to update its maps app on iOS. It was ok, but Google refused to update it to include turn-by-turn directions or voice guidance even though Android had these features forever. Apple says, “Enough” and boots Gmaps from iOS and replaces it with an admittedly half-baked replacement. The world groans. Apple has egg on its face. Google steps up it’s game and rolls out a new, free new maps app in iOS today that is totally amazing, I’m sure to stick it in Apple’s face… Ooops
Bottom line: Apple took one for the team (ate some shit) and fooled Google into doing exactly what Apple has been asking for years. Users win.
Time to get some facts on the table here. For starters, way back in the day when Apple first wanted to bring maps to their platform they approached Google to do it, however the terms that Google wanted (better access to user data was their primary concern) meant that a Google-developed in house app was never to be. They could agree on good terms for the API however, and so Apple developed their own application on the public Google API. This meant, of course, that they were limited to the functionality provided by said API, which doesn’t include the fun things like turn by turn navigation (voice commands, however, were on Apple’s head to implement).
Instead of capitulating Apple decided to build their own replacement product, which isn’t completely surprising given that they’ve done this kind of thing before with services like iTunes and the App Store. Claiming that it was done to fool Google into developing a better app, however, is total bollocks; if that had been the goal they wouldn’t have spent so much money on in-sourcing so much of the infrastructure. Indeed the argument can be made that they could’ve bought or licensed one of the top map apps for a fraction of the cost to accomplish the same task. So no, Apple didn’t do it to get Google to develop an application for them, they did it because they wanted to bring more applications into their ecosystem.
Google’s revamped maps app proved to be extremely popular, rocketing to the number 1 spot for free applications after just 7 hours of being available. I (in a slightly rhetorical/trolling way) put the feelers out on Twitter to see what Apple fans would have to say about that particular feat and was surprised when I got a reply within minutes. Whilst their arguments didn’t hold up to mild scrutiny (and I didn’t change their opinion on the matter) I was honestly surprised at just how defensive some people can be of a product that even the company that developed it has admitted was bad, especially when the replacement has been, by all accounts, pretty spectacular.
Apple’s trademark secrecy about its plans and intentions is what feeds these kinds of wild theories about the overall strategy for their products, and their highly dedicated fan base too often falls prey to those theories without doing some routine fact checking. I don’t blame them in particular, however; it’s hard to see fault in a company you admire so much, but this kind of wide-eyed speculation doesn’t do them any good. Indeed give it a couple of weeks and no one will care that there’s yet another maps application on iOS, and this whole thing will get filed alongside antennagate (remember that?).