The tech world was abuzz about Phonebloks just over a month ago, with many hailing it as the next logical step in the smartphone revolution. Whilst I liked the idea, since it spoke to the PC builder in me, it was hard to overlook the larger issues that plagued it, namely the numerous technical problems as well as the lack of buy-in from component manufacturers. Since then I hadn’t heard anything further on it and figured that their Thunderclap campaign had ended without too much fuss, but it appears the idea might have caught the attention of people who could make it happen.
Those people are Motorola.
As it turns out Motorola has been working on their own version of the Phonebloks idea for quite some time now, over a year in fact. It’s called Project Ara and came about as a result of the work they did during Sticky, essentially trucking around the USA with unlocked handsets and 3D printers and holding a series of makeathons. The idea is apparently quite well developed, with a ton of technical work already done and some conceptual pieces shown above. Probably the most exciting thing for Phonebloks followers will be the fact that Motorola has since reached out to Dave Hakkens and is hoping to use his community to further the idea. By their powers combined it might just be possible for a modular handset to make its way into the real world.
Motorola’s handset division, if you recall, was acquired by Google some 2 years ago, mostly due to the wide portfolio of patents that Google wanted to get its hands on. At the same time it was thought that Google would begin using Motorola for their first party Nexus handsets, however that part doesn’t seem to have eventuated, with Google leaving them to do their own thing. However such a close tie with Google might provide Project Ara the resources it needs to actually be successful, as there’s really no other operating system they could use (and no, the Ubuntu and Firefox alternatives aren’t ready for prime time yet).
Of course the technical issues that were present in the Phonebloks idea don’t go away just because some engineers from Motorola are working on them. Whilst Motorola’s design is quite a bit less modular than what Phonebloks was proposing, it does look like it has a bit more connectivity available per module. Whether that will be enough to support the bandwidth required for things like quad core ARM CPUs or high resolution cameras remains to be seen, however.
So whilst the Phonebloks idea in its original form might never see the light of day, it does appear that at least one manufacturer is willing to put some effort into developing a modular handset. There are still a lot of challenges to overcome before the idea can be made viable, but the fact that real engineers are working on it with the backing of their company gives a lot of credence to it. I wouldn’t expect to see any working prototypes for a while to come, even with Motorola’s full backing, but in a year or so we might start to see some make their way to trade shows and I’ll be very interested to see their capabilities.
The public cloud is a great solution to a wide selection of problems, however there are times when its use is simply not appropriate. This is typical of organisations that have specific requirements around how their data is handled, usually due to data sovereignty or regulatory compliance. Whilst the public cloud is a great way to bolster your infrastructure on the cheap (although that’s debatable when you start ramping up your VM sizes), it doesn’t take advantage of the investments in infrastructure that you’ve already made. For large, established organisations these are not insignificant, which is why many of them have been reluctant to transition fully to public cloud based services. This is why I believe the future of the cloud will be paved with hybrid solutions, something I’ve been saying for years now.
Microsoft has finally shown that they’ve understood this with the release of the Windows Azure Pack for Server 2012 R2. Sure, there were the beginnings of it with SCVMM 2012 allowing you to add in your Azure account and move VMs up there, but that kind of thing has been available for ages through hosting partners. The Azure Pack, on the other hand, brings features that were hidden behind the public cloud wall down to the private level, allowing you to make full use of them without having to rely on Azure. If I’m honest I thought that Microsoft would probably be the only ones to try this given their presence in both the cloud and enterprise space, but it seems other companies have begun to notice the hybrid trend.
Google has been working with the engineers at Red Hat to produce the Test Compatibility Kit for Google App Engine. Essentially this kit provides the framework for verifying the API level functionality of a private Google App Engine implementation, something which is achievable through an application called CapeDwarf. The vast majority of the App Engine functionality is contained within that application, enough so that current developers on the platform could conceivably run their code on on-premises infrastructure if they so wished. There doesn’t appear to be a bridge between the two currently, like there is with Azure, as CapeDwarf utilizes its own administrative console.
They’ve done the right thing by partnering with Red Hat as otherwise they’d lack the penetration in the enterprise market to make this a worthwhile endeavour. I don’t know how much presence JBoss/OpenShift has though, so it might be less about using current infrastructure and more about getting Google’s platform into more places than it currently is. I can’t seem to find any solid¹ market share figures to see how Google currently rates compared to the other primary providers, but I’d hazard a guess they’re similar to Azure, i.e. far behind Rackspace and Amazon. The argument could be made that such software would hurt their public cloud product, but I feel these kinds of solutions are the foot in the door needed to get organisations thinking about using these services.
Whilst my preferred cloud is still Azure I’m a firm believer that the more options we have to realise the hybrid dream the better. We’re still a long way from having truly portable applications that can move freely between private and public platforms, but the roots are starting to take hold. Given the rapid pace of IT innovation I’m confident that the next couple of years will see the hybrid dream fully realised and then I’ll finally be able to stop pining for it.
¹This article suggests that Microsoft has 20% of the market which, since Microsoft has raked in $1 billion, would peg the total market at some $5 billion, which is way out of line with what Gartner says. If you know of some cloud platform figures I’d like to see them, as apart from AWS being number 1 I can’t find much else.
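The footnote’s back-of-the-envelope maths works like this (a sketch only; both inputs are the article’s claimed figures, not verified data):

```python
# Back-of-the-envelope check of the footnote's market sizing.
# Both inputs are figures claimed by the article in question, not verified data.
microsoft_cloud_revenue = 1_000_000_000  # ~$1 billion, as reported
claimed_market_share = 0.20              # the article's claimed 20% share

# If $1 billion represents 20% of the pie, the whole pie is revenue / share.
implied_total_market = microsoft_cloud_revenue / claimed_market_share
print(f"Implied total market: ${implied_total_market / 1e9:.0f} billion")
```

That $5 billion figure is what clashes with Gartner’s much larger estimates, which is why one of the two inputs is likely wrong.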
Just outside the Googleplex in Mountain View, California there’s a small facility that was the birthplace of many of the revolutionary technologies that Google is known for today. It’s called Google [x] and is akin to the giant research and development labs of corporations in ages past, where no idea is off limits. It’s spawned some of the most amazing projects that Google has made public, including the Driverless Car and Project Glass. These are only a handful of the projects currently under development at this lab, however, with the vast majority of them remaining secret until they’re ready for release into the world. One more of their projects has just reached that milestone and it’s called Project Loon.
The idea is incredibly simple: provide Internet access to everyone regardless of their location. How they’re going about it however is the genius part: they’re going to use a system of high altitude balloons and base relay stations, with each balloon being able to cover an area some 40KM across. For countries that don’t have the resources to lay the cables required to provide Internet this is a really easy solution for covering large areas, and it even makes providing Internet possible in regions that would otherwise be inaccessible.
What’s really amazing however is how they’re going about solving some of the issues you run into when you’re using balloons as your transportation system:
The height they fly at is around the bottom end of the range for your typical weather balloon (they can be found from 18KM all the way up to 38KM) and is about half the height from which Felix Baumgartner made his high altitude jump last year. I wasn’t aware that different layers of the stratosphere had different wind directions, and making use of them to keep the balloons in position is just an awesome piece of engineering. Of course this would all be for naught if the Internet service they delivered wasn’t anything above what’s available now with satellite broadband, but it seems they’ve got that covered too.
The Loon stations use the 2.4GHz and 5.8GHz frequencies for communications with ground receivers and base stations and are capable of delivering speeds comparable to 3G (~2Mbps or so). Now if I’m honest the choice to use these public signal spaces seems like a bit of a gamble as, whilst they’re free to use, they’re also signal spaces that are already quite congested. I guess this is less of a problem in the places Loon is primarily aimed at, namely regional and remote areas, but even those places have microwaves and personal wifi networks. It’s not an insurmountable problem of course, and I’m sure the way-smarter-than-me people at Google [x] have already thought of it; it’s just an issue for anything that tries to use that same frequency space.
I might never end up being a user of this particular project but as someone who lived on the end of a 56K line for the majority of his life I can tell you how exciting this is for people living outside broadband enabled areas. According to Google it’s launching this month in New Zealand to a bunch of pilot users so it won’t be long before we see how this technology works in the real world. From there I’m keen to see where they take it next as there’s a lot of developing countries where this technology could make some really big waves.
My introduction to RSS readers came around the same time I started to blog daily, as after a little while I found myself running dry on general topics to cover and needed to start finding other material for inspiration. It’s all well and good to have a bunch of bookmarked sites to trawl through, but visiting each one is a very laborious task, one that I wasn’t keen to do every day just to crank out a post. Thus I found the joys of RSS feeds, allowing me to distill dozens of sites down to a single page and dramatically cutting down the effort required to trawl through them all. After cycling through many, many desktop based readers I, like many others, eventually settled on Google Reader, and all was well since then.
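Part of what makes RSS readers so effective is how mechanically simple aggregation is: each site publishes a small XML document and the reader just fetches and merges them. A minimal sketch of the parsing step using Python’s standard library (the feed content below is invented for illustration, not from any real site):

```python
import xml.etree.ElementTree as ET

# A tiny hand-written RSS 2.0 feed. A real reader would fetch one of these
# over HTTP from every subscribed site and merge the resulting item lists.
FEED_XML = """<rss version="2.0"><channel>
  <title>Example Blog</title>
  <item><title>First post</title><link>http://example.com/1</link></item>
  <item><title>Second post</title><link>http://example.com/2</link></item>
</channel></rss>"""

def parse_feed(xml_text):
    """Return (feed_title, [(item_title, link), ...]) from an RSS 2.0 document."""
    channel = ET.fromstring(xml_text).find("channel")
    items = [(item.findtext("title"), item.findtext("link"))
             for item in channel.findall("item")]
    return channel.findtext("title"), items

title, items = parse_feed(FEED_XML)
print(title, len(items))  # Example Blog 2
```

A full reader adds fetching, deduplication, read-state tracking and sorting by publication date on top of this, but the core of "distilling dozens of sites down to a single page" really is this small.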
That was until last week when Google announced that Reader was going away on July 1st this year.
Google has been doing a lot of slimming down recently as part of its larger strategy to focus more strongly on its core business. This has led to many useful, albeit niche, products being shut down over the course of the past couple of years. Whilst the vast majority of these were expected, there have been quite a few notable cases where they’ve closed down things that still had a very active user base whilst other things (like Orkut, yeah remember that?) which you’d figure would be closed down weren’t. If there’s one service that no one expected them to close down it would be Reader, but apparently they’ve decided to do so due to dwindling user numbers.
Whilst I won’t argue that RSS is the de facto standard for content consumption these days, it’s still proven to be a solid performer for anyone who provides it and Google Reader was the RSS reader to use. Even if you didn’t use the reader directly, there are hundreds of other products which utilize Google Reader’s back end to power their interfaces, and whilst they will likely continue on in spite of Reader going away it’s highly unlikely that any of them will achieve the same penetration that Reader did. Even from my meagre RSS stats it’s easy to tell that Reader has at least 50% of the market, if not more.
If you doubt just how popular Reader was, consider that Feedly, shown above syncing with my feeds, managed to gain a whopping 500,000 users in the short time since Google made the announcement. They were so popular that right after the announcement their site was down for a good couple of hours, and their applications on iOS and Android quickly became the number 1 free apps on their respective stores. For what it’s worth it’s a very well polished application, especially if you like visual RSS readers, however there are a few quirks (like it not being in strict chronological order) which stopped me from making the total switch immediately. Still, the guys behind it seem dedicated to improving it and filling the void Reader leaves behind by replicating the Reader API (and running it on Google’s AppEngine, for the lulz).
From a business point of view it’s easy to understand why Google shuts down services like this, as they’re a drain on resources that could be better used to further their core business. However it was often these niche services that brought a lot of customers to Google in the first place, and by removing them they burn a lot of the goodwill that hosting them generated. I also can’t imagine that the engineers behind these products, many of which were born of Google’s famous 20% time, feel great about seeing them go away either. For something as big as Reader I would’ve expected them to try to innovate on it rather than abandon it completely, as looking over the alternatives there are still a lot of interesting things that can be done in the world of RSS, especially with such a dedicated user base.
Unfortunately I don’t expect Google to do an about face on this one as there have been public outcries before (iGoogle, anyone?) but nothing seems to dissuade them once their mind has been made up. It’s a real shame as I feel there’s still a lot of value in the Reader platform, even if it pales in comparison to Google’s core business products. Whilst the alternatives might not be 100% there yet I have no doubt they’ll get there in short order and, if the current trend is anything to go by, surpass Reader in terms of features and functionality.
We’re on the cusp of a new technological era, thanks in no small part to the ubiquity of smart phones. They’ve already begun to augment us in ways we didn’t expect, upend industries that failed to adapt and create a fledgling industry that’s already worth billions of dollars. The really interesting part, for me at least, is the breaking down of the barriers between us and said technology, as whilst it’s all well and good that we can tap, swipe and type our way through things it does feel like there should be a better solution. Whilst we’re still a ways off from being able to control things with our brains (although there’s a lot of promising research in this direction) there’s a new product available that I think is going to be the bridge between our current interface standards and more direct control methods.
Shown above is a product called the MYO from Thalmic Labs, a Y-Combinator backed company that’s just started taking pre-orders for it. The concept for the device is simple: once you slip this band over your arm it can track the electrical activity in your muscles, which it then sends back to another device via Bluetooth. This allows it to track all sorts of gestures, and since it doesn’t rely on a camera it’ll work in far more situations than devices that do. It’s also incredibly sensitive, being able to pick up movement right down to your fingers, something which I wasn’t sure would be possible based on other similar prototype devices I had seen in the past. Needless to say I was very intrigued when I saw it, as I instantly saw it as a perfect companion to Google’s Glass.
All the demonstration videos for Google Glass show it being commanded by a pretty powerful voice interface, with some functions (like basic menu navigation) handled through eye tracking. As a technology demo it’s pretty impressive, but I’m not the biggest fan of voice interfaces, especially if I’m in a public space. I then started thinking about alternative input methods, and whilst something like a laser keyboard works in certain situations I wanted something that would be as discreet as typing on a smartphone but a bit more elegant than carting around that (admittedly small) device. The MYO could provide the answer to this.
Now the great thing about the MYO is that they’re opening it up to developers from the get go, allowing people like me to create all sorts of interesting applications for the device. For me there’s really only a single killer application required to justify the entry cost: a simple virtual keyboard that uses your muscles. I’ve read about similar things being in development for a while now but nothing seems to have made it past the high concept stage. MYO on the other hand has the real potential to bring this to fruition within the next year or two and whilst I probably won’t have the required augmented reality device to take advantage of it I’ll probably end up with one of these devices anyway, just for experimentation.
With this missing piece of the puzzle I feel like Glass has gone from being a technical curiosity to a device that I could see myself using routinely. The 1.0 MYO might be a little cumbersome to keep around but I’m sure further iterations of it will make it nigh on unnoticeable. This is just my narrow view of the technology as well and I’m sure there’s going to be hundreds of other applications where a MYO device will unlock some seriously awesome potential. I’m very excited about this and can’t wait to get my hands on one of them.
My group of friends is undeniably tech-oriented but that doesn’t mean all of us share the same views on how technology should be used, especially in social situations. If you were to see us out at a restaurant it’s pretty much guaranteed that at least one of us is on our phone, probably Googling an answer to something or sifting through our social networking platform of choice. For most of us this is par for the course, with all of us being members of Gen Y, however some of my friends absolutely abhor the intrusion that smartphones have made on normal social situations, and if the direction of technology is anything to go by that intrusion is only going to get worse, not better.
Late last year I came across the Memento Kickstarter project, a novel device that takes 1 picture every 30 seconds and even tags it with your GPS location. It’s designed to be worn all the time so that you end up with a visual log of your life, something that’s obviously of interest to a lot of people as they ended up getting funded 11 times over. Indeed just as a device it’s pretty intriguing and I had caught them early enough that I could have got one at a hefty discount. However something that I didn’t expect to happen changed my mind on it completely: my technically inclined friends’ reactions to this device.
Upon linking my friends to the Kickstarter page I wasn’t met with the usual reactions. Now we’re not rabid privacy advocates, indeed many of us engage in multiple social networks and lead relatively open online lives, but the Memento was met with a great deal of concern over its presence in everyone’s private lives. It wasn’t a universal reaction but it was enough to give me pause about the idea and in the end I didn’t back it because of it. With Google Glass gearing up to increase its presence in the world these same privacy questions are starting to crop up again, and the social implications of Google’s flagship augmented reality device are starting to become apparent.
Google Glass is a step up from the Memento as, whilst it has the same capability to take photos (without the express knowledge or consent of the people in them), its ability to run applications and communicate directly with the Internet poses even more privacy issues. Sure, the capability isn’t too much different from what’s available now with your garden variety smartphone, however it is ever-present, attached to the side of someone’s head and able to be commanded at the will of the user. That small step of taking your phone out of your pocket is enough of a social cue to let people know what your intentions are and let them make their concerns known well beforehand.
What I feel is really happening here is that the notion of societal norms is being challenged by technology. Realistically such devices are simply better versions of things we have natively as humans (i.e. imaging devices with attached storage) but their potential for disseminating their contents is much greater. Just as social norms developed around ubiquitous smartphones, so too must they develop around the use of augmented reality devices like Google Glass. What these norms will end up being however is something that we can’t really predict until such devices reach critical mass which, from what I can tell, is at least a couple of years off in the future, possibly even longer.
For my close-knit circle of tech friends however I can predict a few things. Most of them wouldn’t have any issues with me wearing and using it whilst we were doing things together, but I can see them wanting me to take it off if we were sitting down to dinner or at someone’s private residence. It could conceivably be seen as somewhat rude to wear it if you’re deep in conversation, although I feel that might change over time as people realise it’s not something that’s being used 100% of the time. Things will start to get murky as Glass-like devices become smaller and less obtrusive, although current generations of battery technology already put Glass at the slimmest end of the spectrum possible so I doubt they’ll be getting smaller any time soon.
Essentially I see these kinds of augmented reality devices as an organic progression of smartphones, extending our innate human abilities with those of the Internet. The groundwork has already been laid for a future that is ever-increasingly intertwined with technology and whilst this next transition poses its own set of challenges I have no doubt that we’ll rapidly adapt, just like we have done in the past. What these adaptations are and how they function in the real world will be an incredibly interesting thing to bear witness to and I, for one, can’t wait to see it.
I’ve been using my Nokia Lumia 900 for some time now and whilst it’s a solid handset Windows Phone 7 is starting to feel pretty old hat at this point, especially with its Windows Phone 8 successor out in the Lumia 920. However I had made the decision to go back to Android due to the application ecosystem there. Don’t get me wrong, for most people Windows Phone has pretty much everything you need, but for someone like me who revels in doing all sorts of esoteric things with his phone (like replicating iCloud levels of functionality, but better) Android is just the platform for me. With that in mind I had been searching for a handset that would suit me and I, like many others, found it in the Nexus 4.
Spec-wise it’s a pretty comparable phone to everything else out there, with the only glaring technical fault being the lack of a proper 4G modem. Still, its big screen, highly capable processor and, above all, stock Android experience with updates that come direct from Google make up for that in spades. The price too is pretty amazing, as I paid well over 50% more for my Galaxy S2 back in the day. So it was that many months ago I resigned myself to wait for the eventual release of the Nexus 4 so I could make the transition back to the Android platform and all the goodness that would come along with it.
Unfortunately for me the phone went on sale at some ludicrous time for us Australians, so I wasn’t awake for the initial run and missed my chance at getting in on the first batch. I wasn’t particularly worried though, as they had a mailing list I could join for when stock became available again and I figured that after the initial rush it wouldn’t be too hard to get my hands on one. However the new stock sold out so quickly that by the time I checked my email they were gone again, leaving me without the opportunity to purchase one yet again. Thinking that there was no way Google would be out of stock for long (they never were for previous Nexus phones) I resigned myself to waiting until it became available again, or at least until a pre-order system came up.
Despite stories I hear of handsets being available at various times and tales of people being able to order one, I have not once seen a screen that differs from the one shown above. Nearly every day for the past 2 months I’ve been checking the Nexus site in the hope that they’d become available, but not once have I had the chance to purchase one. Now Google and LG have been pointing fingers at each other as to who is to blame for this, but in the end that doesn’t matter because both of them are losing more and more customers the longer these supply issues continue. It doesn’t help when they announce that AT&T will start stocking them this month, which has to mean a good portion of inventory was diverted from web sales to go to them instead. That doesn’t build any goodwill for Google in my mind, especially when I’ve been wanting to give them my money for well over 2 months now.
And with that in mind I think I’m done waiting for it.
For the price the Nexus 4 looked like a great device but time hasn’t made the specifications look any better, especially considering the bevy of super powerful smartphones that debuted at CES not too long ago. I, along with many other potential Nexus 4 buyers, would have gladly snapped up one of their handsets long ago if it was available to us and the next generation wouldn’t have got much of a look in. However due to the major delays I’m now no longer considering the Nexus 4 viable when I might only be a month or two away from owning something like the ZTE Grand S which boasts better specifications all round and is probably the thinnest handset you’ll find. Sure I’ll lose the completely stock experience and direct updates from Google but after waiting for so long the damage has been done and I need to find myself a better suitor.
You don’t have to read far on this blog to know that the relationship I have with Apple swings from wild amounts of hate to begrudging acceptance that they do make some impressive products. Indeed I’ve been called everything from an Apple fan boy to an Apple hater based on the opinions I’ve put forth on here, so I think that means I’m doing the right thing when it comes to being a technology critic. Of course that means taking them, and their fans, to task whenever they start getting out of line, and it appears that the latest instalment of Apple fans going wild comes courtesy of the iOS 6 Maps application, which I’ve abstained from covering here previously.
For the uninitiated, Apple decided to give Google Maps the boot as the default mapping application on their handsets and tablets. The move was made primarily because their negotiated agreement with Google was scheduled to come to an end soon and Apple, for whatever reasons that I won’t bother speculating about, decided that instead of renewing it they’d go ahead and build their own maps application, including the massive back end cartography database. Now they’re no stranger to building a maps application, indeed whilst it used to say “Google Maps” it was in fact an Apple developed application that used the Google APIs, but the new application was an unmitigated disaster. In fact it was so bad that Apple even got Tim Cook to do one of those “we’re admitting there’s a problem without admitting it” open letters pointing to alternatives that were available.
I held off on commenting on the whole issue because, since I don’t use an iPhone any more, I didn’t want to start trashing the app without knowing what the reality was. Plus I’m not one to bandwagon (unless I’m really struggling for good material) and it felt like everything that needed to be said had been said. I almost caved when I started reading apologist garbage like this from MG Siegler, but others had done that work for me so re-iterating those points wouldn’t provide much value. However one bit of unabashed fanboyism caught my eye recently and its author really needs to be taken to task over what they’re saying:
Situation: Apple cannot get Google to update its maps app on iOS. It was ok, but Google refused to update it to include turn-by-turn directions or voice guidance even though Android had these features forever. Apple says, “Enough” and boots Gmaps from iOS and replaces it with an admittedly half-baked replacement. The world groans. Apple has egg on its face. Google steps up it’s game and rolls out a new, free new maps app in iOS today that is totally amazing, I’m sure to stick it in Apple’s face… Ooops
Bottom line: Apple took one for the team (ate some shit) and fooled Google into doing exactly what Apple has been asking for years. Users win.
Time to get some facts on the table here. For starters, way back in the day when Apple first wanted to bring maps to their platform they approached Google to do it, however the terms that Google wanted (better access to user data was their primary concern) meant that an in-house developed app was never to be. They could agree on good terms for the API however, and so Apple developed their own application on the public Google API. This meant, of course, that they were limited to the functionality provided by said API, which doesn’t include fun things like turn by turn navigation (voice commands, however, are on Apple’s head to implement).
Instead of capitulating, Apple decided to build their own replacement product, which isn’t completely surprising given that they’ve done this kind of thing before with services like iTunes and the App Store. Claiming that it was done to fool Google into developing a better app, however, is total bollocks: if that were the goal they wouldn’t have spent so much money on in-sourcing so much of the infrastructure. Indeed the argument can be made that they could’ve bought or licensed one of the top map apps for a fraction of the cost to accomplish the same task. So no, Apple didn’t do it to get Google to develop an application for them; they did it because they wanted to bring more applications into their ecosystem.
Google’s revamped map app proved to be extremely popular rocketing to the number 1 spot for free applications after just 7 hours of being available. I (in a slightly rhetorical/trolling way) put the feelers out on Twitter to see what Apple fans would have to say about that particular feat and was surprised when I got a reply within minutes. Whilst their arguments didn’t hold up to mild scrutiny (and I didn’t change their opinion on the matter) I was honestly surprised just how defensive some people can be of a product that even the company who developed it has admitted was bad. Especially when the replacement has been, by all accounts, pretty spectacular.
Apple’s trademark secrecy about its plans and intentions is what feeds these kinds of wild theories about their overall strategy, and their highly dedicated fan base too often falls prey to them without doing some routine fact checking. I don’t blame them in particular, however; it’s hard to see fault with a company you admire so much, but this kind of wide-eyed speculation doesn’t do them any good. Indeed give it a couple of weeks and no one will care that there’s yet another map application on iOS, and this whole thing will get filed alongside antennagate (remember that?).
As someone who’s been deep in high technology for the better part of 2 decades it’s been interesting to watch the dissemination of technology from the hands of my brethren down to the level of the everyday consumer. For the most part it’s a slow process, as many of the technological revolutions that are unleashed onto the mass market have usually been available for quite some time to those with the inclination to live on the cutting edge. Companies like Apple are prime examples of this, releasing products that are often technically inferior but offer that technology in such a way as to be accessible to anyone. Undoubtedly the best example of this is their iPhone, which arguably spawned the smartphone revolution that is still thundering along.
When it was first released the iPhone wasn’t really anything special. It didn’t support third party applications, couldn’t send or receive MMS and even lacked some of the most critical functionality of a smartphone, like cut and paste. For those brandishing their Windows Mobile 6.5 devices the idea of switching to it was laughable, but they weren’t the target consumer. No, Apple had their eye on the same market that Nintendo did when they released the Wii console: the people who traditionally didn’t buy their product. This transformed the product into a mass market success and was the first step for Apple in developing their iOS ecosystem.
With the beachhead firmly established, the way was paved for other players like Google to branch out into the smartphone world. Whilst they played catch up to Apple for a good 3 years or so, Google was finally crowned the king early last year and hasn’t shown any signs of slowing down since then. Of course in that same time Apple created an entirely new market in the form of tablet computers, a market into which Android has yet to make any significant inroads. However, whilst Google might be making a token appearance in that market currently, I don’t think they’re that interested in trying to follow Apple’s lead on this one.
Their sights are set firmly on the idea of creating another market all of their own.
For products that really bring something new to the table you really can’t beat Project Glass. Back when I first posted about Google’s augmented reality device it seemed like a cool piece of technology that the technical elite would love, but if I’m honest I didn’t really know how the wider world would react to it. As more and more people got to use Glass the reaction has been overwhelmingly positive, to the point where comparisons to the early revisions of the iPhone seem apt, even though Glass is technically cutting edge all on its own. The question then is whether Google can ride Glass to iPhone-level success in creating another market in the world of augmented reality devices.
There are few companies in the world that can create a new, highly profitable market, but Google is one of the few with a track record of doing so. Whilst the initial reviews of Glass are positive, it’s still far from being a mass market device, with the scarce few being made available going only to the technical elite, and then only those who went to Google I/O and ponied up the requisite $1500 for a prototype device. No doubt this will help in creating a positive image of the device prior to its retail release, but getting tech heads to buy cutting edge tech is like shooting fish in a barrel. The real test will be when Joe Public gets his hands on the device and we see how it integrates into our everyday activities.
There are some 250+ top level domains (TLDs) available for use on the Internet today and most of them can be had through your friendly local domain registrar. The list has grown steadily over the past couple of decades as more and more countries look to cement their presence on the Internet with their very own TLD. The body responsible for all this is the Internet Corporation for Assigned Names and Numbers (ICANN), which looks after all the domain names as well as handing out IP blocks to the ISPs and corporations that request them. Whilst it seemed that the TLD space was forever going to be the preserve of countries and specific industries, ICANN recently decided that it would allow anyone who could pony up the requisite $200,000 to have their own TLD, effectively opening the market up to custom domain suffixes.
For an individual such a price seems ludicrous, so it’s unlikely you’ll see .johndoe type domain names popping up all over the place. For most companies, though, securing this new form of brand identity is worth far more than the asking price, and so many have signed up to do so. ICANN has since released a list of all the requested gTLDs and having a look through it has led me, and everyone else it seems, to make some interesting conclusions about the big players in this custom TLD space (I made an Excel spreadsheet of it for easy sleuthing).
The biggest player, although it’s not terribly obvious unless you sort by applicant name, is the newly founded donuts.co registry, which has snagged some 300+ new gTLDs in order to start up its business. Donuts has $100 million in seed capital to play with, of which about 60% will be tied up solely in these domain suffix acquisitions. They all seem like your run-of-the-mill SEO-y type words: a large grab bag of terms that the general public is likely to be interested in but which are of no value to any specific company. Every domain also has its own associated LLC, which isn’t a requirement of the application process, so I’m wondering why they’ve done it. Likely it’s to isolate losses in the less successful domains, but it seems like an awful lot of work when that could be accomplished in other ways.
They’re not the only ones doing that either. A quick search turns up other companies who’ve bought multiple domains, although none of them come close to the number that Donuts has. There also seem to be a few companies handling the gTLD applications for other big name companies, ostensibly because those companies have no interest in actually running the gTLD and are just doing it for their brand identity. The biggest player in this space seems to be CSC Global who, strangely enough, did all their applications from another domain under their control, CSCInfo. It’s probably nothing significant, but for a company that apparently specializes in brand identity you’d wonder why they’d apply with a domain other than their own.
What’s really got everyone going, though, are the domains that Amazon and Google have gone after. Whilst their war chests of gTLDs are nothing compared to Donuts’, they’re still quite sizable, with Amazon grabbing about 80 and Google just over 100. Some are taking this as indicative of their future plans, as Amazon has put in for gTLDs like .mobile, but realistically I can see most of them being augmentations to their current services (got an app on AWS? Get your .mobile domain today!). There’s also a fair bit of overlap in the popular domains that both these companies have gone after, and I’m not sure what the resolution process for that is going to be.
While the 2,000-odd applications seem to show that there’s some interest in these top level domains, the real question of their value, at least for us web oriented folks, is whether the search engines will like them as much as the established TLDs. There’s been a lot of heavy investment in current sites that reside on the regular TLDs, and apart from marketing campaigns and new websites looking for a good name (http://this.movie.sucks seems like it’ll be created in no time) I question how much value these TLDs will bring. Sure, there will be the initial gold rush of people looking to secure all the domains they can on these new TLDs, but after that will there really be anything in them? Will businesses actually migrate to these gTLDs as their primary domain, or will they simply redirect them to their current sites? I don’t have answers to these questions, but I’m very interested to see how these gTLDs get used.