The search for life beyond our planet is a complicated one. As it stands we only know of life arising in one particular way, and we can’t be sure that way isn’t unique in the universe. Still it’s the best model we have to go by and so when we search for life we look for the same signs that accompany it here on Earth. The one constant that binds all life on Earth is water and that is why we search so fervently for it anywhere in the solar system. Surprisingly there are many places to find it but none are more spectacular than Saturn’s moon Enceladus.
Enceladus is a strange world, truly unlike anything else in our solar system. Its surface is incredibly young, mostly devoid of the numerous pockmarks that are common among other atmosphereless celestial bodies. This is because it’s in a constant state of change, its icy surface splitting and cracking open to reveal new, unsullied terrain. Enceladus is like this because Saturn’s immense gravity tidally flexes the tiny moon as it orbits, generating incredible amounts of heat in the process. The same process is responsible for the amazing cryovolcanoes that dot its south pole, spewing forth tons of water per day into the depths of space. Whilst it’s easy to confirm that there’s liquid water somewhere on Enceladus (those cryovolcanoes aren’t magical water spouts) the question of where the reservoir is, if there even is one, has been the subject of much scientific study.
It has long been thought that Enceladus was host to a vast underground ocean although its specifics have always been up for debate. Unlike Europa, which is thought to have a layer of liquid water underneath the ice (or a layer of “warmer” ice), the nature of Enceladus’ ocean was less clear. However data gathered by the Cassini spacecraft during its flybys of the moon between 2010 and 2012 shows that it’s very likely there’s a subsurface ocean below the area where the plumes originate. How they determined this is quite incredible and showcases the amazing precision of the instruments we have up in space.
The measurements were made using the radio communications between Cassini and Earth. These stay at a relatively fixed frequency, so any change in the craft’s speed manifests as a slight Doppler shift in that frequency. This is the same principle behind how the sound of an approaching ambulance changes as it draws near and then recedes, and it allows us to detect even the smallest changes in Cassini’s speed. As it turns out, when Cassini flew over Enceladus’ south pole, which has a great big depression in it (meaning there’s less gravity at that point), the change in speed was far less than expected. What that means is there’s something denser below the depression making up for the missing matter and, since liquid water is denser than ice, a giant hidden sea is a very plausible explanation.
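The scale of that technique is worth appreciating with some rough numbers. Here’s a back-of-the-envelope sketch: the carrier frequency and velocity change below are illustrative values I’ve picked for the example, not Cassini’s actual figures, but the non-relativistic Doppler formula shows how even a millimetre-per-second change in speed becomes a measurable frequency shift:

```python
# Sketch of the radio-tracking idea: a spacecraft's line-of-sight velocity
# change shows up as a tiny Doppler shift in its radio carrier frequency.
C = 299_792_458.0  # speed of light, m/s

def doppler_shift(carrier_hz: float, radial_velocity_ms: float) -> float:
    """One-way non-relativistic Doppler shift in Hz (valid for v << c)."""
    return carrier_hz * radial_velocity_ms / C

# A 1 mm/s velocity change against an X-band style 8.4 GHz carrier:
print(f"{doppler_shift(8.4e9, 0.001):.4f} Hz")  # ~0.028 Hz
```

A shift of a few hundredths of a hertz sounds tiny, but it stands out against a stable gigahertz carrier, which is why radio tracking can be used to map out a moon’s gravity field.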
There may be other explanations of course, like a giant deposit of heavy elements or just plain rock, however the fact that there’s water gushing up from that location lends more credence to the theory that it’s an ocean. The question now turns to nailing down some of the other variables, like how big the ocean actually is and how the water gets to the surface, which I’m not entirely sure Cassini is capable of determining. Still, I wasn’t completely sure it was capable of this before today, so I’m sure the scientists at NASA have some very interesting ideas about what comes next for Enceladus.
It’s late 2001 and I’ve finally managed to find a group of like-minded people who enjoy computers, games and all the things I felt ashamed of liking for the better part of my teenage life. We’re gathered at a friend’s house for a LAN, as this was long before broadband was a common thing in Australian households. As much as these gatherings were a hive for sharing ill-gotten files they were also the beginnings of my career in IT, as we’d often be experimenting with the latest software just for laughs. It’s at this very gathering that I had my first encounter with the latest operating system from Microsoft, Windows XP, and little did I know that I’d still be encountering it for the next 13 years.
Today marks a day that we have known was coming for a long time but many have refused to accept: the day when Windows XP is no longer supported by Microsoft. You can still get support for Microsoft Security Essentials on Windows XP until July 14, 2015 but Microsoft will no longer be providing any updates, free or paid, to the aging operating system. For administrators like me it’s the ammunition we’ve been using for the better part of 2 years to get people to move away from the old operating system, as nothing scares corporate customers more than the possibility of no support. Even so, XP still claims a staggering 27% of the total Windows market share, meaning more than 1 in every 4 Windows users is now on a system without any kind of official support. Many have criticised Microsoft for doing this but in all honesty it had to happen sometime or they’d never see the end of it.
The reason behind XP’s longevity, something that’s usually unheard of in the high technology industry, can be almost wholly attributed to the utterly dismal failure that Windows Vista was. Prior to that Microsoft customers were more than happy to go through the routine upgrade process every 3 to 5 years, however the fact that Vista didn’t deliver on what it promised, coupled with its astoundingly bad reliability, meant that the vast majority of organisations got comfortable with Windows XP as their operating system. The time between XP and Windows 7 was long enough that the pain of moving forward became too great and many opted to wait until there was just no option left for them. My most recent project was a great example of this: migrating a large government department from XP to Windows 7, one that only narrowly missed the deadline that passed today.
This is the prime reason behind Microsoft’s recent change from a longer product cycle to one that’s based around rapid innovation. Whilst it’s true that Windows 8 is shaping up to be the Vista of this current product cycle, with Windows 7 adoption rates still outpacing it, the vast majority of the hard work will be done if users finally move to Windows 7. The upgrade paths from there are a whole lot more forgiving than coming from XP and moving from 8 to 8.1 takes about as much effort as installing a patch. I’m quietly hopeful that Windows 7 won’t become the next XP but at the same time I know how readily history can repeat itself.
So it’s without a heavy heart I say goodbye to Windows XP. It will not be missed by anyone in the industry as it was supposed to be dead and buried a long time ago and it was only through the stubbornness of the majority that it managed to stick around for as long as it did. I’m hoping for a much brighter future, one where Microsoft’s quickened pace of development is embraced and legacy systems are allowed to die the swift death that they so rightly deserve.
It’s no secret that I’m not the biggest fan of the current generation of smartwatches as I feel that, in terms of functionality, they simply don’t provide enough for me to justify purchasing one. Sure they’re pretty neat bits of technology but geek lust can only drive me so far; should I buy one and only end up using it as a watch then I’ll likely feel disappointed. When I thought about this more I figured it was a little strange as I’m already a watch wearer, so you’d think that if it came down to that then, realistically, I’d be getting my money’s worth anyway. After seeing the Moto 360 though I think I know why all of the other smartwatches are so lacklustre.
For the uninitiated, last week saw the debut of Android Wear, a new version of the Android operating system that’s focused specifically on wearable technology. Right now it’s aimed at developers, with a preview that lets them see how their applications’ notifications will look on future devices. Interestingly enough it supports both the typical smartwatch screen (square/rectangle) and a more traditional round face. Considering every smartwatch I’d heard of up until this point had a square face I was wondering who would create such a beast, and it’s Motorola, something I probably should’ve seen coming.
Smartwatches have always gone for the rectangular style screen for 2 reasons. The first is that’s what screen manufacturers make, and sourcing something non-standard ends up costing quite a considerable amount. In order to make smartwatches affordable enough for people to want to buy them this precludes doing anything particularly fancy, so square faces it was. Secondly, doing content layouts is hard enough for square screens and it doesn’t translate well to a round format. Motorola’s 360, in combination with Android Wear, makes this non-standard fantasy a reality but the question then becomes: why?
As it turns out Motorola has realised that smartwatches, whilst a popular niche in their own right, are focusing on one demographic: technophiles. The 360 on the other hand isn’t targeted at them specifically, instead it’s aimed more at “people who wear watches”, hence the round design (round faces apparently account for 80% of all watch sales worldwide, who knew). Indeed the 360 looks like it’d be right at home among the chunkier watch offerings that have become popular of late, with the added benefit of having additional functionality built into it. The Pebble Steel made some inroads in this regard although it’s hard to deny that the 360 is a much more striking beast.
So I guess what was needed for me to overcome my initial skepticism about smartwatches wasn’t so much the functionality, although I’ll admit I still dream of getting an all-in-one, it was more the design. It will be interesting to see if the round watch face gamble pays off for Motorola since they’ll be the first to market with their device and it’ll likely be the standard by which other Android Wear products are judged. I’ll hold off on saying Motorola has my money for this one until I see one or two in the wild but it’s been quite interesting to see my opinion changed by good design.
Maybe I am an Apple fan boy after all. *shudder*
Google isn’t a company that’s known for curtailing its ambitions, growing from its humble beginnings as the best search engine on the web into the massive conglomerate it is today, encompassing everything from smartphones to robotic cars. In the past many of its ideas were the result of acquisitions, where Google made strategic purchases in order to acquire the talent required to dominate a space. More recently however they’ve started developing their own moonshot style ideas through their Google X lab, a research and development section that has many of the hallmarks of previous idea incubators. Their most recent acquisition trend however seems to be a mix of both, with Google picking up a lot of talent to fuel a potential project that they’re being incredibly tight lipped about.
Now I’ll be honest, I really had no idea that Google was looking to enter the robotics industry until just recently when it was announced that they had acquired Boston Dynamics. For the uninitiated, Boston Dynamics is a robotics company that’s been behind some of the most impressive technology demonstrations in the industry, notably the BigDog robot which displayed stability that few robots have been able to match. Most recently they started shipping their Atlas platform to select universities for the DARPA Robotics Challenge, a program that hopes to push the envelope of what robots are capable of achieving.
Boston Dynamics is the 8th acquisition that Google has made in the robotics space in the past 6 months, signalling that they’ve got some kind of project on the boil which needs an incredible amount of robotics expertise. The acquisitions seem to line up in a few categories with the primary focus being on humanoid robots. Companies in this area include Japanese firm Schaft, who has created a robot similar to Atlas, and several more industrially focused robotics companies like Industrial Perception, Meka and Redwood Robotics. They also snapped up Bot & Dolly, the robotics company behind the incredible Box video, whose technology provided some of the special effects for the recent movie Gravity. Two design firms, Autofuss and Holomni, were also picked up in Google’s most recent spending spree.
At the head of all of this is Andy Rubin, who came to Google as the lead of Android. It’s likely that he’s been working on this ever since he left the Android division back in March this year, although it was only recently announced that he would be heading up the robotics projects. As to what the project actually is Google isn’t saying, however they have said that they consider it a moonshot, right alongside their other ideas like Project Loon, Google Glass and the self-driving car. Whilst it seems clear that their intention with all these acquisitions is to create some kind of humanoid robot, what purpose it will serve remains to be seen, but that won’t stop me from speculating.
I think in the beginning they’ll use much of the expertise gained through these acquisitions to bolster the self-driving car initiative as, whilst they’ve made an incredible amount of progress of late, I’m sure the added expertise in computer vision that these companies have will prove invaluable. From there the direction they’ll take is less clear; whilst it’d be amazing for them to create the in-home robots of the future it’s unlikely we’ll see anything of that project for at least a couple of years. Heck, just incorporating all these disparate companies into the Google fold is going to take the better part of a couple of months and it’s unlikely they’ll produce anything of note for some time after.
Whatever Google ends up doing with these companies we can be assured it’s going to be something revolutionary, especially now that they’ve added the incredible talent of Boston Dynamics to their pool. Hopefully this will allow them to deliver their self-driving car technology sooner and then use that as a basis for delivering more robotics technology to end users. It will be a while before this starts to pay dividends for Google, however the benefits for both them and the world at large have the potential to be quite great and that should make us all incredibly excited.
Ask any computer science graduate about the first programmable computer and the answer you’ll likely receive would be the Difference Engine, a conceptual design by Charles Babbage. Whilst the design wasn’t entirely new (that honour goes to J. H. Müller, who wrote about the idea some 36 years earlier) he was the first to obtain funding to create such a device, although he never managed to get it to work despite blowing the equivalent of $350,000 in government money trying to build it. Still, modern day attempts at creating the engine with the tolerances of the time period have shown that such a device would have worked had he completed it.
But Babbage’s device wasn’t created in a vacuum, it built on the wealth of mechanical engineering knowledge from the decades that preceded him. Whilst there was nothing quite as elaborate as his Analytical Engine there were some marvellous pieces of automata, ones that are almost worthy of the title of programmable computer:
The fact that this was built over 240 years ago says a lot about the ingenuity that’s contained within it. Indeed the fact that you’re able to code your own message into The Writer, using the set of blocks at the back, is what elevates it above other machines of the time. Sure there were many other automata that were programmable in some fashion, usually by changing a drum, but this one allows configuration on a scale that they simply could not achieve. Probably the most impressive thing about it is that it still works today, something which many of today’s machines will not be able to claim in 240 years’ time.
Whilst a machine of this nature might not be able to lay claim to the title of first programmable computer you can definitely see the similarities between it and its more complex cousins that came decades later. If anything it’s a testament to the additive nature of technological development, each advance building upon the foundations of those that came before it.
Like most people who’ve made their career in IT I’ve spent a great deal of my spare time dabbling in things that (I hope) could potentially lead on to bigger things somewhere down the line. Nearly all of them start off with a burst of excitement as I dive in, revelling in the challenge and marvelling at the things I could create if I just invested the time. After a while however that passion starts to fade into the background, slowly being replaced by the looming reality of the challenge I’ve set myself. In all but one case this has eventually led to burnout, seeing the project shelved so that I can recoup and hopefully return to it. The only project to ever survive such a period was this blog, but even it came close to being shut down.
Shown above are the stats for this blog over the past couple of years and each of the big changes tells a story. As you can see, for a long while there was a steady increase in traffic, something which constantly drove me forward and kept me writing even when I wondered why I was bothering. Then a slow decline started and I honestly couldn’t tell you why. Eventually I stumbled onto the fact that 20% of my visitors were disappearing between the search engine and my site, indicating that my blog was loading far too slowly for most people to bother waiting for it. Migrating the server to a new host saw an amazing spike in traffic, one that continued its upwards trend for a very long time.
Of course I eventually got curious as to why this was and found that the majority of users weren’t visiting my site per se, they were just incidental visitors thanks to Google’s image search. I had figured that this wouldn’t last, dreading the day when the hit came, and when it did the drop in traffic was significant and brutal. Indeed I had come so close to one of my personal goals (20K visits in a month) that losing it all was a big hit to my confidence as a blogger. Still, the upwards trend continued and motivation remained steady, that was until the start of this year when, inexplicably, I took yet another hit.
Try as I might to diagnose the issue the downward trend continued and, unfortunately, my motivation began to follow it. It all came to a head when my site got compromised and I inadvertently deleted my entire web folder, leaving me to wonder if it was worth even bothering to resurrect it. Of course I eventually came to my senses but I’d be lying if I said that my motivation for this wasn’t in some way linked to the number of page views I get at the end of each day.
I had mulled over writing this post for a long time, not to start a pity party or anything like that, more as a catharsis for my current situation. Honestly I had felt that there was something wrong with me, as I should have been doing this for the love of it, not for the ego-stroke reward that a page view is. However reading over Scott Adams’ (creator of Dilbert) treatise on how to be successful struck a chord with me, showing me that I’m not alone in being motivated by passions that ultimately get dashed by a lack of success. This blog, then, was my example of how getting results keeps you motivated, and it should come as no surprise that the motivation went away when the apparent success did as well.
For now I’m simply taking it day by day, continuing what I’ve always been doing and enjoying the act of writing more than the pageviews. It’s been helped somewhat by the fact that I’ve been able to make some changes that have directly resulted in little bumps in traffic, nothing crazy mind, but enough to show that I’m on the right track. It’s going to be a long time before I reach the dizzying heights that I was at just under a year ago but hopefully those numbers will be genuine, a real reflection of the effort I’ve put into this place since I began it almost 5 years ago.
I wrote a post just last month that laid out the reasons why the banks would probably not be dropping rates independently of the RBA, even though the current funding climate could allow them to do so. Indeed current interest rates are comparable to those in the depths of the Global Financial Crisis, yet our economy, like most worldwide, is no longer struggling. These are things you don’t usually see going hand in hand because when times are good people like to borrow and spend, which usually leads to a healthy credit market. It seems that punters are still wary of another GFC-esque situation: whilst the economy has vastly improved the desire for credit hasn’t, which is quite odd but nothing to be concerned about in the grand scheme of things (unless you’re a lender, of course).
It was for those reasons that many did not expect a rate cut from the Reserve Bank yesterday, as all the pressures that prompted past cuts (decline in demand for Australian products, the Eurozone crisis, etc.) have run their course. It came as something of a shock then that they decided to cut another 25 basis points off the cash rate, bringing it to a record low 2.75%, dipping below even the lowest rate available during the height of the GFC. The rate decision release makes for some interesting reading as the reasons behind the decision aren’t the ones I was expecting.
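For anyone unfamiliar with the jargon, a basis point is one hundredth of a percentage point, so a 25 basis point cut takes the cash rate from 3.00% down to 2.75%. The quick sketch below shows that arithmetic plus the effect on a hypothetical mortgage; the loan figures are ones I’ve made up for illustration, assuming a lender passes the cut on in full:

```python
def apply_cut(rate_pct: float, basis_points: int) -> float:
    """Return the rate after a cut expressed in basis points (1 bp = 0.01%)."""
    return rate_pct - basis_points / 100.0

def monthly_repayment(principal: float, annual_rate_pct: float, years: int) -> float:
    """Standard amortising loan repayment formula."""
    r = annual_rate_pct / 100.0 / 12.0  # monthly interest rate
    n = years * 12                      # total number of payments
    return principal * r / (1.0 - (1.0 + r) ** -n)

print(apply_cut(3.00, 25))  # 2.75

# Monthly saving on a hypothetical $300,000, 25 year mortgage if the full
# 25 bp cut flows through to the variable rate (6.00% down to 5.75%):
saving = monthly_repayment(300_000, 6.00, 25) - monthly_repayment(300_000, 5.75, 25)
print(f"${saving:.2f} per month")
```

The per-household saving is modest, which is part of why a single cut takes so long to show up in consumer spending figures.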
The RBA acknowledges that the funding climate has improved dramatically, with many of our larger trading partners undergoing periods of expansion. The Eurozone is still in recession although its effect on us is muted, largely thanks to the limited amount of trade we do with them. They also expect investment in the resources sector to reach its peak this year, so part of this rate cut could be a proactive move to encourage people to start investing in other areas before the resources boom starts to tail off. Inflation has remained within their target range, sitting at 2.5% for the past year. The major factor in cutting rates, however, seems to be a desire to encourage people to spend more and to move their savings into more productive asset classes.
It’s true that rate cuts take a while to work their way through the economy and the last year or so of cuts is still having an effect. Primarily this is because relieving mortgage pressure doesn’t yield benefits quickly, but sustained periods of low rates will eventually lead to more consumer spending (as the RBA notes). This rate cut then appears to be more of a shock tactic than a long play, hoping to encourage people to either spend more or take out mortgages at rates that will likely not be repeated for quite some time, boosting the credit industry. Additionally rate cuts always put downward pressure on the Australian dollar, which will help boost exports.
The ideas are sound, as historically moving the cash rate downward does all the things they’re expecting this cut to do. However I’m a little sceptical as to whether it will have the desired effect this time around due to the circumstances we find ourselves in. The numerous cuts over the past 18 months, which were largely in reaction to the deteriorating conditions in the Eurozone, haven’t had the large impacts that they did during the GFC. Primarily this is because of how well insulated we are from said crisis, but it also appears that Australians have lost their appetite for credit. Whilst it’s easy to lay the blame on the GFC for this, I can’t help but feel there’s something else at play here, something which moving the cash rate won’t do much to alleviate.
This whole situation is a result of the weird financial climate we find ourselves in currently. Whilst I might not think the RBA is on the right track with this decision I don’t have any good solutions to the issues at hand because, as far as I can tell, what we have is a crisis of consumer sentiment, not a problem with the funding environment. It’s quite possible that this last dip will be the hair trigger for a major ramp up but I’ll remain sceptical for now as the previous cuts failed to bring that same idea to fruition, even if they were done for different reasons.
One of the peeves I had with the official Twitter client on Windows Phone 7, something I didn’t mention in my review of the platform, was that among the other things it’s sub-par at (it really is the poor bastard child of its iOS/Android cousins) it couldn’t display images in-line. To actually see any image you have to tap the tweet and then the thumbnail, which usually loads the full-size image, something that isn’t required on smaller screens. The official apps on other platforms were quite capable of loading appropriately sized images in the feed, which was a far better experience, especially considering it worked for pretty much all of the image sharing services.
Everyone knows there’s no love lost between Instagram and me but that doesn’t mean I don’t follow people who use it. As far back as I can remember their integration in the mobile apps has left something to be desired, especially if you want to view the full sized image, which usually redirected you to their atrocious web view. Testing it for this post showed that they’ve vastly improved that experience, which is great, especially considering I’m still on Windows Phone 7 which was never able to preview Instagram anyway, but it seems that this improvement may have come as part of a bigger play from Instagram to claw back their users from Twitter.
Reports are coming in that Instagram has disabled its Twitter card integration, which stops Twitter from being able to display the images directly in the feed like it has been doing since day 1. Whilst I don’t seem to be experiencing the issue that everyone is reporting (as you can see from the devastatingly cute picture above) there are many people complaining about this, and Instagram has stated that disabling the integration is part of their larger strategy to provide a better experience through their own platform. Part of that was improving the mobile web experience, which I mentioned earlier.
It’s an interesting move because for those of us who’ve been following both Twitter and Instagram for a while the similarities are startling. Twitter has been around for some 6 years and it spent the vast majority of that being a company that was extraordinarily open with its platform, encouraging developers far and wide to come in and develop on their platform. Instagram, whilst not being as wide open as Twitter was, did similar things making their product integrate tightly with Twitter’s ecosystem whilst encouraging others to develop on it. Withdrawing from Twitter in favour of their own platform is akin to what Twitter did to potential client app developers, essentially signalling to everyone that it’s our way or the highway.
The cycle is eerily similar, both companies started out as small time players that had a pretty dedicated fan base (although Instagram grew like a weed in comparison to Twitter’s slow ride to the hockey stick) and then after getting big they start withdrawing all the things that made them great. Arguably much of Instagram’s growth came from its easy integration with Twitter where many of the early adopters already had large followings and without that I don’t believe they would’ve experienced the massive growth they did. Disabling this functionality seems like they’re shooting themselves in the foot with the intention of attempting some form of monetization eventually (that’s the only reason I can think of for trying to drive users back to the native platform) but I said the same thing about Twitter when they pulled that developer stunt, and they seem to be doing fine.
It probably shouldn’t be surprising that this is what happens when start ups hit the big time, because at that point they have to start thinking seriously about where they’re going. For giant sites like Instagram that are yet to turn a profit from the service they provide it’s inevitable that they’d have to start fundamentally changing the way they do business, and this is most likely just the first step in wider sweeping changes. I’m still wondering how Facebook is going to turn a profit from this investment as they’re $1 billion in the hole and there are no signs of them making that back any time soon.
It was only 2 weeks ago today that the world was captivated by our latest endeavour in space exploration: the landing of the Curiosity rover on Mars. No doubt it was a great achievement and the science data that the rover sends back will undoubtedly further our understanding of our red celestial sister in ways that we can’t possibly fathom yet. Still, Curiosity’s achievement was only possible due to the great amount of work that came before it in the form of dozens of other space probes, numerous landers and of course other roving spacecraft. There is one craft in particular that has had so much to do with space exploration (and that just crossed a major milestone) that I feel it bears mentioning.
That craft is Voyager 1.
On August 20, 1977 NASA launched the first of two craft in the Voyager program. At the time the alignment of the planets in our solar system was quite favourable, allowing a probe to visit all of the outer gas giants (Jupiter, Saturn, Uranus and Neptune) without having to use much propellant or spend a lot of time travelling between them, thanks to the gravity assists it could get from each of the giants. Indeed the recently launched New Horizons craft, which will be visiting Pluto sometime in 2015, is travelling at roughly 15 km/s, about 2 km/s slower than Voyager 1’s current speed, showing you just how much those gravity assists helped.
Voyager 1’s primary mission was to study the planets of the outer solar system and it made quite a few interesting discoveries. On its approach to Jupiter, Voyager 1 noticed that it actually had rings like Saturn’s, although they were much too faint to see with any earth-bound telescopes at the time. Voyager 1 also discovered that Io was volcanically active, something that the previous Pioneer probes and earth-based observatories had failed to see. Its encounter with Saturn provided some incredible insights into Titan, however this precluded it from visiting any of the other planets in the grand tour as it missed out on the gravitational boost and trajectory alignment that Saturn could have provided. Still, this set it up for its ultimate mission: to study interstellar space.
Whilst Voyager’s list of scientific achievements is long and extremely admirable there are actually 2 non-scientific things that keep it stuck in my mind. The first is something that Voyager 1 (and its sister craft) carries on board with it: the Voyager Golden Record. Contained on the record, which is made from materials designed to withstand the harsh environment of space, are recordings of classical music, pictures of Earth, as well as pictograms that depict how the record should be used by anyone who finds it. Since Voyager 1 will be the first interstellar craft it is quite possible that one day another form of intelligent life will come across it and the record will serve as an introduction to the human species. It’s an absolutely beautiful idea and symbolizes the human desire to reach further and further beyond our limits, something that I believe is a driving force behind all of our space exploration.
The second was a picture, and whilst I could go on about its significance I think there’s someone much better qualified than me to do so:
It’s sometimes hard to believe that we’ve managed to create something that’s lasted for 35 years in the harshest environment we know of. The fact is though that we did: we designed it, built it and launched it into the great unknown, and because of that we’ve been able to reap the rewards of undertaking such a challenging endeavour. I find projects like these incredibly inspiring; they show that through determination, hard work and good old-fashioned science we can achieve things we never thought possible. I am truly grateful to be alive in such times and I know that the future will only bring more like this.
Happy birthday Voyager 1.
There are some 250+ top level domains available for use on the Internet today and most of them can be had through your friendly local domain registrar. The list has grown steadily over the past couple of decades as more and more countries look to cement their presence on the Internet with their very own TLD. The organization responsible for all this is the Internet Corporation for Assigned Names and Numbers (ICANN), which oversees the domain name system as well as handing out blocks of IP addresses to the regional registries that ISPs and corporations get theirs from. Whilst it seemed that the TLD space was forever going to be the preserve of countries and specific industries, ICANN recently decided that anyone who could pony up the requisite $185,000 application fee could have their own TLD, effectively opening the market up to custom domain suffixes.
For an individual such a price seems ludicrous, so it’s unlikely you’ll see .johndoe type domain names popping up all over the place. For most companies, though, securing this new form of brand identity is worth far more than the asking price, and many have signed up to do so. ICANN has since released a list of all the requested gTLDs, and having a look through it has led me, and everyone else it seems, to some interesting conclusions about the big players in this custom TLD space (I made an Excel spreadsheet of it for easy sleuthing).
The biggest player, although it’s not terribly obvious unless you sort by applicant name, is the newly founded donuts.co registry, which has applied for some 300+ new gTLDs in order to start up its business. Donuts has $100 million in seed capital to play with, of which about 60% will be tied up solely in these domain suffix acquisitions. The names all seem like your run of the mill SEO-y type words: a large grab bag of terms that the general public is likely to be interested in but which are of no value to any specific company. Every domain also has its own associated LLC, which isn’t a requirement of the application process, so I’m wondering why they’ve done it. Most likely it’s to isolate losses from the less successful domains, but it seems like an awful lot of work for something that could be done in other ways.
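That "sort by applicant" trick is easy enough to automate. A minimal sketch of the tally, using a handful of made-up (applicant, string) rows in the shape of the published application list (the rows below are illustrative, not actual applications):

```python
from collections import Counter

# Illustrative (applicant, applied-for string) pairs; the real
# published list has close to 2,000 rows.
applications = [
    ("Donuts (via an LLC)", "guru"),
    ("Donuts (via an LLC)", "email"),
    ("Donuts (via an LLC)", "photography"),
    ("Amazon", "book"),
    ("Charleston Road Registry (Google)", "app"),
]

# Count applications per applicant; sorting by this count is what
# makes a bulk filer like Donuts jump out of the list.
per_applicant = Counter(applicant for applicant, _ in applications)
for applicant, count in per_applicant.most_common():
    print(f"{applicant}: {count}")
```

Run against the real spreadsheet, the same three lines of counting logic put Donuts well clear at the top.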
They’re not the only ones doing that either. A quick search turns up several other companies that have applied for multiple domains, although none of them come close to the number that Donuts has. There also seem to be a few companies handling the gTLD applications on behalf of other big-name companies, ostensibly because those companies have no interest in actually running a gTLD and are just doing it for their brand identity. The biggest player in this space seems to be CSC Global who, strangely enough, filed all their applications from another domain under their control, CSCInfo. It’s probably nothing significant, but for a company that apparently specializes in brand identity you’d wonder why they’d apply with a different domain than their own.
What’s really got everyone going though are the domains that Amazon and Google have gone after. Whilst their war chests of gTLDs are nothing compared to Donuts’, they’re still quite sizable, with Amazon grabbing about 80 and Google just over 100. Some are taking this as indicative of their future plans, as Amazon has put in for gTLDs like .mobile, but realistically I can see most of them being augments to their current services (got an app on AWS? Get your .mobile domain today!). There’s also a fair bit of overlap between the popular strings that both these companies have gone after, and I’m not sure what the resolution process for those contested applications is going to be.
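Spotting those overlaps is the same kind of exercise: group the applications by string and flag any string with more than one applicant. A sketch over the same hypothetical row format (again, the rows are illustrative, not the actual applications):

```python
from collections import defaultdict

# Hypothetical (applicant, applied-for string) rows.
applications = [
    ("Amazon", "app"),
    ("Google", "app"),
    ("Amazon", "book"),
    ("Google", "search"),
    ("Donuts", "app"),
]

# Group applicants by string; any string with more than one applicant
# is contested and falls into ICANN's string-contention process.
applicants_by_string = defaultdict(set)
for applicant, string in applications:
    applicants_by_string[string].add(applicant)

contested = {s: sorted(a) for s, a in applicants_by_string.items() if len(a) > 1}
print(contested)  # -> {'app': ['Amazon', 'Donuts', 'Google']}
```

The same grouping over the real list would give you every contention set, whatever mechanism ICANN eventually uses to resolve them.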
While the 2,000-odd applications seem to show that there’s some interest in these new top level domains, the real question of their value, at least for us web-oriented folks, is whether the search engines will like them as much as the established TLDs. There’s been a lot of heavy investment in current sites that reside on the regular TLDs, and apart from marketing campaigns and new websites looking for a good name (http://this.movie.sucks seems like it’ll be created in no time) I question how much value these TLDs will bring. Sure, there will be the initial gold rush of people looking to secure all the domains they can on these new TLDs, but after that will there really be anything in them? Will businesses actually migrate to these gTLDs as their primary domains, or will they simply redirect them to their current sites? I don’t have answers to these questions, but I’m very interested to see how these gTLDs get used.