The public cloud is a great solution to a wide selection of problems, but there are times when its use is simply not appropriate. This is typical of organisations that have specific requirements around how their data is handled, usually due to data sovereignty or regulatory compliance. And whilst the public cloud is a great way to bolster your infrastructure on the cheap (although that’s debatable once you start ramping up your VM sizes), it doesn’t take advantage of the investments in infrastructure you’ve already made. For large, established organisations this is not insignificant and is why many of them were reluctant to transition fully to public cloud based services. This is why I believe the future of the cloud will be paved with hybrid solutions, something I’ve been saying for years now.
Microsoft has finally shown that they’ve understood this with the release of the Windows Azure Pack for Server 2012 R2. Sure, there were the beginnings of it with SCVMM 2012 allowing you to add in your Azure account and move VMs up there, but that kind of thing has been available for ages through hosting partners. The Azure Pack, on the other hand, brings features that were hidden behind the public cloud wall down to the private level, allowing you to make full use of them without having to rely on Azure. If I’m honest I thought that Microsoft would probably be the only ones to try this, given their presence in both the cloud and enterprise spaces, but it seems other companies have begun to notice the hybrid trend.
Google has been working with the engineers at Red Hat to produce the Test Compatibility Kit for Google App Engine. Essentially this kit provides the framework for verifying the API-level functionality of a private Google App Engine implementation, something which is achievable through an application called CapeDwarf. The vast majority of the App Engine functionality is contained within that application, enough that current developers on the platform could conceivably run their code on on-premises infrastructure if they so wished. There doesn’t appear to be a bridge between the two currently, like there is with Azure, as CapeDwarf utilizes its own administrative console.
They’ve done the right thing by partnering with Red Hat, as otherwise they’d lack the penetration in the enterprise market to make this a worthwhile endeavour. I don’t know how much presence JBoss/OpenShift has, though, so it might be less about using current infrastructure and more about getting Google’s platform into more places than it currently is. I can’t seem to find any solid¹ market share figures to see how Google currently rates compared to the other primary providers, but I’d hazard a guess they’re similar to Azure, i.e. far behind Rackspace and Amazon. The argument could be made that such software would hurt their public cloud product, but I feel these kinds of solutions are the foot in the door needed to get organisations thinking about using these services.
Whilst my preferred cloud is still Azure I’m a firm believer that the more options we have to realise the hybrid dream the better. We’re still a long way from having truly portable applications that can move freely between private and public platforms, but the roots are starting to take hold. Given the rapid pace of IT innovation I’m confident that the next couple of years will see the hybrid dream fully realised, and then I’ll finally be able to stop pining for it.
¹This article suggests that Microsoft has 20% of the market which, since Microsoft has raked in $1 billion, would peg the total market at some $5 billion, way out of line with what Gartner says. If you know of some solid cloud platform figures I’d like to see them, as apart from AWS being number 1 I can’t find much else.
With virtualization now as pervasive an idea in the datacentre as storage area networks or under-floor cooling, the way has been paved for the cloud to make its way there as well for quite some time now. There are now many commercial off-the-shelf solutions that allow you to incrementally implement the multiple levels of the cloud (IaaS -> PaaS -> SaaS) without the need for a large operational expenditure in developing the software stack at each level. The differentiation now comes from things like added services, geographical location and pricing, although even that is already turning into a race to the bottom.
The big iron vendors (Dell, HP, IBM) have noticed this, and whilst they could still sustain their current business quite well by providing the required tin to the cloud providers (the compute power is shifted, not necessarily reduced), they’re all starting to look at creating their own cloud solutions so that they can continue to grow their business. I covered HP’s cloud solution last week after the HP Cloud Tech Day, but recently there’s been a lot of news coming out regarding the other big players, both from the old big iron world and the more recently established cloud providers.
First cab off the rank was Dell, who are apparently gearing up to make a cloud play. Now if I’m honest that article, whilst it does contain a whole lot of factual information, felt a little speculative to me, mostly because Dell hasn’t tried to sell me on the cloud idea when I’ve been talking to them recently. Still, after doing a small bit of research I found that not only are Dell planning to build a global network of datacentres (where global usually means everywhere but Australia), they announced plans to build one in Australia just over a year ago. Combining this with their recent acquisition spree, which included companies like Wyse, it seems highly likely that this will be the backbone of their cloud offering. What that offering will be is still up for speculation, but it wouldn’t surprise me if it was yet another OpenStack solution.
Mostly because Rackspace, probably the second biggest general cloud provider behind Amazon Web Services, just announced that their cloud will be compatible with the OpenStack API. This comes hot on the heels of another announcement that both IBM and Red Hat would become contributors to the OpenStack initiative, although there’s no word yet on whether they have a view to implement the technology in the future. Considering that both HP and Dell are already showing their hands with their upcoming cloud strategies, it would seem that becoming OpenStack contributors will be the first step towards some form of IBM cloud. They’d be silly not to, given their share of the current server market.
Taking all of this into consideration it seems that we’re approaching a point of convergence in the cloud computing industry. I wrote early last year that one of the biggest drawbacks of the cloud was its proprietary nature, and it seems the big iron providers noticed that this was a concern. Reducing vendor lock-in lowers the barriers to entry for many customers significantly and provides a whole host of other benefits, like being able to take advantage of disparate cloud providers for service redundancy. As I said earlier, the differentiation between providers will then predominantly come from value-add services, much like it did for virtualization in the past.
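To make the redundancy point concrete, here’s a toy sketch of what a shared API (the sort of thing OpenStack compatibility promises) buys you: the same client code can fail over between providers without being rewritten. The `Provider` class and its methods are hypothetical stand-ins for illustration, not calls from any real SDK.

```python
# Toy illustration of multi-provider failover behind one common interface.
# Provider names and the launch_instance method are made up for this sketch.

class Provider:
    def __init__(self, name, healthy=True):
        self.name = name
        self.healthy = healthy

    def launch_instance(self, image):
        """Pretend to launch a VM; fail if the provider is down."""
        if not self.healthy:
            raise RuntimeError(f"{self.name} is unavailable")
        return f"{self.name}:{image}"

def launch_with_failover(providers, image):
    """Try each provider in turn; a shared API makes this loop trivial."""
    for provider in providers:
        try:
            return provider.launch_instance(image)
        except RuntimeError:
            continue  # fall through to the next provider
    raise RuntimeError("no provider available")

# Primary provider is down, so the workload lands on the secondary.
primary = Provider("cloud-a", healthy=False)
secondary = Provider("cloud-b")
print(launch_with_failover([primary, secondary], "ubuntu-12.04"))
```

With proprietary APIs, each branch of that loop would need provider-specific code; with a common API the failover logic stays this simple.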
This is the beginning of the cloud war, where all the big players throw their hats into the ring and duke it out for our business. It’s a great thing for both businesses and consumers, as the quality of products will increase rapidly and prices will continue on a downhill trend. It’s quite an exciting time, one akin to the virtualization revolution that started almost a decade ago. As always I’ll be following these developments keenly, as the next couple of years will be something of a proving ground for all cloud providers.
Maybe it’s my corporate IT roots but I’ve always thought that the best cloud strategy would be a combination of in-house resources with the ability to offload elsewhere when extra capacity was required. Such a deployment would mean that organisations could design their systems around base loads and have the peaks handled by public clouds, saving them quite a bit of cash whilst still delivering services at an acceptable level. It would also gel well with management types, as not many are completely comfortable being totally reliant on a single provider for any particular service, which in light of recent cloud outages is quite prudent. For someone like myself, I was more interested in setting up a few Azure instances so I could test my code against the real thing rather than the emulator that comes with Visual Studio, as I’ve always found there are certain gotchas that don’t show up until you’re running on a real instance.
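The base-load/peak-load economics can be sketched with some back-of-the-envelope arithmetic. All the prices and capacities below are made-up assumptions for illustration, not real provider rates; the point is only the shape of the comparison.

```python
# Hypothetical cost comparison for the hybrid strategy described above:
# run the steady base load on-premises and burst only the peak to a
# public cloud, versus renting capacity for the full peak all month.
# Every rate here is an invented placeholder, not a quoted price.

def hybrid_cost(base_vms, peak_vms, peak_hours, hours_in_month=730,
                onprem_vm_hour=0.05, cloud_vm_hour=0.12):
    """Monthly cost: base load on-premises, the excess above base
    bursting to a public cloud only during peak hours."""
    onprem = base_vms * hours_in_month * onprem_vm_hour
    burst = max(peak_vms - base_vms, 0) * peak_hours * cloud_vm_hour
    return onprem + burst

def all_cloud_cost(peak_vms, hours_in_month=730, cloud_vm_hour=0.12):
    """Monthly cost when peak capacity is rented in the cloud 24/7."""
    return peak_vms * hours_in_month * cloud_vm_hour

# 20 VMs of steady base load, spiking to 50 VMs for ~60 hours a month:
print(hybrid_cost(20, 50, 60))   # on-prem base plus a short cloud burst
print(all_cloud_cost(50))        # provisioning for the peak all month
```

Under these invented numbers the hybrid arrangement comes out well ahead, which is exactly the "design for base load, burst for peak" saving the paragraph above is describing.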
Now the major cloud providers (Rackspace, AWS, et al.) haven’t really expressed much interest in supporting configurations like this, which makes business sense for them since doing so would more than likely eat into their sales targets. They could license the technology, of course, but that brings with it a whole bunch of other problems, like defining supported configurations and relinquishing some measure of control over the platform so that end users can deploy their own nodes. However I had long thought Microsoft, who has a long history of letting users install its software on their own hardware, would eventually allow Azure to run in some scaled-down fashion to facilitate this hybrid cloud idea.
Indeed many developments in their Azure product seemed to support this, the strongest of which was the VM role, which allowed you to build your own virtual machine and then run it on their cloud. Microsoft have offered their Azure Appliance product for a while as well, allowing large-scale companies and providers the opportunity to run Azure on their own premises. Taking this all into consideration you’d think that Microsoft wasn’t too far away from offering a solution for medium-sized organisations and developers that were seeking to move to the Azure platform but also wanted to maintain some form of control over their infrastructure.
After talking with a TechEd bound mate of mine however, it seems that idea is off the table.
VMware has had their hybrid cloud product (vCloud) available for quite some time, and whilst it satisfies most of the things I’ve been talking about so far it doesn’t have the sexy cloud features like an in-built scalable NoSQL database or binary object storage. Since Microsoft had their Azure product I had assumed they weren’t interested in competing with VMware on the same level, but after seeing one of the TechEd classes and subsequently browsing their cloud site it looks like they’re launching SCVMM 2012 as a direct competitor to vCloud. This means that Microsoft is taking the same route by letting you build your own private cloud, which is essentially just a large pool of shared resources, foregoing any implementation of the features that make Azure so gosh darn sexy.
Figuring that out left me a little disappointed, but I can understand why they’re doing it.
Azure, as great as I think it is, probably doesn’t make sense in a deployment scenario of anything less than a couple of hundred nodes. Much of Azure’s power, like that of any cloud provider, comes from its large number of distributed nodes, which provide redundancy, flexibility and high performance. The Hyper-V based private cloud is then more tailored to the lower end, where enterprises likely want more control than Azure would provide, not to mention that experience in deploying Azure instances is limited to Microsoft employees and precious few from the likes of Dell, Fujitsu and HP. Hyper-V is then the better solution for those looking to deploy a private cloud, and should they want to burst out to a public cloud they’ll have to code their application to be able to do that. Such a feature isn’t impossible, but it is an additional cost that will need to be considered.
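What "coding your application to burst out" might look like in the small: the application itself tracks private capacity and overflows the excess to a public cloud. This is a minimal sketch; `submit_private` and `submit_public` are hypothetical stand-ins (the latter would wrap a real provider’s API in practice), and the capacity model is deliberately simplistic.

```python
# Minimal sketch of application-level "burst out" logic: fill the
# private cloud first, then overflow any excess work to a public cloud.
# Both submit_* functions are invented placeholders, not a real API.

PRIVATE_CAPACITY = 4  # concurrent jobs the on-premises cloud can absorb

def submit_private(job):
    """Placeholder for scheduling a job on the private cloud."""
    return f"private:{job}"

def submit_public(job):
    """Placeholder for handing a job to a public cloud provider."""
    return f"public:{job}"

def dispatch(jobs, capacity=PRIVATE_CAPACITY):
    """Place jobs privately up to capacity, then burst the remainder."""
    placements = []
    for i, job in enumerate(jobs):
        if i < capacity:
            placements.append(submit_private(job))
        else:
            placements.append(submit_public(job))
    return placements

# Six jobs against a private capacity of four: two burst out.
print(dispatch(["job%d" % n for n in range(6)]))
```

Even in a sketch this small you can see the extra cost the paragraph mentions: the application now has to know about two deployment targets, keep credentials for both, and decide where each piece of work lands.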