Posts Tagged 'aws'

Google’s App Engine Available For On Premises Deployment.

The public cloud is a great solution to a wide selection of problems, however there are times when its use is simply not appropriate. This is typical of organisations that have specific requirements around how their data is handled, usually due to data sovereignty or regulatory compliance. Whilst the public cloud is a great way to bolster your infrastructure on the cheap (although that's debatable once you start ramping up your VM sizes), it doesn't take advantage of the investments in infrastructure that you've already made. For large, established organisations those investments are not insignificant, which is why many of them have been reluctant to transition fully to public cloud based services. This is why I believe the future of the cloud will be paved with hybrid solutions, something I've been saying for years now.

Microsoft has finally shown that they've understood this with the release of the Windows Azure Pack for Server 2012 R2. Sure, there were the beginnings of it with SCVMM 2012 allowing you to add in your Azure account and move VMs up there, but that kind of thing has been available for ages through hosting partners. The Azure Pack, on the other hand, brings features that were hidden behind the public cloud wall down to the private level, allowing you to make full use of them without having to rely on Azure. If I'm honest I thought that Microsoft would probably be the only ones to try this, given their presence in both the cloud and enterprise space, but it seems other companies have begun to notice the hybrid trend.

Google App Engine

Google has been working with the engineers at Red Hat to produce the Test Compatibility Kit for Google App Engine. Essentially this kit provides the framework for verifying the API level functionality of a private Google App Engine implementation, something which is achievable through an application called CapeDwarf. The vast majority of the App Engine functionality is contained within that application, enough so that current developers on the platform could conceivably run their code on on-premises infrastructure if they so wished. There doesn't appear to be a bridge between the two currently, like there is with Azure, as CapeDwarf utilises its own administrative console.
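To give a sense of what API-level compatibility means here, below is a minimal sketch (my own, not something from the Test Compatibility Kit) of code written against App Engine's Java Datastore API. In theory the same class should run unchanged whether it's deployed to App Engine proper or to a CapeDwarf instance on JBoss, since CapeDwarf supplies its own implementation behind the same API.

```java
import com.google.appengine.api.datastore.DatastoreService;
import com.google.appengine.api.datastore.DatastoreServiceFactory;
import com.google.appengine.api.datastore.Entity;
import com.google.appengine.api.datastore.EntityNotFoundException;
import com.google.appengine.api.datastore.Key;

public class GreetingStore {

    // The factory hands back whichever datastore implementation the runtime
    // provides: Google's on App Engine, CapeDwarf's backing store on JBoss.
    private final DatastoreService datastore =
            DatastoreServiceFactory.getDatastoreService();

    // Persist a simple entity and return its generated key.
    public Key saveGreeting(String author, String content) {
        Entity greeting = new Entity("Greeting");
        greeting.setProperty("author", author);
        greeting.setProperty("content", content);
        return datastore.put(greeting);
    }

    // Fetch the entity back by key and return its content property.
    public String loadGreeting(Key key) throws EntityNotFoundException {
        return (String) datastore.get(key).getProperty("content");
    }
}
```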

They've done the right thing by partnering with Red Hat, as otherwise they'd lack the penetration in the enterprise market to make this a worthwhile endeavour. I don't know how much presence JBoss/OpenShift has though, so it might be less about using existing infrastructure and more about getting Google's platform into more places than it currently is. I can't seem to find any solid¹ market share figures to see how Google currently rates compared to the other primary providers, but I'd hazard a guess they're similar to Azure, i.e. far behind Rackspace and Amazon. The argument could be made that such software would hurt their public cloud product, but I feel these kinds of solutions are the foot in the door needed to get organisations thinking about using these services.

Whilst my preferred cloud is still Azure I'm still a firm believer that the more options we have to realise the hybrid dream the better. We're still a long way from having truly portable applications that can move freely between private and public platforms, but the roots are starting to take hold. Given the rapid pace of IT innovation I'm confident that the next couple of years will see the hybrid dream fully realised, and then I'll finally be able to stop pining for it.

¹This article suggests that Microsoft has 20% of the market which, since Microsoft has raked in $1 billion, would peg the total market at some $5 billion, a figure way out of line with what Gartner says. If you know of some solid cloud platform figures I'd like to see them, as apart from AWS being number 1 I can't find much else.

The Hybrid Cloud Paradigm Clash.

Maybe it's my corporate IT roots but I've always thought that the best cloud strategy would be a combination of in-house resources with the ability to offload elsewhere when extra capacity was required. Such a deployment would mean that organisations could design their systems around base loads and have the peaks handled by public clouds, saving them quite a bit of cash whilst still delivering services at an acceptable level. It would also gel well with management types, as not many are completely comfortable being totally reliant on a single provider for any particular service, which in light of recent cloud outages is quite prudent. For someone like myself, I was more interested in setting up a few Azure instances so I could test my code against the real thing rather than the emulator that comes with Visual Studio, as I've always found there are certain gotchas that don't show up until you're running on a real instance.
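To make that base-load-versus-peak idea concrete, here's a purely hypothetical sketch of the sort of decision logic involved; OnPremCluster and PublicCloudClient are stand-in interfaces of my own invention, not any real SDK.

```java
// Hypothetical illustration only: keep the base load in house and burst the
// peak out to a public cloud once on-premises utilisation passes a threshold.
public class BurstScheduler {

    private static final double BASE_LOAD_THRESHOLD = 0.80; // fraction of on-prem capacity

    private final OnPremCluster onPrem;
    private final PublicCloudClient publicCloud;

    public BurstScheduler(OnPremCluster onPrem, PublicCloudClient publicCloud) {
        this.onPrem = onPrem;
        this.publicCloud = publicCloud;
    }

    // Route a unit of work based on how loaded the in-house infrastructure is.
    public void submit(Runnable job) {
        if (onPrem.currentUtilisation() < BASE_LOAD_THRESHOLD) {
            onPrem.run(job);      // base load stays on the kit you already own
        } else {
            publicCloud.run(job); // the peak gets offloaded to the public cloud
        }
    }
}

// Stand-in abstractions; a real deployment would back these with actual
// virtualisation and cloud provider APIs.
interface OnPremCluster {
    double currentUtilisation();
    void run(Runnable job);
}

interface PublicCloudClient {
    void run(Runnable job);
}
```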

Now the major cloud providers (Rackspace, AWS, et al.) haven't really expressed much interest in supporting configurations like this, which makes business sense for them since doing so would more than likely eat into their sales targets. They could license the technology of course, but that brings with it a whole bunch of other problems, like defining supported configurations and giving up some measure of control over the platform so that end users can deploy their own nodes. However I had long thought Microsoft, which has a long history of letting users install stuff on their own hardware, would eventually allow Azure to run in some scaled-down fashion to facilitate this hybrid cloud idea.

Indeed many developments in their Azure product seemed to support this, the strongest of which was the VM role, which allowed you to build your own virtual machine and then run it on their cloud. Microsoft has also offered their Azure Appliance product for a while, giving large scale companies and providers the opportunity to run Azure on their own premises. Taking all this into consideration you'd think that Microsoft wasn't too far away from offering a solution for medium-sized organisations and developers who wanted to move to the Azure platform but also wanted to maintain some form of control over their infrastructure.

After talking with a TechEd-bound mate of mine, however, it seems that idea is off the table.

VMware has had its hybrid cloud product (vCloud) available for quite some time and, whilst it satisfies most of what I've been talking about so far, it doesn't have the sexy cloud features like an in-built scalable NoSQL database or binary object storage. Since Microsoft had their Azure product I had assumed they weren't interested in competing with VMware on the same level, but after seeing one of the TechEd classes and subsequently browsing their cloud site it looks like they're launching SCVMM 2012 as a direct competitor to vCloud. This means that Microsoft is taking the same route by letting you build your own private cloud, which is essentially just a large pool of shared resources, forgoing any implementation of the features that make Azure so gosh darn sexy.
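For concreteness, "binary object storage" is the sort of thing sketched below, assuming the Azure Storage SDK for Java (the connection string, container and blob names are placeholders); it's exactly this kind of platform service that a plain pool of virtualised hosts doesn't give you out of the box.

```java
import com.microsoft.azure.storage.CloudStorageAccount;
import com.microsoft.azure.storage.blob.CloudBlobClient;
import com.microsoft.azure.storage.blob.CloudBlobContainer;
import com.microsoft.azure.storage.blob.CloudBlockBlob;

public class BlobStorageExample {

    public static void main(String[] args) throws Exception {
        // Placeholder connection string; a real one carries your account name and key.
        String connection = "DefaultEndpointsProtocol=https;"
                + "AccountName=<account>;AccountKey=<key>";

        CloudStorageAccount account = CloudStorageAccount.parse(connection);
        CloudBlobClient client = account.createCloudBlobClient();

        // Containers group blobs (binary objects) under a storage account.
        CloudBlobContainer container = client.getContainerReference("reports");
        container.createIfNotExists();

        // Upload a small piece of text as a block blob.
        CloudBlockBlob blob = container.getBlockBlobReference("hello.txt");
        blob.uploadText("Hello from the public cloud");
    }
}
```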

Figuring that out left me a little disappointed, but I can understand why they’re doing it.

Azure, as great as I think it is, probably doesn't make sense in a deployment scenario of anything less than a couple of hundred nodes. Much of Azure's power, like that of any cloud provider, comes from its large number of distributed nodes, which provide redundancy, flexibility and high performance. The Hyper-V based private cloud, then, is more tailored to the lower end, where enterprises likely want more control than Azure would provide, not to mention that experience in deploying Azure instances is limited to Microsoft employees and precious few from the likes of Dell, Fujitsu and HP. Hyper-V is therefore the better solution for those looking to deploy a private cloud, and should they want to burst out to a public cloud they'll just have to code their application to be able to do that. Such a feature isn't impossible, but it is an additional cost that will need to be considered.