Posts Tagged 'bizspark'

When Spending Limits Go Awry: An Azure Story.

As long-time readers will know, I'm quite keen on Microsoft's Azure platform and, whilst I haven't released anything on it yet, I do have a couple of projects running on it right now. For the most part it's been great: previously I'd have to spend a lot of time getting my development environment right and then translate that setup onto another server to make sure everything worked as expected. Whilst this wasn't beyond my capability, it was time burnt on activities that weren't pushing the project forward, and it was often the reason I lost interest in a project altogether.

Of course, as I continue down the Azure path I've run into the many limitations, gotchas and ideology clashes that have caused me several headaches over the past couple of years. Most of them can be traced back to my decision to use Azure Table Storage: my first post on Azure development was about running up against limitations I wasn't completely aware of, and several more posts followed dedicated to overcoming the shortcomings of Microsoft's NoSQL storage backend. Since then I've delved into other aspects of the Azure platform, but today I'm not going to talk about any of the technology per se. No, today I'm going to tell you what happens when you hit your subscription or spending limit, something that can happen with only a couple of mouse clicks.

Azure Spending Limit

I'm currently on a program called Microsoft BizSpark, a kind of partner program whereby Microsoft and several other companies provide resources to people looking to build their own start-ups. Among the many awesome benefits I get from this (including an MSDN subscription that gives me access to most of the Microsoft catalogue of software, all for free) Microsoft also provides an Azure subscription with access to a certain amount of resources. Probably the best part of the offer is the 1,500 hours of free compute time, enough to run 2 small instances 24/7 (2 instances × 24 hours × 31 days = 1,488 hours, just under the cap). I've also got access to the upcoming Azure Websites functionality, which I used for a website I developed for a friend's wedding. However, just before the wedding was about to go ahead the website suddenly became unavailable and I went to investigate why.

As it turned out I had somehow hit my compute hours limit for that month, which results in all of your services being suspended until the rollover period. It appears this was due to me switching the website from the free tier to the shared tier, which then counts as consuming compute hours whenever someone hits the site. Removing the no-spend block did not immediately resolve the issue, however a support query to Microsoft saw the website back online within an hour. My other project, the one that would have been chewing up the lion's share of those compute hours, seemed to have up and disappeared even though the environment was still largely intact.

This is in fact expected behaviour when you hit either your subscription or spending limit for a particular month. Suspended VMs on Windows Azure don't count as being inactive and will thus continue to cost you money even whilst they're not in use. To get around this, should you hit your spending limit those VMs are deleted, saving you money but also causing potential data loss. This might not be an issue for most people (for me all it entailed was republishing them from Visual Studio) but should you be storing anything critical on the local storage of an Azure role it will be gone forever. Whilst the nature of the cloud should make you wary of keeping anything important anywhere other than durable storage (like Azure Tables, SQL or blob storage), it's still a gotcha that you probably wouldn't be aware of until you ran into a situation similar to mine.
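To make that distinction concrete, here's a rough sketch of the two places role code can write data, assuming a standard web/worker role with a local storage resource defined in its service definition and the v1.x storage client library; the resource, container and setting names are just placeholders I've made up for illustration:

```csharp
// Rough sketch only: "ScratchSpace", "state" and "StorageConnectionString" are
// hypothetical names, and the calls assume the v1.x StorageClient library.
using System.IO;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.ServiceRuntime;
using Microsoft.WindowsAzure.StorageClient;

public static class StateStore
{
    // Ephemeral: written to the role instance's local disk, which disappears
    // along with the deployment when a spending limit sees it deleted.
    public static void SaveToLocalDisk(string fileName, string contents)
    {
        LocalResource scratch = RoleEnvironment.GetLocalResource("ScratchSpace");
        File.WriteAllText(Path.Combine(scratch.RootPath, fileName), contents);
    }

    // Durable: persisted to blob storage, which lives on independently of any
    // one deployment and survives the instances being torn down.
    public static void SaveToBlobStorage(string blobName, string contents)
    {
        CloudStorageAccount account = CloudStorageAccount.Parse(
            RoleEnvironment.GetConfigurationSettingValue("StorageConnectionString"));
        CloudBlobContainer container =
            account.CreateCloudBlobClient().GetContainerReference("state");
        container.CreateIfNotExist(); // CreateIfNotExists() in later SDK versions
        container.GetBlobReference(blobName).UploadText(contents);
    }
}
```

Anything written via the first method lives and dies with the deployment; anything written via the second survives the VMs being deleted, which is exactly what you want when a spending limit pulls the rug out from under you.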

Like any platform there are certain aspects of Windows Azure that you have to plan for, and chief among them are your spending limits. It's pretty easy to simply put in your credit card details and then go crazy provisioning as many VMs as you want, but sooner or later you'll be looking to put limits on that spending, and it's then that you have the potential to run into these kinds of issues.

 

The Killer Proprietary Cloud.

So I might be coming around to the whole cloud computing idea despite my history of being sceptical of it. Whilst I've yet to go any further than researching the possibilities, the potential of the cloud to eliminate a lot of the problems encountered when scaling would mean a lot less time spent on those issues and more on developing a better product. However, whilst the benefits of the cloud are potentially quite large, there's also the glaring issue of vendor dependency and lock-in, as no two cloud providers are the same nor are there any real standards around the whole cloud computing idea. This presents a very large problem both for cloud-native services and for those looking to migrate to a cloud platform, as once you've made your choice you're pretty much locked in unless you're willing to pay for significant rework.

Right now my platform of choice is looking to be Windows Azure. Primarily this is because of platform familiarity: whilst the cloud might be a whole new computing paradigm, services built on the traditional Microsoft stack won't have a hard time being migrated across. Additionally they've got a fantastic offer for MSDN subscribers, giving them a small instance and a whole swath of other goodies to get them off the ground. This is good news for aspiring entrepreneurs like myself, as Microsoft offers a free MSDN Premium subscription (through its BizSpark program) to start-ups that have been together for less than 3 years with less than $1 million in revenue. However, as I was comparing this to the other cloud providers out there I noticed that no two were alike; in fact they were all at odds with each other.

Take the biggest cloud provider out there, Amazon's EC2. Whilst the compute instances are pretty comparable, since they're just the operating system, the other services (file storage, databases) are worlds apart, and it's not just the API calls that are different either. Amazon's cloud offering is architecturally different from Microsoft's, so much so that any code written for Azure would have to be wholly rewritten in order to function on their cloud. This means that when it comes time to move your service into the cloud you'll have to make sure you trust the provider you're going with. You could also double the entire budget and keep half of it in reserve for a rewrite should it ever come to that, but I doubt many people have that luxury.
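To give you a feel for just the API side of that incompatibility, here's a rough sketch of the same operation, uploading a single file to each provider's object store, written against the .NET SDKs of the era; the container/bucket names and credential settings are placeholders, and exact method names differ between SDK versions:

```csharp
// Rough sketch only: "backups", "state.xml" and the environment variables are
// placeholders; method names vary between SDK versions.
using System;
using Amazon.S3;
using Amazon.S3.Model;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

class BlobUploadComparison
{
    static void Main()
    {
        string payload = "<state>example</state>";

        // Windows Azure: connection string -> blob client -> container -> blob.
        CloudStorageAccount account = CloudStorageAccount.Parse(
            Environment.GetEnvironmentVariable("AZURE_STORAGE_CONNECTION_STRING"));
        CloudBlobContainer container =
            account.CreateCloudBlobClient().GetContainerReference("backups");
        container.CreateIfNotExist();
        container.GetBlobReference("state.xml").UploadText(payload);

        // Amazon S3: access key pair -> client -> PutObject request. Same logical
        // operation, completely different object model and calling convention.
        using (var s3 = new AmazonS3Client(
            Environment.GetEnvironmentVariable("AWS_ACCESS_KEY_ID"),
            Environment.GetEnvironmentVariable("AWS_SECRET_ACCESS_KEY")))
        {
            s3.PutObject(new PutObjectRequest
            {
                BucketName = "backups",
                Key = "state.xml",
                ContentBody = payload
            });
        }
    }
}
```

Neither snippet is anything more than a toy, but even at this size there's no common interface to code against; scale that up to queues, tables and the deployment model and you can see how big the rewrite gets.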

Like the format wars that have raged for the past century, such incompatibility between cloud providers only serves to harm the consumers of those services. Whilst I'm not suggesting that there be one and only one cloud provider (corporations really can't be trusted with monopolies and I'd hate to think what a government-sanctioned cloud would look like), what's missing from the current cloud environment is the level of interoperability that we're so used to seeing in this Internet-enabled age. The good news is that I'm not the only one to notice this, and there are several movements working towards an open set of standards for cloud operators to adopt. This would not only provide the interoperability that's currently lacking in the cloud world but would also give customers more confidence when working with smaller cloud operators, knowing they wouldn't be left high and dry should one of them fail.

Like any technology in its infancy, cloud computing still has a ways to go before it can be counted amongst its more mature predecessors. Still, the idea has proven itself to be viable, usable and above all capable of delivering on some of the wild promises made back when it was still called SaaS. With d-day fast approaching for Lobaco I'll soon be wandering into the cloud myself, and I'm sure I'll have much more to write about it when the time comes. For now though I'm happy to say that my previous outlook on the cloud was wrong, despite the problems I've cited here today.