Posts Tagged ‘windows azure’

Building And Deploying My First Windows Azure App.

I talk a big game when it comes to cloud stuff and for quite a while it was just that: talk. I’ve had a lot of experience in enterprise IT with virtualization and the like, basically all the stuff that powers the cloud solutions we’re so familiar with today, but it wasn’t until recently that I took the plunge and started using the cloud for what it’s good for. There were two factors at work here, the first being that cloud services usually cost money to use and I’m already spending enough on web hosting as it is (this has since been sorted by joining BizSpark), but mostly it was time constraints, as learning to code for the cloud properly is no small feat.

My first foray into developing stuff for the cloud, specifically Windows Azure, was back sometime last year when I had an idea for a statistics website based around StarCraft 2 replays. After finding out that there was a library for parsing all the data I wanted (it’s PHP, but thanks to Phalanger it’s only a few small modifications away from being .NET) I thought it would be cool to see things like how your actions per minute changed over time, along with other stats that aren’t immediately visible through the various other sites that had similar ambitions. With all that in mind I set out to code myself up a web service, and I actually got pretty far with it.

However, due to the enormous amount of work required to get the site working the way I wanted it to, the project ultimately ended up falling flat long before I attempted to deploy it. Still, I learnt valuable lessons about how to structure my data for cloud storage services and the different uses of worker and web roles, and of course got my introduction to ASP.NET MVC, which is arguably the front end of choice for any new cloud application on the Windows Azure platform. I didn’t touch the cloud for a long time after that, until just recently when I made the move to all things Windows 8, which comes hand in hand with Visual Studio 2012.
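For those who haven’t met the web role/worker role split, the idea is simple: the web role answers requests quickly and hands anything slow off to a queue, which a worker role drains in the background. Here’s a rough, language-neutral sketch of that pattern in Python (the real thing would be .NET roles talking to Azure Queue storage; every name below is made up):

```python
import queue

# Stand-in for the Azure Queue that connects the two roles.
tasks = queue.Queue()

def web_role_handle(request):
    """Web role: accept the request, enqueue the slow part, return fast."""
    tasks.put(request)
    return "accepted"

def worker_role_tick(results):
    """Worker role: one iteration of its polling loop.

    Returns True if a job was processed, False if the queue was empty.
    """
    try:
        job = tasks.get_nowait()
    except queue.Empty:
        return False
    results.append(job.upper())  # stand-in for the real heavy lifting
    return True
```

The point of the split is that the web role stays responsive no matter how slow the processing is, and each role can be scaled independently.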

Visual Studio 2010 was a great IDE in its own right but the cloud development experience on it wasn’t particularly great, requiring a fair bit of set up in order to get everything right. Visual Studio 2012, on the other hand, is built with cloud development in mind and my most recent application, which I’m going to keep in stealth until it’s a bit more mature, was an absolute dream to build in comparison to my StarCraft stats application. The emulators remain largely the same but the SDK and tools available are far better than their previous incarnations. Best of all, deploying the application couldn’t be much simpler.

In order to deploy my application onto the production fabric all I had to do was follow the bouncing ball after right clicking my solution and hitting “Publish”. I had already set up my Azure subscription (which Visual Studio picked up on and downloaded the profile file for me) but I hadn’t configured a single thing otherwise and the wizard did everything that was required to get my application running in the cloud. After that my storage accounts were available as a drop down option in the configuration settings for each of the cloud roles, no messing around with copying keys into service definition files or anything. After a few initial teething issues with a service that didn’t behave as expected when its table storage was empty I had the application up and running without incident and it’s been trucking along well ever since.

I really can’t overstate just how damn easy it was to go from idea to development to production using the full Microsoft suite. For all my other applications I’ve usually had to spend a good few days after reaching a milestone configuring my production environment the same way as development, and 90% of the time I won’t remember all the changes I made along the way. With Azure it’s pretty much a simple change to 2 settings files (via dropdowns), publishing and then waiting for the application to go live. Using WebDeploy I can also test code changes without the risk of breaking anything, as a simple reboot of the instances will roll the code back to its previous version. It’s as foolproof as you can make it.

Now if Microsoft brought this kind of ease of development to traditional applications we’d start to see some real changes in the way developers build applications in the enterprise. Since the technology backing the Azure emulator is nothing more than a layer on top of SQL and general file storage I can’t envisage that wrapping it up into an enterprise level product would be too difficult, and then you’d be able to develop real hybrid applications that were completely agnostic of their underlying platform. I won’t harp on about it again as I’ve done that enough already, but suffice it to say I really think it needs to happen.

I’m really looking forward to developing more on the cloud as, with the experience being so seamless, it really reduces the friction I usually encounter when making something available to the public. I might be apprehensive about releasing my application to the public right now, but it’s no longer a question of whether it will work properly (I know it will, since the emulator is pretty darn close to production); it’s now just a question of how many features I want to put in. I’m not denying that the latter could be a killer in its own right, as it has been in the past, but the fewer things I have to worry about the better, and Windows Azure seems like a pretty good platform for alleviating a lot of my concerns.

My Preferred Cloud Strategy.

Working with Microsoft’s cloud over the past couple of months has been a real eye opener. Whilst I used to scoff at all these people eschewing the norms that have served (and continue to serve) us well in favor of the latest technology du jour, I’m starting to see the benefits of their ways, especially with the wealth of resources that Microsoft has on the subject. Indeed the cloud aspects of my latest side project, whilst consuming a good chunk of time at the start, have required almost no tweaking whatsoever, even after I change my data model or a fundamental part of how the service works. There is one architectural issue that continues to bug me, however, and recent events have highlighted why it troubles me so.

The events I’m referring to are the recent outage of Amazon’s Elastic Block Store service that affected a great number of web services. In essence part of the cloud services that Amazon provides, in this case a cloud disk service that for all intents and purposes is the same as a hard drive in your computer, suffered a major outage in one of their availability zones. This meant that for most users in that particular zone who relied on this service to store data, their services would begin to fail, just as your computer would if I ripped its hard drive out whilst you were using it. The cause of the events can be traced back to human error, but it was significantly compounded by the high level of automation in the system, which would be needed for any kind of cloud service at this scale.

For a lot of the bigger users of Amazon’s cloud this wasn’t so much of an issue, since they usually have replicas of their service on the geographically independent mirror of the cloud that Amazon runs. For a lot of users, however, their entire service is hosted in a single location, usually because they can’t afford the additional investment to geographically disperse their services. Additionally you have absolutely no control over any of the infrastructure, so you leave yourself at the mercy of the cloud’s technicians. Granted, they’ve done a pretty good job so far, but you’re still outsourcing risk that you can’t mitigate, or at least not affordably.

My preferred way of doing the cloud, which I’ve liked ever since I started talking to VMware about their cloud offerings back in 2008, is to combine self hosted services with the extensibility of the cloud. Many services don’t need all the power (nor the cost) of running multiple cloud instances and could function quite happily on a few co-hosted servers. Of course there are times when they’d require the extra power to service peak requests, and that’s where the on-demand nature of cloud services would really shine. However, apart from vCloud Express (which is barely getting trialled by the looks of things), none of the cloud operators give you the ability to host a small private cloud yourself and then offload to their big cloud as you see fit, which is a shame since I think it could be one way cloud providers could wriggle their way into some of the big enterprise markets that have shunned them thus far.

Of course there are other ways of getting around the problems that cloud providers might suffer, most of which involve not using them for certain parts of your application. You could also build your app on multiple cloud platforms (there are some that even have compatible APIs now!) but that would add an inordinate amount of complexity to your solution, not to mention doubling the cost of developing it. The hybrid cloud solution feels like the best of both worlds; however, it’s highly unlikely that it’ll become a mainstream solution anytime soon. I’ve heard rumors of Microsoft providing something along those lines, and their new VM Role offering certainly shows the beginnings of that becoming a reality, but I’m not holding my breath. Instead I’ll code a working solution first and worry about scale problems when I get to scale; otherwise I’m just engaging in fancy procrastination.



Microsoft’s Black Magic: A Double Edged Sword.

I’m a really big fan of Microsoft’s development tools. No other IDE that I’ve used to date can hold a candle to the mighty Visual Studio, especially when you couple it with things like ReSharper and the massive online communities dedicated to overcoming any shortcomings you might encounter along the way. The same communities are also responsible for developing many additional frameworks that extend the Microsoft platforms even further, with many of them making their way into official SDKs. There have only been a few times when I’ve found myself treading ground with Microsoft’s tools that no one has trodden before, but every time I have I’ve discovered so much more than I initially set out to.

I’ve come to call these encounters “black magic moments”.

You see, with the ease of developing with a large range of solutions already laid out for you, it becomes quite tempting to slip into the habit of seeking out a completed solution rather than building one of your own. Indeed there were a few design decisions in my previous applications that were driven by this, mostly because I didn’t want to dive under the hood of those solutions to develop the fix for my particular problem. It’s quite surprising how far you can get into developing something by doing this, but eventually the decisions you make will corner you into a place where you have to choose between doing some real development or scrapping a ton of work. Microsoft’s development ideals seem to encourage the latter (in favor of using one of their tried and true solutions) but stubborn engineers like me hate having to do rework.

This of course means diving beneath the surface of Microsoft’s black boxes and poking around to get an idea of what the hell is going on. My first real attempt at this was back in the early days of the Lobaco code base, when I had decided that everything should be done via JSON. Everything was working out quite well until I started trying to POST a JSON object to my web service, whereupon it would throw out all sorts of errors about not being able to de-serialize the object. I spent the better part of 2 days trying to figure that problem out and got precisely nowhere, eventually posting my frustrations to the Silverlight forums. Whilst I didn’t get the actual answer from there, they did eventually lead me down a path that got me there, but the solution is not documented anywhere, nor does it seem that anyone else has attempted such a feat before (or since, for that matter).

I hit another Microsoft black magic moment when I was working on my latest project, which I had decided would be entirely cloud based. After learning my way around the ins and outs of the Windows Azure platform I took it upon myself to migrate the default authentication system built into ASP.NET MVC 3 onto Microsoft’s cloud. Thanks to a couple of handy tutorials the process of doing so seemed fairly easy, so I set about my task, converting everything to the cloud. However upon attempting to use the thing I had just created I was greeted with all sorts of random errors, and no amount of massaging the code would set it straight. After the longest time I found that it came down to a nuance of the Table storage part of Windows Azure, namely the way it structures data.

In essence Azure Tables is one of them new fangled NoSQL type databases and as such it relies on a couple of properties in your object class to uniquely identify a row and provide scalability. These two properties are called PartitionKey and RowKey, and whilst you can leave them alone and your app will still work, it won’t be able to leverage any of the cloud goodness. So in my implementation I had overridden these properties in order to get the scalability that I wanted but had neglected to include any setters for them. This didn’t seem to be a problem when storing objects in Azure Tables, but when querying them it seems that Azure requires the setters to be there, even if they do nothing at all. Adding one in fixed nearly every problem I was encountering and brought me back to another problem I had faced in the past (more on that when I finally fix it!).
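To make that failure mode concrete, here’s a rough Python analogy (the real code was C# against the Azure storage client; every name below is made up). Storing an entity only ever reads the key properties, but hydrating a query result writes every column back through them one at a time, so a getter-only key property fails exactly at query time, and a do-nothing setter is enough to fix it:

```python
class BadEntity:
    """Entity whose partition key is derived from its data, getter only."""
    def __init__(self, name=""):
        self.name = name

    @property
    def partition_key(self):
        # Fine when *storing* the entity: the client only reads this.
        return self.name[:1]

class GoodEntity(BadEntity):
    # The same entity with a no-op setter bolted on.
    @BadEntity.partition_key.setter
    def partition_key(self, value):
        pass  # does nothing at all, yet makes hydration succeed

def hydrate(cls, row):
    """Toy stand-in for a storage client rebuilding a query result:
    it writes every column back through the object's properties."""
    obj = cls()
    for column, value in row.items():
        setattr(obj, column, value)  # needs a setter, even an empty one
    return obj

row = {"name": "fred", "partition_key": "f"}
hydrate(GoodEntity, row)    # works: the no-op setter absorbs the write
# hydrate(BadEntity, row)   # raises AttributeError: no setter
```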

Like any mature framework that does a lot of the heavy lifting for you, Microsoft’s solutions suffer when you start to tread unknown territory. Realistically though this should be expected, and I’ve found I spend the vast majority of my time on less than 20% of the code that ends up making the final solution. The upshot is of course that once these barriers are down progress accelerates at an extremely rapid pace, as I saw with both the Silverlight and iPhone clients for Lobaco. My cloud authentication services are nearly ready for prime time, and since I struggled so much with this I’ll be open sourcing my solution so that others can benefit from the numerous hours I spent on the problem. It will be my first ever attempt at open sourcing something I created, and the prospect both thrills and scares me, but I’m looking forward to giving back a little to the communities that have given me so much.

The Killer Proprietary Cloud.

So I might be coming around to the whole cloud computing idea, despite my history of being sceptical of it. Whilst I’ve yet to go any further than researching the possibilities, the potential of the cloud to eliminate a lot of the problems encountered when scaling would mean a lot less time spent on those issues and more on developing a better product. However whilst the benefits of the cloud are potentially quite large there’s also the glaring issue of vendor dependency and lock-in, as no two cloud providers are the same nor are there any real standards around the whole cloud computing idea. This presents a very large problem both for cloud native services and for those looking to migrate to a cloud platform, as once you’ve made your choice you’re pretty much locked in unless you’re willing to pay for significant rework.

Right now my platform of choice is looking to be Windows Azure. Primarily this is because of platform familiarity: whilst the cloud might be a whole new computing paradigm, services built on the traditional Microsoft platform won’t have a hard time being migrated across. Additionally they’ve got a fantastic offer for MSDN subscribers, giving them a small instance and a whole swath of other goodies to get them off the ground. This is good news for aspiring entrepreneurs like myself, as Microsoft offers a free MSDN Premium subscription to start-ups (called BizSpark) that have been together for less than 3 years and have less than $1 million in revenue. However as I was comparing this to the other cloud providers out there I noticed that no two were alike; in fact they were all at odds with each other.

Take the biggest cloud provider out there, Amazon’s EC2. Whilst the compute instances are pretty comparable, since they’re just the operating system, the other services (file storage, databases) are worlds away from each other, and it’s not just the API calls that are different either. Amazon’s cloud offering is architecturally different from Microsoft’s, so much so that any code written for Azure would have to be wholly rewritten in order to function on Amazon’s cloud. This means that when it comes time to move your service into the cloud you’ll have to make sure you trust the provider you’re going with. You could also double the entire budget and keep half of it in reserve should it ever come to that, but I doubt many people have that luxury.

Like the format wars that have raged for the past century, such incompatibility between cloud providers only serves to harm the consumers of those services. Whilst I’m not suggesting that there be one and only one cloud provider (corporations really can’t be trusted with monopolies and I’d hate to think what a government sanctioned cloud would look like), what’s missing from the current cloud environment is the level of interoperability that we’re so used to seeing in this Internet enabled age. The good news is that I’m not the only one to notice issues like this, and there are several movements working towards an open set of standards for cloud operators to adopt. This would not only provide the level of interoperability that’s currently lacking in the cloud world but would also give customers more confidence when working with smaller cloud operators, knowing that they wouldn’t be left high and dry should those operators fail.

Like any technology in its infancy, cloud computing still has a ways to go before it can be counted amongst its more mature predecessors. Still, the idea has proven itself to be viable, usable and, above all, capable of delivering on some of the wild promises it made back when it was still called SaaS. With D-day fast approaching for Lobaco I’ll soon be wandering into the cloud myself, and I’m sure I’ll have much more to write about it when the time comes. For now though I’m happy to say that my previous outlook on the cloud was wrong, despite the problems I’ve cited here today.

There Might Just be a Silver Lining to the Cloud.

Cloud computing: it’s no secret that I don’t buy wholly into the idea, mostly because everyone talking about it either a) doesn’t understand it or b) has forgotten that it’s an old paradigm that was tried a long time ago and failed for many reasons. Still, as an IT professional who likes to stay current with emerging trends, I’ve been researching the various cloud offerings to make sure I’m up to speed on them should a manager get the bright idea to try and use them in our environment. There’s also the flip side that my chosen specialisation vendor, VMware, has their own cloud product aimed at building your own internal cloud for hosting all your applications. Whilst I still sit on the sceptical fence about the cloud idea as a whole, there are some fundamental underpinnings that I think I can make use of in my current endeavours. I might even stop feeling dirty every time I mention the cloud.

As any budding start up engineer will tell you, one of the things that always plays on the back of your mind is how you’re going to scale your application out should it get popular. I’ve had a lot of experience with scaling corporate applications and so thought I’d have a decent handle on how to scale my application out. To give you an idea of how most corporate apps scale, it’s usually along the lines of adding additional servers to each tier to handle the load and then load balancing across them (or, in simple terms, throwing more hardware at it). This works well when you’ve got buckets of money and your own data centre to play with, but us lowly plebs trying to break out into the real world face similar problems of scalability without the capital to back us up.
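That “throw more hardware at it” model boils down to putting a balancer in front of a pool of identical servers and growing the pool as load grows. A minimal sketch of the idea in Python (the server names are entirely hypothetical):

```python
import itertools

class RoundRobinBalancer:
    """Distributes requests across a pool of identical servers,
    the classic 'add more boxes to the tier' scaling model."""
    def __init__(self, servers):
        self._servers = list(servers)
        self._cycle = itertools.cycle(self._servers)

    def pick(self):
        # Each request goes to the next server in the rotation.
        return next(self._cycle)

    def add_server(self, server):
        # Scaling out: append hardware and rebuild the rotation.
        self._servers.append(server)
        self._cycle = itertools.cycle(self._servers)

balancer = RoundRobinBalancer(["web-01", "web-02"])
# balancer.pick() -> "web-01", then "web-02", then "web-01", ...
```

The balancer itself is trivial; the cost the paragraph above complains about is buying each new box and keeping its state in sync with the rest of the tier.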

Right now I host most of my stuff directly off my home connection, on a single server box that I cobbled together for $300 some years ago. It’s done me well and is more than grunty enough to handle this web site, but anything above that has seen it crumble under the pressure, sometimes spectacularly. When I was looking for hosting solutions for Lobaco I knew that shared hosting wasn’t going to cut it, and getting a real server would cost far more than I was willing to pay at this early stage. In the end I found myself getting a Virtual Private Server from SoftSys Hosting for just under $600/year. At the time it was the perfect solution, as it let me mirror my current test environment with the additional benefit of being backed by a huge pipe and enterprise level hardware. It’s been so good that I’m even considering moving this blog up there, if for no other reason than it won’t go down again just because the net dropped out at my place.

However my original ideas of scaling out the application don’t gel too well with the whole VPS idea. You see, scaling out in that fashion would see me buying several of these, each with the same price tag or higher. Just a simple load balanced web and database server farm (two of each) would set me back $2,400/year, neglecting any other costs incurred to get it working. After looking at my various options I begrudgingly started looking at other solutions, and that’s when I started to take cloud computing a little more seriously.

Whilst I’ve still only scratched the surface of most offerings, the most compelling came in the form of Windows Azure. Apart from the fact that it’s Microsoft, and should therefore be blindingly easy to use, the fact that they provide free accounts (with discounted rates thereafter) to budding entrepreneurs like myself got me intrigued. A couple of Google searches later showed that porting my WCF based services to the Azure platform shouldn’t prove too difficult, and they provide built-in load balancing to boot. The pricing model is also attractive as it’s on a unit basis: you only pay for what you actually use. Azure then could easily provide the required scalability without breaking the bank, leaving me to focus on more pressing issues than whether or not it will work with a decent chunk of users.
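To illustrate what “unit basis” pricing means in practice, here’s a toy billing model in Python. The meters and the rates are entirely made up for the example; they are not Azure’s actual prices:

```python
def monthly_bill(instance_hours, gb_stored, transactions,
                 hourly_rate=0.12, storage_rate=0.15, txn_rate=0.01):
    """Toy unit-based bill: every resource is metered separately and
    you pay only for what you consume (all rates hypothetical)."""
    return (instance_hours * hourly_rate            # compute time
            + gb_stored * storage_rate              # storage held
            + (transactions / 10_000) * txn_rate)   # storage operations

# One small instance running a 720 hour month with 5 GB stored and
# 100,000 transactions comes to roughly $87 at these made-up rates,
# and an idle app costs almost nothing, which is the appeal over a
# fixed-price VPS.
```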

Whilst I won’t be diving into the clouds just yet (the iPhone, she beckons) it’s now on the cards to port Lobaco across so that when it comes time to launch it I won’t have to watch my server like a hawk to make sure it isn’t dying a fiery death under the load. I don’t think the cloud will be the solution to all my scalability issues that might come up in the future but it’s looking more and more like a viable option that will enable me to build a robust service for a good number of users. Then once I’ve got my act together I can start planning out a real solution.

With blackjack, and hookers.