Posts Tagged ‘cloud computing’


Azure Tables: Watch Out For Closed Connections.

Windows Azure Tables are one of those newfangled NoSQL-type databases that excel at storing giant swathes of structured data. For what they are they’re quite good, as you can store very large amounts of data without having to pay through the nose like you would for a traditional SQL server or an Azure SQL instance. However, that advantage comes at a cost: querying the data on anything but the partition key (think of it as a partition of the data within a table) and the row key (the unique identifier within that partition) results in queries that take quite a while to run, especially when compared to their SQL counterparts. There are ways to get around this, however no matter how well you structure your data you’ll eventually run up against this limitation, and that’s where things start to get interesting.

By default whenever you do a large query against an Azure Table you’ll only get back 1,000 records, even if the query would return more. If your query does have more results than that you can access them via a continuation token that you add to your original query, telling Azure that you want the records past that point. For those of us coding on the native .NET platform we get the lovely benefit of having all of this handled for us by simply adding .AsTableServiceQuery() to the end of our LINQ statements (if that’s what you’re using), which takes care of the continuation tokens for us. For most applications this is great as it means you don’t have to fiddle around with the rather annoying process of extracting those tokens out of the response headers.
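
To make that concrete, here’s a minimal sketch using the Azure Storage Client Library of the era; the entity, table and partition key names are purely illustrative, and the development storage account is used so it runs against the emulator.

using System;
using System.Linq;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

// Illustrative entity; TableServiceEntity supplies PartitionKey, RowKey and Timestamp.
public class ReplayEntity : TableServiceEntity
{
    public string PlayerName { get; set; }
    public int ActionsPerMinute { get; set; }
}

public static class ContinuationExample
{
    public static void Main()
    {
        var account = CloudStorageAccount.DevelopmentStorageAccount;
        var context = account.CreateCloudTableClient().GetDataServiceContext();

        // Without AsTableServiceQuery() the service stops at 1,000 entities and hands back
        // continuation tokens in the response headers; with it, the resulting CloudTableQuery
        // follows those tokens for you as you enumerate.
        var query = (from r in context.CreateQuery<ReplayEntity>("Replays")
                     where r.PartitionKey == "player-1234"
                     select r)
                    .AsTableServiceQuery();

        foreach (var replay in query)
        {
            Console.WriteLine("{0}: {1} APM", replay.PlayerName, replay.ActionsPerMinute);
        }
    }
}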

Of course that leads you down the somewhat lazy path of not thinking about the kinds of queries you’re running against your Tables, and this can lead to problems down the line. Since Azure is a shared service there are upper limits on how long queries can run and how much data they can return to you. These limits aren’t exactly set in stone: depending on how busy the particular server you’re querying is, or the current network utilization, your query could either take an incredibly long time to return or simply end up getting closed off. Anyone who’s developed for Azure in the past will know that this is pretty common, even for the more robust services like Azure SQL, but there’s one thing I’ve noticed over the past couple of weeks that I haven’t seen mentioned anywhere else.

As the above paragraphs might indicate I have a lot of queries that try to grab big chunks of data from Azure Tables and I have, of course, coded in RetryPolicies so they’ll keep at it should they fail. There’s one thing that all the policies in the world won’t protect you from, however, and that’s connections that are forcibly closed. I’ve had quite a few of these recently and I noticed that they appear to come in waves, rippling through all my threads, causing unhandled exceptions and forcing them to restart themselves. I’ve done my best to optimize the queries since then and the errors have mostly subsided, but it appears that should one long running query trigger Azure to force the connection closed, all connections from that instance to the same Table storage will also be closed.
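
As a rough sketch of what that looks like in code (again with illustrative names, and reusing the ReplayEntity type from the earlier example): a retry policy on the table client covers the transient failures, but a forcibly closed connection still tends to surface as an exception that each thread has to deal with itself.

using System;
using System.IO;
using System.Linq;
using System.Net;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

public static class RetryExample
{
    public static void Run()
    {
        var account = CloudStorageAccount.DevelopmentStorageAccount;
        var tableClient = account.CreateCloudTableClient();

        // Retries cover the transient stuff: timeouts, throttling and the like.
        tableClient.RetryPolicy = RetryPolicies.RetryExponential(
            RetryPolicies.DefaultClientRetryCount,
            RetryPolicies.DefaultClientBackoff);

        var context = tableClient.GetDataServiceContext();
        var query = (from r in context.CreateQuery<ReplayEntity>("Replays")
                     where r.PartitionKey == "player-1234"
                     select r)
                    .AsTableServiceQuery();

        try
        {
            var results = query.ToList();
        }
        catch (StorageClientException ex)
        {
            // Errors the storage client surfaces after its retries are exhausted.
            Console.WriteLine("Storage error: {0}", ex.Message);
        }
        catch (WebException ex)
        {
            // A connection forcibly closed by the remote host tends to land here
            // (or as an IOException), outside anything the retry policy will save you from.
            Console.WriteLine("Connection dropped: {0}", ex.Message);
        }
        catch (IOException ex)
        {
            Console.WriteLine("Connection dropped: {0}", ex.Message);
        }
    }
}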

Depending on how your application is coded this might not be an issue; for mine, however, where the worker role has about 8 concurrent threads running at any one time, all attempting to access the same Table Storage account, it means one long running query that gets terminated triggers a cascade of failures across the rest of the threads. For the most part this was avoided by querying directly on row and partition keys, however the larger queries had to be broken up using the continuation tokens and the results concatenated in memory. This introduces another limit on particular queries (as storing large lists in memory isn’t particularly great) which you’ll have to architect your code around. It’s by no means an unsolvable problem, but it has forced me to rethink certain parts of my application which will probably need to be on Azure SQL rather than Azure Tables.

Like any cloud platform Azure is a great service, but it requires you to understand what its various services are good for and what they’re not. I initially set out to use Azure Tables for everything and have since found that it’s simply not appropriate for that, especially if you need to query on parameters that aren’t the row or partition keys. If you have connections being closed on you inexplicably, be sure to check for any potentially long running queries on the same role; as this post can attest, they could very well be the source of what ails you.


Building And Deploying My First Windows Azure App.

I talk a big game when it comes to cloud stuff and for quite a while it was just that: talk. I’ve had a lot of experience in enterprise IT with virtualization and the like, basically all the stuff that powers the cloud solutions we’re so familiar with today, but it wasn’t until recently that I took the plunge and actually started using the cloud for what it’s good for. There were two factors at work here: the first being that cloud services usually cost money to use and I’m already spending enough on web hosting as it is (this has since been sorted by joining BizSpark), but mostly it was time constraints, as learning to code for the cloud properly is no small feat.

My first foray into developing stuff for the cloud, specifically Windows Azure, was back sometime last year when I had an idea for a statistics website based around StarCraft 2 replays. After finding out that there was a library for parsing all the data I wanted (it’s PHP, but thanks to Phalanger it’s only a few small modifications away from being .NET) I thought it would be cool to see things like how your actions per minute changed over time, and other stats that aren’t immediately visible through the various other sites that had similar ambitions. With that all in mind I set out to code myself up a web service, and I actually got pretty far with it.

However due to the enormous amount of work required to get the site working the way I wanted it to, the project ultimately ended up falling flat long before I attempted to deploy it. Still, I learnt all the valuable lessons of how to structure my data for cloud storage services, the different uses of worker and web roles and, of course, got my introduction to ASP.NET MVC, which is arguably the front end of choice for any new cloud application on the Windows Azure platform. I didn’t touch the cloud for a long time after that, until just recently when I made the move to all things Windows 8, which comes hand in hand with Visual Studio 2012.

Whilst Visual Studio 2010 was a great IDE in its own right, the cloud development experience on it wasn’t particularly great, requiring a fair bit of set up in order to get everything right. Visual Studio 2012 on the other hand is built with cloud development in mind and my most recent application, which I’m going to keep in stealth until it’s a bit more mature, was an absolute dream to build in comparison to my StarCraft stats application. The emulators remain largely the same but the SDK and tools available are far better than their previous incarnations. Best of all, deploying the application couldn’t be much simpler.

In order to deploy my application onto the production fabric all I had to do was follow the bouncing ball after right clicking my solution and hitting “Publish”. I had already set up my Azure subscription (which Visual Studio picked up on and downloaded the profile file for me) but I hadn’t configured a single thing otherwise, and the wizard did everything required to get my application running in the cloud. After that my storage accounts were available as a drop down option in the configuration settings for each of the cloud roles, with no messing around copying keys into service definition files or anything. After a few initial teething issues with a service that didn’t behave as expected when its table storage was empty, I had the application up and running without incident and it’s been trucking along well ever since.
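
For completeness, this is roughly how a role then consumes what the wizard wrote into the service configuration at runtime; “DataConnectionString” is just the conventional setting name, use whatever you picked in the role’s properties page.

using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.ServiceRuntime;

public static class StorageConfig
{
    public static CloudStorageAccount GetAccount()
    {
        // Reads the value the publish wizard / role designer put into
        // ServiceConfiguration.cscfg -- no account keys pasted in by hand.
        var connectionString =
            RoleEnvironment.GetConfigurationSettingValue("DataConnectionString");

        return CloudStorageAccount.Parse(connectionString);
    }
}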

I really can’t overstate just how damn easy it was to go from idea to development to production using the full Microsoft suite. For all my other applications I’ve usually had to spend a good few days after reaching a milestone configuring my production environment the same way as development, and 90% of the time I won’t remember all the changes I made along the way. With Azure it’s pretty much a simple change to 2 settings files (via dropdowns), publishing and then waiting for the application to go live. Using WebDeploy I can also test code changes without the risk of breaking anything, as a simple reboot of the instances will roll the code back to its previous version. It’s about as fool proof as you can make it.

Now if Microsoft brought this kind of ease of development to traditional applications we’d start to see some real changes in the way developers build applications in the enterprise. Since the technology backing the Azure emulator is nothing more than a layer on top of SQL and general file storage, I can’t imagine that wrapping it up into an enterprise level product would be too difficult, and then you’d be able to develop real hybrid applications that were completely agnostic of their underlying platform. I won’t harp on about it again as I’ve done that enough already, but suffice to say I really think it needs to happen.

I’m really looking forward to developing more on the cloud as, with the experience being so seamless, it really reduces the friction I usually get when making something available to the public. I might be apprehensive about releasing my application to the public right now, but it’s no longer a question of whether it will work properly (I know it will, since the emulator is pretty darn close to production), it’s now just a question of how many features I want to put in. I’m not denying that the latter could be a killer in its own right, as it has been in the past, but the fewer things I have to worry about the better, and Windows Azure seems like a pretty good platform for alleviating a lot of my concerns.

Microsoft Should Break The Public Cloud Wall.

Like all industry terms the definition of what constitutes a cloud service has become somewhat loose as every vendor puts their own particular spin on it. Whilst many cloud products share a baseline of particular features (i.e. high automation, abstraction from the underlying hardware, availability as far as your credit card will go), what’s available after that point becomes rather fluid, which leads to PR departments making claims that don’t necessarily line up with reality, or at least with what I believe the terms actually mean. For Microsoft’s cloud offering in Azure this became quite clear during the opening keynotes of TechEd 2012, and the subsequent sessions I attended made it clear that the current industry definitions need some work in order to ensure there’s no confusion around what the capabilities of each of these cloud services actually are.

If this opening paragraph sounds familiar then I’m flattered: you read one of my LifeHacker posts. But there was something I didn’t dive into in that post that I want to explore here.

It’s clear that there are actually three different clouds in Microsoft’s arsenal: the private cloud that’s a combination of System Centre Configuration Manager and Windows Server, what I’m calling the Hosted Private Cloud (referred to as Public by Microsoft) which is basically the same as the previous definition except it’s running on Microsoft’s hardware, and lastly Windows Azure, which is the true public cloud. All of these have their own sets of pros and cons and I still stand by my statement that the dominant cloud structure in the future will be some kind of hybrid of all of them, but right now the reality is that not a single provider manages to bridge all these gaps, and this is where Microsoft could step in.

The future might be looking more and more cloudy by the day, however there’s still a major feature gap between what’s available in Windows Azure and the traditional Microsoft offerings. I can understand that some features might not be entirely feasible at a small scale (indeed many will ask what the point of having something like Azure Table Storage on a single server would be, but hear me out) but Microsoft could make major inroads into Azure adoption by making many of these features installable on Windows Server 2012. They don’t have to come all at once, indeed many of the features in Azure became available in a piecemeal fashion, but there are some key features that I believe could provide tremendous value for the enterprise and ease them into adopting Microsoft’s public cloud offerings.

SQL Azure Federations, for instance, could provide database sharding to standalone MSSQL servers, giving a much easier route to scaling out SQL than the current clustering solution. Sure, there would probably need to be some level of complexity added for it to function in smaller environments, but the principles behind it could easily translate down to the enterprise level. If Microsoft was feeling particularly smart they could even bundle in the option to scale records out onto SQL Azure databases, giving enterprises that coveted cloud burst capability that everyone talks about but no one seems to be able to do.

In fact I believe that pretty much every service provided by Azure, from Table storage all the way down to the CDN interface, could be made available as a feature on Windows Server 2012. They wouldn’t be exact replicas of their cloudified brethren, but you could offer API consistency between private and public clouds. This I feel is the ultimate cloud service as it would allow companies to start out with cheap on premise infrastructure (or, more likely, leverage current investments) and then build out from there. Peaky demands could then be easily scaled out to the public cloud and, if the cost is low enough, the whole service could simply transition there.

These features aren’t going to port over overnight, but if Microsoft truly is serious about bringing cloud capabilities to the masses (and not just hosted virtual machine solutions) then they’ll have to seriously look at providing them. Heck, just taking some of the ideals and integrating them into their enterprise products would be a step in the right direction, one that I feel would win them almost universal praise from their customers.

TechEd 2012 Australia keynote: Synaecide performing with a Kinect controller

TechEd Day 1: Toys, Technology and Technobabble.

Having been given the choice of coming up here late last night or early this morning, I did what any enterprising person would do and elected to spend the extra night up here at the Gold Coast so I could enjoy a leisurely start to my day. It was worth it too, as instead of having to get up at 4:30 in the morning I was able to stroll out of bed at 8am, wander aimlessly around Broadbeach for a while looking for food and then casually make my way over to my hotel for the rest of the week. After wasting a couple of hours on Reddit waiting for the appointed hour to arrive I headed on down to the convention centre and met up with the guys from LifeHacker, Allure Media and the other contest winners. It was great to finally meet everyone and put names to faces (like Terry Lynch and Craig Naumann) and of course I didn’t at all mind that I was then presented with a shiny new ASUS Zenbook and Nokia Lumia 900 to take home. Whilst I’ve given the Zenbook something of a workout already I haven’t had a chance to play with the Lumia, thanks to my SIM being of the large variety and it needing a micro.

Hopefully I’ll get some time spare to sort that out tomorrow.

We then headed off for lunch where I met one of their videographers and talked shop with everyone for a good couple of hours over steak, wine and honeycomb bark. As an informal affair it was great and we were pretty much told that there weren’t any restrictions on what we could talk about, so long as it was at least tangentially related to Windows Server 2012. Thankfully it looks like the focus of this year’s TechEd is going to be Server 2012 anyway, so even if we were going to go off the rails we really wouldn’t have far to go. Still, I was pleased to find out that our choices of sessions provided a good mix so that we were all able to go to the ones we wanted to. I’ve chosen to cover primarily Windows Azure and the cloud integration aspects of Server 2012 as, whilst I’m sure there’s a lot going on below that level, my interest in recent times has been focused on just how Microsoft is going to bring the cloud down to all those loyal system administrators who’ve been with Microsoft for decades.

The keynote was equal parts run-of-the-mill tech announcements coupled with, dare I say it, strange forays into the lands of philosophy and technology futurism. Now I can’t claim complete innocence here as I did make a couple of snarky tweets whilst Jason Silva was up on stage, but in reality, whilst his speeches and videos were thought provoking, I struggled to see how they were relevant to the audience. TechEd, whilst being full of creative and dedicated people, isn’t exactly TED; i.e. it’s not a big ideas kind of deal. It’s a tech show, one where system administrators, architects and developers come together to get a glimpse at the latest from Microsoft. Delving into the philosophy of how technology is changing humanity is great, but there are better venues for presentations like that, say TEDx Canberra which was on just recently.

The technology part of the keynote was interesting, even if it was your usual high level overview that lacked any gritty detail. For me the takeaway from the whole thing was that Microsoft is now heavily dedicated to not only being a cloud provider but becoming the cloud platform that powers enterprises in the future. Windows Server 2012 appears to be a key part of that and, if what they’re alluding to turns out to be true, you’ll soon have a unified development platform that stretches all the way from your own personal cloud back to a fully managed public cloud that Microsoft and its partners provide. If that promise sounds familiar it should, as HP said pretty much the same thing not too long ago, and I’m very keen to see how their offering compares.

There were also performances from various artists, like the one from Synaecide above in which he utilizes a Kinect controller to manipulate the music with his movements. It was certainly impressive, especially in comparison to the interpretive dancer who obviously had zero control over what was happening on screen, and these are the kinds of things I’d like to see more of as they show off the real innovative uses of Microsoft technology, rather than the usual PowerPoint-to-death followed by a highly scripted demo. After this all finished we were allowed to go off and have a look around the showcase, where all the Microsoft partners had set up shop and were giving out the usual swag, which was when I decided to take my leave (after raiding the buffet, of course!).

With all this being said I’m really looking forward to getting stuck into the real meat of TechEd 2012: the new technology. It’s all well and good to sell ideas, visions and concepts, but nothing is more powerful to me than demonstrable technology that I can go home and use right away. Those of you following me on Twitter will know that I’ve already expressed scepticism at some of the claims Microsoft made during the keynote, but don’t let that fool you. Whilst I might be among Microsoft’s critics I’m also one of their long time fans, so you can rest assured that any amazing leaps will be reported and missteps pointed out and ridiculed for your amusement.

Now I’d best be off, I’ve got an early start tomorrow.

The Cloud Wars Are About to Begin.

With virtualization now being as much of a pervasive idea in the datacentre as storage area networks or underfloor cooling, the way has been paved for the cloud to make its way there as well for quite some time now. There are now many commercial off the shelf solutions that allow you to incrementally implement the multiple levels of the cloud (IaaS -> PaaS -> SaaS) without the need for a large operational expenditure on developing the software stack at each level. The differentiation now comes from things like added services, geographical location and pricing, although even that is already turning into a race to the bottom.

The big iron vendors (Dell, HP, IBM) have noticed this and, whilst they could still sustain their current business quite well by providing the required tin to the cloud providers (the compute power is shifted, not necessarily reduced), they’re all starting to look at creating their own cloud solutions so they can continue to grow their business. I covered HP’s cloud solution last week after the HP Cloud Tech Day, but recently there’s been a lot of news coming out regarding the other big players, both from the old big iron world and the more recently established cloud providers.

The first cab off the rank I came across was Dell, who are apparently gearing up to make a cloud play. Now if I’m honest that article, whilst it does contain a whole lot of factual information, felt a little speculative to me, mostly because Dell hasn’t tried to sell me on the cloud idea when I’ve been talking to them recently. Still, after doing a small bit of research I found that not only are Dell planning to build a global network of datacentres (where global usually means everywhere but Australia), they announced plans to build one in Australia just on a year ago. Combining this with their recent acquisition spree that included companies like Wyse, it seems highly likely that this will be the backbone of their cloud offering. What that offering will be is still up for speculation, however it wouldn’t surprise me if it was yet another OpenStack solution.

Mostly because Rackspace, probably the second biggest general cloud provider behind Amazon Web Services, just announced that their cloud will be compatible with the OpenStack API. This comes hot on the heels of another announcement that both IBM and Red Hat would become contributors to the OpenStack initiative, although there’s no word yet on whether they have a view to implementing the technology in the future. Considering that both HP and Dell are already showing their hands with their upcoming cloud strategies, it would seem that becoming OpenStack contributors will be the first step towards seeing some form of IBM cloud. They’d be silly not to, given their share of the current server market.

Taking all of this into consideration it seems we’re approaching a point of convergence in the cloud computing industry. I wrote early last year that one of the biggest drawbacks of the cloud was its proprietary nature, and it seems the big iron providers noticed that this was a concern. The reduction of vendor lock-in lowers the barriers to entry for many customers significantly and provides a whole host of other benefits, like being able to take advantage of disparate cloud providers to provide service redundancy. As I said earlier, the differentiation between providers will then predominantly come from value-add services, much like it did for virtualization in the past.

This is the beginning of the cloud war, where all the big players throw their hats into the ring and duke it out for our business. It’s a great thing for both businesses and consumers as the quality of products will increase rapidly and prices will continue on a downhill trend. It’s quite an exciting time, one akin to the virtualization revolution that started happening almost a decade ago. As always I’ll be following these developments keenly, as the next couple of years will be something of a proving ground for all cloud providers.

HP Cloud Tech Day.

So as you’re probably painfully aware (thanks to my torrent of tweets today) I spent all of today sitting down with a bunch of like minded bloggers for HP’s Cloud Tech Day, which primarily focused on their recent announcement that they’d be getting into the cloud business. They were keen to get our input on what the current situation was in the real world in relation to cloud services adoption and what customers were looking for, with some surprising results. If I’m completely honest it was aimed more at the strategic level rather than the nuts and bolts kind of tech day I’m used to, but I still got some pretty good insights out of it.

For starters HP is taking a rather unusual approach to the cloud. Whilst they will be offering something along the lines of the traditional public cloud like all the other providers, they’re also going to attempt to make inroads into the private cloud market whilst also creating a new kind of cloud offering they’re dubbing the “managed cloud”. The kicker is that should you implement an application on any of those cloud platforms you’ll be able to move it seamlessly between them, effectively granting you the elusive cloud bursting ability that everyone wants but no one really has. The tools across all 3 platforms are the same too, enabling you to have a clear idea of how your application is behaving no matter where it’s hosted.

The Managed Cloud idea is an interesting one. It takes the idea of a private cloud, i.e. one you host yourself, except instead of you hosting it, HP hosts it for you. This takes away the infrastructure management worry that a private cloud still presents whilst allowing you to retain most of a private cloud’s benefits. They mentioned that they already have a customer using this kind of deployment for their email infrastructure, which had the significant challenge of keeping all data on Australian shores whilst the IT department still wanted some level of control over it.

How they’re going to go about this is still something of a mystery, but there are some little tidbits that give us insight into their larger strategy. HP isn’t going to offer a new virtualization platform to underpin this technology; it will in fact utilize whatever current virtual infrastructure you have. What HP’s solution will do is abstract that platform away so you’re given a consistent environment to implement against, which is what enables HP Cloud enabled apps to work across the varying cloud platforms.

Keen readers will know that this is the kind of cloud platform I’ve been predicting (and pining for) for some time. Whilst I’m still really keen to get under the hood of this solution to see what makes it tick and how applicable it will be, I have to say that HP has done their research before jumping into this. Many see cloud computing as some kind of panacea for all their IT ills when in reality cloud computing is just another solution for a specific set of IT problems. Right now that’s centred around commodity services like email, documents, ERP and CRM, and of course that umbrella will continue to expand into the future, but there will always be those niche apps which won’t fit well into the cloud paradigm. Well, not at a price point customers would be comfortable with, anyway.

What really interested me was the parallels that can easily be drawn between the virtualization revolution and the burgeoning cloud industry. Back in the day there was really only one player (VMware then, Amazon now) but as time went on many other players came online. Initially those competitors had to play feature catch up with the number one. The biggest player noticed they were catching up quickly (through a combination of agility, business savvy and usually snapping up a couple of disgruntled employees) and reacted by providing value add services above the base functionality level. The big players in virtualization (Microsoft, VMware and Citrix) are now just about at feature parity for base hypervisor capabilities, but VMware has stayed ahead by creating a multitude of added services. Their lead is starting to shrink though, which I’m hoping will push a fresh wave of innovation.

Applying this to the cloud world, it’s clear that HP has seen that there’s no point in competing at the base level with other cloud providers; it’s a fool’s gambit. Amazon has the cheap bulk computing services thing nailed and if all you’re doing is offering the same services then the only differentiator you’ll have is price. That’s not exactly a weapon against Amazon, who could easily absorb losses for a quarter whilst it watches you squirm as your margins plunge into the red. No, instead HP is positioning themselves as a value add cloud provider, with an offering that works at multiple levels. The fact that you can move seamlessly between them is probably all the motivation most companies will need to give them a shot.

Of course I’m still a bit trepidatious about the idea because I haven’t seen much past the marketing blurb. As with all technology products there will be limitations, and until I can get my hands on the software (hint hint) I can’t get too excited about it. It’s great to see HP doing so much research and engaging with the public in this way, but the final proof will be in the pudding, something I’m dying to see.

Transitioning From an IT Admin to a Cloud Admin.

I’ve gone on record saying that whilst the cloud won’t kill the IT admin, there is a very real (and highly likely) possibility that the skills required to be a general IT administrator will change significantly over the next decade. Realistically this is no different from any other 10 year span in technology, as you’d struggle to find many skills that are as relevant today as they were 10 years ago. Still, the cloud does represent some fairly unique paradigm shifts and challenges for regular IT admins, some of which will require significant investment in re-skilling in order to stay relevant in a cloud augmented future.

The most important skill IT admins will need to develop is programming. Now most IT admins have some level of experience with this already, usually via automation scripts written in VBScript, PowerShell or even (shudder) batch. Whilst these provide some of the necessary foundations for working in a cloud future, they’re not the greatest for developing (or customizing) production level programs that will be used on a daily basis. The best option then is to learn a formal programming language, preferably one that has reference libraries for all the major cloud platforms. My personal bias is towards C# (and yours should be too if your platform is Microsoft) as it’s a great language and you get the world’s best development environment to work in: Visual Studio.

IT admins should also look to gaining a deep understanding of virtualization concepts, principles and implementations, as these are what underpin nearly all cloud services today. Failing to understand these concepts means you won’t be able to take advantage of many of the benefits a cloud platform can provide, as cloud applications function very differently from the traditional 3 tier application model.

The best way to explain this is to use Microsoft’s Azure platform as an example. Whilst you can still get the 3 tier paradigm working in the Azure environment (using a Web Role, Worker Role and SQL Azure), doing so negates the benefits of using things like Azure Table Storage, Blob Storage and Azure Cache. The difference comes down to having to manually scale an application like you would normally, instead of enabling the application to scale itself in response to demand. In essence there’s another level of autonomy you can take advantage of, one that makes capacity planning a thing of the past¹.
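
As a hand-wavy illustration of what “scaling itself in response to demand” means in practice, here’s a sketch (not from any particular product; the queue name and threshold are made up) of a role reading its own demand signal from a queue. The actual scale out would then be made through the Service Management API or by an operator.

using System;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

public static class ScalingSignal
{
    // Roughly how many queued work items one instance can chew through; purely illustrative.
    private const int MessagesPerInstance = 500;

    public static int DesiredInstanceCount()
    {
        var account = CloudStorageAccount.DevelopmentStorageAccount;
        var queue = account.CreateCloudQueueClient().GetQueueReference("work-items");
        queue.CreateIfNotExist();

        // Queue depth is the demand signal the application reads for itself.
        int backlog = queue.RetrieveApproximateMessageCount();

        // The actual change in instance count happens via the Service Management API
        // (or an operator); this sketch only works out the target.
        return Math.Max(1, (int)Math.Ceiling(backlog / (double)MessagesPerInstance));
    }
}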

It’s also worth your time developing a lot of product knowledge in the area of cloud services. As I mentioned in my previous blog, cloud services are extremely good at some things and wildly inappropriate for others. However in my experience most cloud initiatives attempt to be too ambitious, looking to migrate as many services into the cloud as possible whether there are benefits to be had or not. It’s your job then to advise management on where cloud services will be most appropriate, and you can’t do this without a deep knowledge of the products on offer. A good rule of thumb is that cloud services are great at replacing commodity services (email, ERP, CRM etc.) but aren’t so great at replacing custom systems, or commodity systems that have had heavy modifications made to them. Still, it’s worth researching the options out there to ensure you know how a cloud provider’s capabilities match up with your requirements, hopefully prior to attempting to implement them.

This is by no means an exhaustive list and realistically your strategy will have to be custom made to your company and your potential career path. However I do believe that investing in the skills I mentioned above will give you a good footing for the transition from regular IT admin to cloud admin. For me it’s exciting: whilst I don’t believe the cloud will overtake anything and everything in the corporate IT environment, it will provide us with some amazing new capabilities.

¹Well, technically it just moves the problem from you to the cloud service provider. There’s still some capacity planning to be done on your end, although it becomes financial rather than computational, so it’s usually left to the finance department of your organisation. They’re traditionally much better at financial planning than IT admins are at capacity planning.

Many thanks to Derek Singleton of Software Advice for inspiring this post with his blog on Cloud Career Plans.

Will The Cloud Kill The IT Admin?

IT is one of the few services that all companies require in order to compete in today’s markets. IT support, then, is one of those rare industries where jobs are always to be had, even for those working in entry level positions. Of course this assumes you put in the required effort to stay current, as letting your skills lapse for 2 or more years will likely leave you a generation of technology behind, making employment difficult. This is due to the IT industry constantly evolving and reinventing itself and, much like in other industries, certain jobs can be made completely redundant by technological advancements.

For the past couple of decades though the types of jobs you expect to see in IT support have remained roughly the same, save for the specializations brought on by new technology. As more and more enterprises came online and technology continued to develop, a multitude of specializations became available, enabling the then generic “IT guys” to become highly skilled workers in their targeted niche. I should know: just on a decade ago I was one of those generic IT support guys and today I’m considered a specialist when it comes to hardware and virtualization. Back when I started my career the latter of those two skills wasn’t even in the vernacular of the IT community, let alone a viable career path.

Like any skilled position though, specialists aren’t exactly cheap, especially for small to medium enterprises (SMEs). This leads to an entire second industry of work-for-hire specialists (usually under the term “consultants”) and companies looking to take the pain out of utilizing the technology without having to pay for the expertise to come in house. This isn’t really a surprise (any skilled industry will develop these secondary markets) but with IT there’s a lot more opportunity to automate and leverage economies of scale, more so than in any other industry.

This is where Cloud Computing comes in.

The central idea behind cloud computing is that an application can be developed to run on a platform which dynamically delivers resources to it as required. The idea is quite simple but the execution is extraordinarily complicated, requiring vast levels of automation and streamlining of processes. It’s just an engineering problem however, one that’s been surmounted by several companies and used to great effect by many others who have little wish to maintain their own infrastructure. In essence this is just outsourcing taken to the next level, but following this trend to its logical conclusion leads to some interesting (and, if you’re an IT support worker, troubling) predictions.

For SMEs the cost of running their own local infrastructure, as well as the support staff that goes along with it, can be one of their largest cost centres. Cloud computing and SaaS offer SMEs the opportunity to eliminate much of that cost whilst keeping the same level of functionality, giving them more capital to either reinvest in the business or bolster their profit margins. You would think that this would just be a relocation of jobs from one place to another, but cloud services utilize far fewer staff thanks to the economies of scale they employ, leaving fewer jobs available for those who had skills in those areas.

In essence cloud computing eliminates the need for the bulk of skilled jobs in the IT industry. There will still be a need for most of the entry level jobs that cater to regular desktop users, but the back end infrastructure could easily be handled by another company. There’s nothing fundamentally wrong with this, pushing back against such innovation never succeeds, but it does call into question the jobs these IT admins currently hold and where their future lies.

Outside of high tech and recently established businesses the adoption rate of cloud services hasn’t been that high. Whilst many of the fundamentals of the cloud paradigm (virtualization, on-demand resourcing, infrastructure agnostic frameworks) have found their way into the datacentre, the next logical step, migrating those same services into the cloud, hasn’t occurred. Primarily I believe this is due to a lack of trust in and control over the services, as well as companies not wanting to write off the large investments they have in infrastructure. This will change over time of course, especially as that infrastructure begins to age.

For what it’s worth I still believe the ultimate end goal will be some kind of hybrid solution, especially for governments and the like. Cloud providers, whilst being very good at what they do, simply can’t satisfy the needs of all customers. It is then highly likely that many companies will outsource routine things to the cloud (such as email, word processing, etc.) but still rely on in house expertise for the custom applications that aren’t, and probably never will be, available in the cloud. Cloud computing then will probably see a shift in some areas of specialization, but for the most part I believe us IT support guys won’t have any trouble finding work.

We’re still in the very early days of cloud computing and its effects on the industry are still hard to judge. There’s no doubt that cloud computing has the potential to fundamentally change the way the world does IT services, and whatever happens those of us in IT support will have to change to accommodate it. Whether that comes in the form of reskilling, training or looking for a job in a different industry is yet to be determined, but suffice to say the next decade will see some radical changes in the way businesses approach their IT infrastructure.

The Woes of Azure Table Storage.

I’m a stickler for avoiding rework where I can, opting instead to make the most of what I already have before I set out to rework something. You’d think that’d lead me to create overly complicated systems full of nuances and edge cases, but since I know I hate reworking stuff I’ll go out of my way to get things right the first time, even if it costs me a bit more initially. For the most part this works well, and even when it comes time to dump something and start over again much of my previous work will make it into the reworked product, albeit in a different form.

I hit such a dilemma last weekend when I was working on my latest project. As long time readers will know I’m a pretty big fan of Microsoft’s Azure services and I decided to use them as the platform for my next endeavour. For the most part it’s been quite good: getting started with the development environment was painless and once I got familiar with the features and limitations of the Azure platform I was able to create the basic application in almost no time at all. Everything was going great until I started to hit some of the fundamental limitations of one of Azure’s services, namely Table Storage.

For the uninitiated, Azure Table Storage is like a database, but not in the traditional sense. It’s one of those newfangled NoSQL type databases, the essential difference being that this kind of database doesn’t have a fixed schema or layout for how the data is stored. Considering that a fixed layout is where a database draws many of its advantages from, you’d wonder what doing away with it would do for you. What it does is allow for a much higher level of scalability than a traditional database, and thus NoSQL type databases power many large apps, including things like Facebook and Twitter. Figuring that the app might be big one day (and given Microsoft’s rather ludicrous pricing for SQL Azure) I settled on using it as my main data store.

However whilst there are a lot of good things about Azure Table Storage there’s one downside that really hurts its usability: its limited query engine. You see, whilst you can query it with good old fashioned LINQ, the query operators it supports are rather limited. In fact they’re limited to simple parameter matches and boolean comparisons which, whilst workable for a lot of use cases, doesn’t cater well to user constructed queries. Indeed in my application, where someone could search for a single name but the object could contain up to 8 of them (some of them set, some of them not), I had to construct the query on the fly for the user. No problem I hear you say, LINQKit’s PredicateBuilder can build that for you! Well you’d be wrong, unfortunately, since the resulting LINQ statement confuses the poor Azure Storage Client and the query errors out.
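
To show the shape of the problem (entity and property names here are purely illustrative), a hand written chain of equality comparisons is something the Table service’s LINQ provider can translate, whereas the equivalent filter composed at runtime with PredicateBuilder produces an expression tree it can’t, which is the failure described above.

using System;
using System.Linq;
using LinqKit;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

// Illustrative entity with several optional name fields.
public class PlayerEntity : TableServiceEntity
{
    public string Name1 { get; set; }
    public string Name2 { get; set; }
    public string Name3 { get; set; }
}

public static class QueryLimits
{
    public static void Run(string searchName)
    {
        var account = CloudStorageAccount.DevelopmentStorageAccount;
        var context = account.CreateCloudTableClient().GetDataServiceContext();

        // Simple comparisons joined with || translate fine, but you end up hand writing
        // one of these for every combination of fields the user might have filled in.
        var handWritten = (from p in context.CreateQuery<PlayerEntity>("Players")
                           where p.Name1 == searchName
                              || p.Name2 == searchName
                              || p.Name3 == searchName
                           select p)
                          .AsTableServiceQuery()
                          .ToList();

        // Composing the same filter at runtime with PredicateBuilder yields an expression
        // tree (with invoked sub-expressions) that the storage client can't translate,
        // and the query errors out when it executes.
        var predicate = PredicateBuilder.False<PlayerEntity>();
        predicate = predicate.Or(p => p.Name1 == searchName);
        predicate = predicate.Or(p => p.Name2 == searchName);

        var composed = context.CreateQuery<PlayerEntity>("Players")
                              .AsExpandable()
                              .Where(predicate)
                              .ToList();
    }
}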

So at this point I was faced with a difficult decision: manually crank out all the queries (which would end up being huge and ridiculously unmaintainable) whilst keeping my Table Storage back end, or bite the bullet and move everything into SQL Azure. Whilst I knew writing out the queries would be a one time (if very time consuming) task, I couldn’t shake the feeling that doing so would be the wrong thing in the long run, leaving me with an unmaintainable system that I’d curse constantly. I haven’t made the changes yet, that’s this weekend’s goal, but I know it’s not going to be as trouble free as I hope it will be.

Sometimes you just have to swallow that bitter pill, and it’s usually better to do it sooner rather than later. Azure Table Storage was perfect for me in the beginning, but as my requirements evolved the reality of the situation became apparent and I’m stuck in the unfortunate position of having to do the rework I tried so hard to avoid. My project and I will be better for it, but it’s always tough when you’ve tried everything you could to avoid it and came up empty.

The Hybrid Cloud Paradigm Clash.

Maybe it’s my corporate IT roots but I’ve always thought that the best cloud strategy would be a combination of in house resources with the ability to offload elsewhere when extra resources were required. Such a deployment would mean that organisations could design their systems around base loads and have the peaks handled by public clouds, saving them quite a bit of cash whilst still delivering services at an acceptable level. It would also gel well with management types, as not many are completely comfortable being totally reliant on a single provider for any particular service, which in light of recent cloud outages is quite prudent. For someone like myself, I was more interested in setting up a few Azure instances so I could test my code against the real thing rather than the emulator that comes with Visual Studio, as I’ve always found there are certain gotchas that don’t show up until you’re running on a real instance.

Now the major cloud providers, Rackspace, AWS, et al., haven’t really expressed much interest in supporting configurations like this, which makes business sense for them since doing so would more than likely eat into their sales targets. They could license the technology of course, but that brings with it a whole bunch of other problems, like defining supported configurations and releasing some measure of control over the platform in order to enable end users to deploy their own nodes. However I had long thought that Microsoft, who has a long history of letting users install stuff on their own hardware, would eventually allow Azure to run in some scaled down fashion to facilitate this hybrid cloud idea.

Indeed many developments in their Azure product seemed to support this, the strongest of which was the VM role, which allowed you to build your own virtual machine and then run it on their cloud. Microsoft have also offered their Azure Appliance product for a while, giving large scale companies and providers the opportunity to run Azure on their own premises. Taking all this into consideration you’d think Microsoft wasn’t too far away from offering a solution for medium sized organisations and developers who wanted to move to the Azure platform but also wanted to maintain some form of control over their infrastructure.

After talking with a TechEd bound mate of mine however, it seems that idea is off the table.

VMware has had their hybrid cloud product (vCloud) available for quite some time and whilst it satisfies most of the things I’ve been talking about so far it doesn’t have the sexy cloud features like an in-built scalable NoSQL database or binary object storage. Since Microsoft had their Azure product I had assumed they weren’t interested in competing with VMware on the same level, but after seeing one of the TechEd sessions and subsequently browsing their cloud site it looks like they’re launching SCVMM 2012 as a direct competitor to vCloud. This means that Microsoft is taking the same route by letting you build your own private cloud, which is essentially just a large pool of shared resources, foregoing any implementation of the features that make Azure so gosh darn sexy.

Figuring that out left me a little disappointed, but I can understand why they’re doing it.

Azure, as great as I think it is, probably doesn’t make sense in a deployment scenario of anything less than a couple of hundred nodes. Much of Azure’s power, like any cloud provider’s, comes from its large number of distributed nodes which provide redundancy, flexibility and high performance. The Hyper-V based private cloud then is more tailored to the lower end, where enterprises likely want more control than Azure would provide, not to mention that experience in deploying Azure instances is limited to Microsoft employees and precious few others from the likes of Dell, Fujitsu and HP. Hyper-V then is the better solution for those looking to deploy a private cloud, and should they want to burst out to a public cloud they’ll just have to code their application to be able to do that. Such a feature isn’t impossible, but it is an additional cost that will need to be considered.