Posts Tagged ‘virtualization’

VMware Targets OpenStack with vSphere 6.

Despite the massive inroads that other virtualization providers have made into the market, VMware still stands out as the king of the enterprise space. Part of this is due to the maturity of their toolset, which can accommodate a wide variety of guests and configurations, but they've also got the largest catalogue of value-adds, which helps vastly in driving adoption of their hypervisor. Still, the asking price for any of their products has become something of a sore point for many, and their proprietary platform has caused consternation for those looking to leverage public cloud services. With the latest release of their vSphere product VMware is looking to remedy at least the latter issue, embracing OpenStack compatibility in one of their distributions.

The list of improvements coming with this new release is a long one (and I won't bother repeating them all here) but suffice to say that most of them were expected and in line with what we've gotten previously. Configuration maximums have gone up for pretty much every aspect, feature limitations have been extended and there's a handful of new features that will enable vSphere based clusters to do things that were previously impossible. In my mind the key improvements VMware have made in this release come down to Virtual SAN 6, Long Distance vMotion and, of course, their support for OpenStack via their VMware Integrated OpenStack release.

Virtual SAN always felt like a bit of an also-ran when it first came out due to the rather stringent requirements it had around its deployment. I remember investigating it as part of a deployment I was doing at the time, only to be horrified at the fact that I'd have to deploy a vSphere instance at every site I wanted to use it at. The subsequent releases have shifted the product's focus significantly and it now presents a viable option for those looking to bring software defined datacenter principles to their environment. The improvements that come in 6 are most certainly cloud focused, with things like Fault Domains and All Flash configurations. I'll be very interested to see how the enterprise reacts to this offering, especially for greenfield deployments.

Long Distance vMotion might sound like a minor feature but, as someone who's worked in numerous large, disparate organisations, the flexibility it will bring is phenomenal. Right now the biggest issue most organisations face when maintaining two sites (typically for DR purposes) is getting workloads between those sites, often requiring a lengthy outage process to do so. With Long Distance vMotion, making both sites active and simply vMotioning workloads between them is a vastly superior solution and provides many of the benefits of SRM without the required investment and configuration.

The coup here though is, of course, the OpenStack compatibility through VMware's integrated distribution. OpenStack is notorious for being a right pain in the ass to get running properly, even if you already have staff who've had some experience with the product set in the past. VMware's solution to this is to provide a pre-canned build which exposes all the resources in a VMware cloud through the OpenStack APIs for developers to utilize. Considering that OpenStack's lack of good management tools has been, in my mind, one of the biggest challenges to its adoption, this solution from VMware could be the kick in the pants it needs to see some healthy adoption rates.

It's good to see VMware jumping on the hybrid cloud idea as I've long been of the mind that it will be the solution going forward. Cloud infrastructure is great and all but there are often requirements it simply can't meet due to its commodity nature. Going hybrid, with OpenStack as the intermediary layer, will allow enterprises to take advantage of these APIs whilst still leveraging their investment in core infrastructure, utilizing the cloud on an as-needed basis. Of course that's the nirvana state, but it seems to get closer to realisation with every new release, so here's hoping VMware will be the catalyst to finally see it succeed.

VMware Has Always Been Playing The Long Game.

VMware has always been the market leader in terms of functionality in the virtualization space. Initially this was because they were the only real player in the market, with every other alternative being either far too specific for widespread adoption or, dare I say it, too hard for your run-of-the-mill system administrator to understand. That initial momentum allowed them to stay ahead of the curve for quite a long time, enabling them to justify their licensing fees based on the functionality they could deliver. In recent years however the fundamental features required of a base hypervisor have, in essence, reached parity for all the major players, seemingly eliminating the first-to-market advantage that VMware had been exploiting for the better part of a decade.

However it's not like VMware wasn't aware of this. Back when I first started doing large virtualization projects the features of the base hypervisor were very rarely the first things you'd discuss with your local VMware representative. Indeed they were much more focused on the layers they could provide on top of the base hypervisor. Whilst Microsoft and Citrix struggled for a long time to provide even the most basic of services like vMotion/Live Migration, VMware knew that it was only a matter of time before the competition's base products reached feature parity with their own. As such VMware now has an extensive catalogue of value add products for environments based on their hypervisor, and that's where the true value is.

Which is why I get surprised when I see articles like this one from ArsTechnica. There's no doubting that VMware is undergoing a small transformation at the moment, having backpedalled on the controversial vRAM issue and even taken the unprecedented step of joining OpenStack. However their lead in terms of functionality and value add services for their hypervisor really can't be matched by any of the current competitors, and this is why they can truthfully say that they still have the upper hand. Just take a look at the features being offered in Hyper-V 3.0 and then look up how long VMware has had each of them. For the vast majority the feature has been available for years through VMware and is only just becoming available for Hyper-V.

Having a feature first might not sound like a big advantage when most people only want your hypervisor, but it can be a very critical factor, especially for risk averse organisations. Being able to demonstrate that a feature has been developed, released and used in the field gives those kinds of customers the confidence they need in order to use that feature. Most organisations won't trust a new version of Windows until the first service pack is out and it's been my experience that the same thinking applies to hypervisors as well. Microsoft might be nipping at VMware's heels but they've still got a lot of ground to make up before they're in contention for the virtualization crown.

Indeed I believe their current direction is indicative of how they see the virtualization market transforming and how they fit into it. Undeniably the shift is now away from pure virtualization and more into cloud services, and with so many big players backing OpenStack it would be foolish of them to ignore it lest they be left behind or seen as a walled garden solution in an increasingly open world. They certainly don't have the market dominance they used to, however the market has grown significantly in the time they've been active and thus complete domination of it is no longer necessary for them to still be highly profitable. VMware will still have to be careful though, as Microsoft could very well eat their lunch should they try to rest on their laurels.

Microsoft Takes (Mis)Steps Towards The Hybrid Cloud.

I've long been of the mind that whilst we're seeing a lot of new businesses able to fully cloudify their operations, mostly because they have the luxury of designing their processes around these cloud services, established organisations will more than likely never achieve full cloud integration. Whether this is because of data sovereignty issues, lack of trust in the services themselves or simply fear of changing over doesn't really matter, as it's up to the cloud providers to offer solutions that will ease their customers' transition onto the cloud platform. From my perspective it seems clear that the best way to approach this is by offering hybrid cloud solutions, ones that can leverage their current investment in infrastructure whilst giving them the flexibility of cloud services. Up until recently there weren't many companies looking at this approach but that has changed significantly in the past few months.

However there's been one major player in the cloud game that's been strangely absent in the hybrid cloud space. I am, of course, referring to Microsoft: whilst they have extensive public cloud offerings in the form of their hosted services as well as Azure, they haven't really been able to offer anything past their usual Hyper-V plus System Centre suite of products. Curiously though Microsoft, and many others it seems, have been running with the definition of a private cloud being just that: a highly virtualized environment with dynamic resourcing. I'll be honest, I don't share that definition at all as realistically that's just Infrastructure as a Service, a critical part of any cloud service but not a cloud service in its own right.

They are however attempting to make inroads into the private cloud area with their latest announcement, the Service Management Portal. When I first read about this it was touted as Microsoft opening the doors to service providers to host their own little Azure cloud, but it's in fact nothing like that at all. Indeed it just seems to be an extension of their current Software as a Service offerings, which is really nothing that couldn't be achieved before with the current tools available. System Centre Configuration Manager 2012 appears to make this process a heck of a lot easier, mind you, but with it only being 3 months after its RTM release I can't say that it'd be in production use at scale anywhere bar Microsoft at this current point in time.

It's quite possible that they're trying a different approach to this idea after their ill-fated attempt at getting Azure clouds up elsewhere via the Azure Appliance initiative. The problem with that solution was the scale required, as the only provider I know of that actually offers the Azure services is Fujitsu and, try as you might, you won't be able to sign up for that service without engaging directly with them. That's incredibly counter-intuitive to the way the cloud should work and so it isn't surprising that Microsoft has struggled to make any sort of inroads using that strategy.

Microsoft really has a big opportunity here to use their captive market of organisations that are heavily invested in their product as leverage in a private/hybrid cloud strategy. First they’d need to make the Azure platform available as a Server Role on Windows Server 2012. This would then allow the servers to become part of the private computing cloud which could have applications deployed on them. Microsoft could then make their core applications (Exchange, SharePoint, etc.) available as Azure applications, nullifying the need for administrators to do rigorous architecture work in order to deploy the applications. The private cloud can then be leveraged by the developers in order to build the required applications which could, if required, burst out into the public cloud for additional resources. If Microsoft is serious about bringing the cloud to their large customers they’ll have to outgrow the silly notion that SCCM + Hyper-V merits the cloud tag as realistically it’s anything but.
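To illustrate the kind of private/hybrid behaviour described above, here's a purely hypothetical sketch of the placement decision a burst-capable private cloud would make: run workloads on the in-house capacity while it lasts and spill over to the public cloud when it runs out. None of the types or names here correspond to any real Azure or System Centre API; they're invented for illustration only.

```csharp
using System;

// Invented types for illustration; no real Azure/System Centre API looks like this.
enum Placement { PrivateCloud, PublicCloud }

class CapacityPool
{
    public int TotalCores { get; set; }
    public int UsedCores { get; set; }
    public int FreeCores { get { return TotalCores - UsedCores; } }
}

class BurstScheduler
{
    private readonly CapacityPool _privatePool;

    public BurstScheduler(CapacityPool privatePool) { _privatePool = privatePool; }

    // Place a workload of a given size: private capacity first, public cloud as overflow.
    public Placement Place(int coresRequired)
    {
        if (_privatePool.FreeCores >= coresRequired)
        {
            _privatePool.UsedCores += coresRequired;
            return Placement.PrivateCloud;
        }
        return Placement.PublicCloud; // burst out for additional resources
    }
}

class Demo
{
    static void Main()
    {
        var scheduler = new BurstScheduler(new CapacityPool { TotalCores = 64 });
        Console.WriteLine(scheduler.Place(48));  // PrivateCloud
        Console.WriteLine(scheduler.Place(32));  // PublicCloud (only 16 cores left in-house)
    }
}
```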

I understand that no one is really doing this sort of thing currently (HP's cloud gets close, but I've yet to hear about anyone who wasn't a pilot customer seriously look at it) but Microsoft is the kind of company that has the right combination of established infrastructure in organisations, cloud services and a technically savvy consumer base to make such a solution viable. Until they offer some deployable form of Azure to their end users, any product they offer as a private cloud solution will be that only in name. Making Azure deployable though could be a huge boon to their business and could very well reform the way they do computing.

Transitioning From an IT Admin to a Cloud Admin.

I've gone on record saying that whilst the cloud won't kill the IT admin there is a very real (and highly likely) possibility that the skills required to be a general IT administrator will change significantly over the next decade. Realistically this is no different from any other 10 year span in technology, as you'd struggle to find many skills that are as relevant today as they were 10 years ago. Still the cloud does represent some fairly unique paradigm shifts and challenges for regular IT admins, some of which will require significant investment in re-skilling in order to stay relevant in a cloud augmented future.

The most important skill IT admins will need to develop is programming. Now most IT admins have some level of experience with this already, usually with automation scripts written in VBScript, PowerShell or even (shudder) batch. Whilst these provide some of the necessary foundations for working in a cloud future, they're not the greatest for developing (or customizing) production level programs that will be used on a daily basis. The best option then is to learn some kind of formal programming language, preferably one that has reference libraries for all cloud platforms. My personal bias would be towards C# (and should be yours if your platform is Microsoft) as it's a great language and you get the world's best development environment to work in: Visual Studio.
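As a taste of what that shift looks like, here's a minimal sketch of a familiar admin task, checking and restarting a Windows service on a couple of machines, done in C# rather than a batch or VBScript one-liner. The server names are placeholders, and it assumes a reference to System.ServiceProcess and admin rights on the targets.

```csharp
using System;
using System.ServiceProcess;   // add a reference to System.ServiceProcess.dll

class ServiceCheck
{
    static void Main()
    {
        // Placeholder host names; in practice these would come from a file or an AD query.
        string[] servers = { "app01", "app02" };

        foreach (var server in servers)
        {
            using (var spooler = new ServiceController("Spooler", server))
            {
                Console.WriteLine("{0}: Spooler is {1}", server, spooler.Status);

                if (spooler.Status == ServiceControllerStatus.Stopped)
                {
                    spooler.Start();   // asynchronous, so wait for it to come up
                    spooler.WaitForStatus(ServiceControllerStatus.Running, TimeSpan.FromSeconds(30));
                }
            }
        }
    }
}
```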

IT admins should also look to gaining a deep understanding of virtualization concepts, principles and implementations, as these are what underpin nearly all cloud services today. Failing to understand these concepts means that you won't be able to take advantage of many of the benefits that a cloud platform can provide, as cloud platforms function very differently to the traditional 3 tier application model.

The best way to explain this is to use Microsoft's Azure platform as an example. Whilst you can still get the 3 tier paradigm working in the Azure environment (using a Web Role, Worker Role and SQL Azure), doing so negates the benefits of using things like Azure Table Storage, Blob Storage and Azure Cache. The difference comes down to having to manually scale an application like you would normally, instead of enabling the application to scale itself in response to demand. In essence there's another level of autonomy you can take advantage of, one that makes capacity planning a thing of the past¹.
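To make that concrete, here's a minimal sketch of what storing data in Azure Table Storage looks like rather than standing up a SQL tier, assuming the classic WindowsAzure.Storage client library; the entity type, table name and use of the storage emulator connection string are illustrative choices only.

```csharp
using Microsoft.WindowsAzure.Storage;        // classic Azure storage client library
using Microsoft.WindowsAzure.Storage.Table;

// PartitionKey/RowKey take the place of the primary key you'd design into a SQL schema;
// the partition key is also what the service uses to spread load as the app scales.
public class SessionEntry : TableEntity
{
    public SessionEntry() { }
    public SessionEntry(string userId, string sessionId)
    {
        PartitionKey = userId;
        RowKey = sessionId;
    }
    public string Payload { get; set; }
}

class Demo
{
    static void Main()
    {
        // In a real role this connection string would come from the service configuration.
        var account = CloudStorageAccount.Parse("UseDevelopmentStorage=true");
        var table = account.CreateCloudTableClient().GetTableReference("sessions");
        table.CreateIfNotExists();

        table.Execute(TableOperation.InsertOrReplace(
            new SessionEntry("user-42", "session-1") { Payload = "cart=3 items" }));
    }
}
```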

It’s also worth your time to develop a lot of product knowledge in the area of cloud services. As I mentioned in my previous blog cloud services are extremely good at some things and wildly inappropriate for others. However in my experience most cloud initiatives attempt to be too ambitious, looking to migrate as many services into the cloud as possible whether there are benefits to be had or not. It’s your job then to advise management as to where cloud services will be most appropriate and you can’t do this without a deep knowledge of the products on offer. A good rule of thumb is that cloud services are great at replacing commodity services (email, ERP, CRM etc.) but aren’t so great at replacing custom systems or commodity systems that have had heavy modifications to them. Still it’s worth researching the options out there to ensure you know how the cloud provider’s capabilities match up with your requirements, hopefully prior to attempting to implement them.

This is by no means an exhaustive list and realistically your strategy will have to be custom made for your company and your potential career path. However I do believe that investing in the skills I mentioned above will give you a good footing for transitioning from a regular IT admin to a cloud admin. For me it's exciting: whilst I don't believe the cloud will overtake anything and everything in the corporate IT environment, it will provide us with some amazing new capabilities.

¹Well technically it just moves the problem from you to the cloud service provider. There's still some capacity planning to be done on your end, although it comes down to financial rather than computational planning, so that's usually left to the finance department of your organisation. They're traditionally much better at financial planning than IT admins are at capacity planning.

Many thanks to Derek Singleton of Software Advice for inspiring this post with his blog on Cloud Career Plans.

Why Macs and Enterprise Computing Don’t Mix.

I’m a big fan of technology that makes users happy. As an administrator anything that keeps users satisfied and working productively means more time for me to make the environment even better for them. It’s a great positive feedback loop that builds on itself continually, leading to an environment that’s stable, cutting edge and just plain fun to use and administer. Of course the picture I’ve just painted is something of an IT administrator nirvana, a great dream that is rarely achieved even by those who have unlimited freedom with the budgets to match. That doesn’t mean we shouldn’t try to achieve it however and I’ll be damned if I haven’t tried at every place I’ve ever worked at.

The one thing that always comes up is "Why don't we use Macs in the office? They're so easy to use!". Indeed my two month long soiree into the world of OSX and all things Mac showed that it was indeed an easy operating system to pick up, and I could easily see why so many people use it as their home operating system. Hell, at my current work place I can count several long time IT geeks who've switched their entire household over to solely Apple gear because it just works and, as anyone who works in IT will tell you, the last thing you want to be doing at home is fixing up PCs.

You'd then think that Macs would be quite prevalent in the modern workspace, what with their ease of use and popularity amongst the unwashed masses of users. Whilst their usage in the enterprise is growing considerably they're still hovering just under 3% market share, or about the same amount of market share that Windows Phone 7 has in the smart phone space. That seems pretty low but it's in line with worldwide PC figures, with Apple being somewhere in the realm of 5% or so. Still there's a discrepancy there, so the question remains as to why Macs aren't seen more often in the work place.

The answer is simple: Apple just doesn't care about the enterprise space.

I had my first experience with Apple’s enterprise offerings very early on in my career, way back when I used to work for the National Archives of Australia. As part of the Digital Preservation Project we had a small data centre that housed 2 similar yet completely different systems. They were designed in such a way that should a catastrophic virus wipe out the entire data store on one the replica on the other should be unaffected since it was built from completely different software and hardware. One of these systems utilized a few shelves of Apple’s Xserve RAID Array storage. In essence they were just a big lump of direct attached storage and for that purpose they worked quite well. That was until we tried to do anything with it.

Initially I just wanted to provision some of the storage that wasn’t being used. Whilst I was able to do some of the required actions through the web UI the unfortunate problem was that the advanced features required installing the Xserve tools on a Mac computer. Said computer also had to have a fibre channel card installed, something of a rarity to find in a desktop PC. It didn’t stop there either, we also tried to get Xsan installed (so it would be, you know, an actual SAN) only to find out that we’d need to buy yet more Apple hardware in order to be able to use it. I left long before I got too far down that rabbit hole and haven’t really touched Apple enterprise gear since.

You could write that off as a bad experience but Apple has continued to show that the enterprise market is simply not their concern. No less than 2 years after I last touched an Xserve RAID array, Apple up and cancelled production of them, instead offering up a rebadged solution from Promise. 2 years after that Apple then discontinued production of its Xserve servers and lined up their Mac Pros as a replacement. As any administrator will tell you the replacements are anything but, and since most of their enterprise software hasn't received a proper update in years (Xsan's last major release was over 3 years ago) no one can say that Apple has the enterprise in mind.

It's not just their enterprise level gear that's failing in corporate environments. Whilst OSX is easy to use, it's an absolute nightmare to administer on anything larger than a dozen or so PCs as most of the management tools available simply don't support it. Whilst Macs do integrate with Active Directory, there are a couple of limitations that don't exist for Windows PCs on the same infrastructure. There's also the fact that OSX can't be virtualized unless it runs on Apple hardware, which kills it off as a virtualization candidate. You might think that's a small nuisance but it means that you can't do a virtual desktop solution using OSX (since you can't buy the hardware at scale to make it worthwhile) and you can't utilize any of your current investment in virtual infrastructure to run additional OSX servers.

If you still have any doubts that Apple is primarily a hardware company then I’m not sure what planet you’re on.

For what it's worth Apple hasn't been harmed by ignoring the enterprise, as its consumer electronics business has more than made up for the losses they've incurred. Still I often find users complaining about how their work computers can't be more like their Macs at home, ignorant of the fact that Apple in the enterprise would be an absolutely atrocious experience. Indeed it's looking to get worse as Apple moves towards iPhoneizing their entire product range including, unfortunately, OSX. I doubt Apple will ever change direction on this, which is a real shame as OSX is the only serious competitor to Microsoft's Windows.

Virtual Machine CPU Over-provisioning: Results From The Real World.

Back when virtualization was just starting to make headway into the corporate IT market the main aim of the game was consolidation. Vast quantities of CPU, memory and disk resources were being squandered as servers sat idle for the vast majority of their lives, barely ever using the capacity that was assigned to them. Virtualization gave IT shops the ability to run many low resource servers on the one box, significantly reducing hardware costs whilst providing a whole host of other features. It followed then that administrators looked towards over-provisioning their hosts, i.e. creating more virtual machines than the host was technically capable of handling.

This works because of a feature of virtualization platforms called scheduling. In essence, when you put a virtual machine on an over-provisioned host it is not guaranteed to get resources when it needs them; instead it's scheduled on and off the physical CPUs in order to keep it and all the other virtual machines running properly. Surprisingly this works quite well as, for the most part, virtual machines spend a good part of their life idle and the virtualization platform uses this information to schedule busy machines ahead of idle ones. Recently I was approached to find out what the limits were of a new piece of hardware that we had procured and I've discovered some rather interesting results.

The piece of kit in question is a Dell M610x blade server with the accompanying chassis and interconnects. The specifications we got were pretty good, being a dual processor arrangement (2 x Intel Xeon X5660) with 96GB of memory. What we were trying to find out was what kind of guidelines we should have around how many virtual machines could comfortably run on such hardware before performance started to degrade. There was no such testing done with previous hardware so I was working in the dark on this one, and so I devised my own test methodology in order to figure out the upper limits of over-provisioning in a virtual world.

The primary performance bottleneck for any virtual environment is the disk subsystem. You can have the fastest CPUs and oodles of RAM and still get torn down by slow disk. However most virtual hosts will use some form of shared storage, so testing that is out of the equation. The two primary resources we're left with then are CPU and memory, and the latter is already a well known problem space. However I wasn't able to find any good articles on CPU over-provisioning, so I devised some simple tests to see how the system would perform under a load well above its capabilities.

The first test was a simple baseline: since the server has 12 available physical cores (Hyper-Threading might say you get another 12 logical cores, but that's a pipe dream) I created 12 virtual machines, each with a single core. I then fully loaded the CPUs to max capacity. Shown below is a stacked graph of each virtual machine's ready time, which is a representation of how long the virtual machine was ready¹ to execute some instruction but was not able to get scheduled onto the CPU.

The initial part of this graph shows the machines all at idle. Now you'd think at that stage their ready times would be zero since there's no load on the server. However, since VMware's hypervisor knows when a virtual machine is idle, it won't schedule it on as often because the idle loops are simply wasted CPU cycles. The jumpy period after that is when I was starting up a couple of virtual machines at a time and, as you can see, those virtual machines' ready times drop to 0. The very last part of the graph shows the ready time rocketing down to nothing for all the virtual machines, with the top grey part of the graph being the ready time of the hypervisor itself.

This test doesn't show anything revolutionary as this is pretty much the expected behaviour of a virtualized system. It does however provide us with a solid baseline against which we can compare further tests. The next test I performed was to see what would happen when I doubled the work load on the server, increasing the virtual core count from 12 to a whopping 24.

For comparison's sake the first graph's peak is equivalent to the first peak of the second graph. What this shows is that when the CPU is oversubscribed by 100% the CPU wait times rocket through the roof, with the virtual machines waiting up to 10 seconds in some cases to get scheduled back onto the CPU. The average was somewhere around half a second which, for most applications, is an unacceptable amount of time. Just imagine trying to use your desktop and having it freeze for half a second every 20 seconds or so; you'd say it was unusable. Taking this into consideration we now know that there must be a happy medium somewhere in between. The next test then aimed right bang in the middle of these two extremes, putting 18 virtual CPUs on a 12 core host.

Here's where it gets interesting. The graph depicts the same test running over the entire time but, as you can see, there are very distinct sections depicting what I call different modes of operation. The lower end of the graph shows a period when the scheduler is hitting its marks and the wait times are overall quite low. The second is when the scheduler gives much more priority to the virtual machines that are thrashing their cores and the machines that aren't doing anything get pushed to the side. However in both instances the 18 cores running are able to get serviced within a maximum of 20 milliseconds or so, well within the acceptable range of most programs and user experience guidelines.

Taking this all into consideration it's reasonable to say that the maximum you can oversubscribe a virtual host, in regards to CPU, is 1.5 times the number of physical cores. You can extrapolate that further by taking the average load into consideration: if it's constantly below 100% then you can divide that vCPU count by the average load. For example, if the average load of these virtual machines was 50% then theoretically you could support 36 single core virtual machines on this particular host. Of course once you get into very high CPU counts things like overhead start to come into consideration, but as a hard and fast rule it works quite well.
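As a quick sanity check of that rule of thumb, here's a tiny sketch of the arithmetic; the 1.5 multiplier and the example figures are simply the ones from the tests above, so treat it as a guideline calculator rather than a guarantee.

```csharp
using System;

class OversubscriptionGuide
{
    // Rule of thumb from the tests above: ~1.5 vCPUs per physical core at 100% guest load,
    // scaled up further when the guests' average CPU load is below 100%.
    static int MaxSingleCoreVms(int physicalCores, double averageGuestLoad)
    {
        double atFullLoad = physicalCores * 1.5;
        return (int)Math.Floor(atFullLoad / averageGuestLoad);
    }

    static void Main()
    {
        Console.WriteLine(MaxSingleCoreVms(12, 1.0));  // 18  (fully loaded guests)
        Console.WriteLine(MaxSingleCoreVms(12, 0.5));  // 36  (guests averaging 50% load)
    }
}
```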

If I'm honest I was quite surprised by these results as I thought that once I put a single extra thrashing virtual machine on the server it'd fall over in a screaming heap with the additional load. It seems though that VMware's scheduler is smart enough to service a load much higher than what the server should be capable of without affecting the other virtual machines too adversely. This is especially good news for virtual desktop deployments as typically the limiting factor there was the number of CPU cores available. If you're an administrator of a virtual deployment I hope you found this informative and that it helps you when planning future virtual deployments.

¹CPU ready time was chosen as the metric as it most aptly showcases a server's ability to service a virtual machine's request for CPU time in a heavy scheduling scenario. Usage wouldn't be an accurate metric to use since, for all these tests, the blade was 100% utilized no matter the number of virtual machines running.
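If you want to reproduce this, note that vCenter exposes ready time as a summation in milliseconds accumulated over each sampling interval; to my knowledge the usual conversion to a percentage is simply ready milliseconds divided by the interval length, as in the sketch below (the 20 second realtime interval is the common default, so treat that figure as an assumption for your own environment).

```csharp
using System;

class ReadyTime
{
    // Convert a CPU ready summation (milliseconds accumulated over one sampling
    // interval) into a percentage of that interval.
    static double ReadyPercent(double readyMilliseconds, double intervalSeconds)
    {
        return readyMilliseconds / (intervalSeconds * 1000.0) * 100.0;
    }

    static void Main()
    {
        // e.g. 500 ms of ready time in a 20 second realtime sample ≈ 2.5% ready
        Console.WriteLine(ReadyPercent(500, 20));
    }
}
```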

VMware’s Demise? More Like The Rise of Haggling.

In the eyes of corporate IT shops the word virtualization is synonymous with the VMware brand. The reason for this is simple: VMware was first to market with solutions that could actually deliver tangible results to the business. VMware then made the most of this first mover advantage, quickly diversifying their product portfolio away from straight up virtualization into a massive service catalogue that no competitor has yet matched. There's no denying that they're the most pricey of the solutions, but many IT shops have been willing to wear the costs due to the benefits they receive. However in the past couple of years the competitors, namely Hyper-V and Xen, have started to catch up in features and this has seen many IT shops questioning their heavy investment in VMware.

Undoubtedly this dissatisfaction with VMware's products has been catalysed by the licensing change in vSphere 5, which definitely gave the small to medium section of the market some pause when it came to keeping VMware as a platform. For larger enterprises it wasn't so much of a big deal since realistically they'd already licensed most of their capacity anyway. Still it's been enough for most of them to cast a careful eye over their current spend on VMware's products and see if there's perhaps a better way to spend all that cash. Indeed a recent survey commissioned by Veeam showed that 38% of virtualized businesses were looking to switch platforms in the near future.

The report doesn't break down exactly which platform they're switching from and to, but since the 3 biggest reasons cited are cost, alternative hypervisor features and licensing model (all long time complaints about the VMware platform) it's a safe bet that most of those people are considering changing from VMware to another platform (typically Hyper-V). Indeed I can add that, anecdotally, the costs of VMware are enough now that businesses are seriously considering the platform swap because of the potential savings from a licensing perspective. Hyper-V is the main contender because most virtualization is done with Windows servers and under the typical licensing agreements the hypervisor is usually completely free. Indeed even the most basic of Windows server licenses gives you 1 free virtual machine to play with and it just gets better from there.

But why are so many considering switching from the market leader now, when the problems cited have been around for nearly half a decade? For the most part it has to do with the alternatives finally reaching feature parity with VMware when it comes to base level functionality. For the longest time VMware was the only one capable of doing live migrations between hosts, with technology they called vMotion. Xen caught up quickly but their lack of Windows support meant that it saw limited use in corporate environments, even after that support was added shortly afterwards. Hyper-V on the other hand struggled to get it working, only delivering it with Server 2008 R2. With Windows 2003 and XP now on the way out many IT shops are looking to upgrade to 2008 R2 and that's when they notice the capabilities of Hyper-V.

Strictly speaking though I'd say that whilst there are a good few people considering making the jump from VMware to another hypervisor, the majority are only doing so in order to get a better deal out of VMware. Like any business arrangement the difference between the retail price and the actual price anyone pays is quite large and VMware is no exception to this rule. I've seen quite a few decision makers wave the Hyper-V card without even the most rudimentary understanding of what its capabilities are, nor any concrete plans to put it in motion. There's also the fact that if you're based on VMware now and you switch to another platform you're going to have to make sure all your staff are retrained on the new product, a costly and time consuming exercise. So whilst the switch from VMware may look like the cheaper option if you just look at the licensing, there's a whole swath of hidden and intangible costs that need to be taken into consideration.

So with all that said, is VMware staring down the barrel of an inevitable demise? I don't believe so; their market capture and product lead mean that they've got a solid advantage over everyone in the market. Should the other hypervisors begin eating away at their market share they have enough of a lead to be able to react in time, either by significantly reducing their prices or simply innovating their way ahead again. I will be interested to see how these figures shape up in, say, 3/9/12 months from now and whether those 38%ers made good on their pledge to change platforms, but I'm pretty sure I know the outcome already.

Virtualized Smartphones: No Longer a Solution in Search of a Problem.

It was just under 2 years ago that I wrote my first (and only) post on smartphone virtualization, approaching it with the enthusiasm that I do most cool new technologies. At the time I guessed that VMware would eventually look to integrate this idea with some of their other products, in essence turning users' phones into dumb terminals so that IT administrators could have more control over them. However the exact usefulness was still not clear as, at the time, most smartphones were only just capable of running a single instance, let alone another one with all the virtualization trimmings that'd inevitably slow it down. Android was also somewhat of a small time player back then, having only 5% of the market (similar to Windows Phone 7 at the same stage in its life, funnily enough), making this a curiosity more than anything else.

Of course a lot has changed in the time between that post and now. The then market leader, RIM, is now struggling with single digit market share when it used to make up almost half the market. Android has succeeded in becoming the most popular platform, surpassing Apple who had held the crown for many years prior. Smartphones have also become wildly more powerful, with many of them touting dual cores, oodles of RAM and screen resolutions that would make my teenage self green with envy. With this all in mind the idea of running some kind of virtualized environment on a smartphone doesn't seem all that ludicrous any more.

Increasingly IT departments are dealing with users who want to integrate their mobile devices with their work space in lieu of using a separate, work specific device. Much of this pressure came initially from the iPhone, with higher ups wondering why they couldn't use their devices to access work related data. For us admin types the reasons were obvious: it's an unapproved, untested device which by rights has no business being on the network. However the pressure to capitulate to their demands was usually quite high and workarounds were sought. Over the years these have taken many different forms, but the best answer would appear to lie within the world of smartphone virtualization.

VMware have been hard at work creating full blown virtualization systems for Android that give a user a single device containing both their personal handset and a secure, work approved environment. In essence there's an application that lets the user switch between the two, letting them have whatever handset they want whilst still allowing IT administrators to create a standard, secure work environment. Android is currently the only platform that seems to support this wholly, thanks to its open source status, although there are rumours of it coming to the iOS line of devices as well.

It doesn't stop there either. I predicted that VMware would eventually integrate their smartphone virtualization technology into their View product, mostly so that the phones would just end up being dumb terminals. This hasn't happened exactly, but VMware did go ahead and imbue their View product with the ability to present full blown workstations to tablets and smartphones through a secure virtual machine running on said devices. This means that you could potentially have your entire workforce running off smartphones with docking stations, enabling users to take their work environment with them wherever they want to go. It's shockingly close to Microsoft's Three Screens idea and, with Google announcing that Android apps are now portable to Google TV devices, you'd be forgiven for thinking that they outright copied the idea.

For most regular users these kinds of developments don't mean a whole lot, but they do signal the beginning of the convergence of many disparate experiences into a single unified one. Whilst I'm not going to say that any one platform will eventually kill off the others (each one of the three screens has a distinct purpose) we will see a convergence in the capabilities of each platform, enabling users to do all the same tasks no matter what platform they are using. Microsoft and VMware are approaching this idea from two very different directions, with the former unifying the development platform and the latter abstracting it away, so it will be interesting to see which approach wins out or if they too eventually converge.

VMware vSphere 5: Technologically Awesome, Financially Painful.

I make no secret of the fact that I've pretty much built my career around a single line of products, specifically those from VMware. Initially I simply used their workstation line of products to help me through university projects that required Linux to complete but, after one of my bosses caught wind of my "experience" with VMware's products, I was put on the fast track to becoming an expert in their technology. The timing couldn't have been more perfect as virtualization then became a staple of every IT department I've had the pleasure of working with and my experience with VMware ensured that my resume always floated around near the top when it came time to find a new position.

In this time I've had a fair bit of experience with their flagship product, now called vSphere. In essence it's an operating system you can install on a server that lets you run multiple, distinct operating system instances on top of it. Since IT departments always bought servers with more capacity than they needed, systems like vSphere meant they could use that excess capacity to run other, not so power hungry systems alongside them. It really was a game changer and from then on servers were usually bought with virtualization as the key purpose in mind rather than for a specific system. VMware is still the leader in this sector, holding an estimated 80% of the market, and has arguably the most feature rich product suite available.

Yesterday saw the announcement of their latest product offering, vSphere 5. From a technological standpoint it's very interesting, with many innovations that will put VMware even further ahead of their competition, at least technologically. Amongst the usual fanfare of bigger and better virtual machines and improvements to their current technologies, vSphere 5 brings with it a whole bunch of new features aimed squarely at making vSphere the cloud platform for the future. Primarily these innovations are centred around automating certain tasks within the data centre, such as provisioning new servers and managing server loads right down to the disk level, which wasn't available previously. Considering that I believe the future of cloud computing (at least for government organisations and large scale in house IT departments) is a hybrid public/private model, these improvements are a welcome change, even if I won't be using them immediately.

The one place that VMware falls down and is (rightly) heavily criticized for is the price. With the most basic licenses costing around $1000 per processor it's not a cheap solution by any stretch of the imagination, especially if you want to take advantage of any of the advanced features. Still, since the licensing was per processor it meant that you could buy a dual processor server (each with, say, 6 cores) with oodles of RAM and still come out ahead of other virtualization solutions. However with vSphere 5 they've changed the way they do pricing significantly, to the point of destroying that strategy (and those potential savings) along with it.

Licensing is still charged on a per-processor basis but instead of having an upper limit on the amount of memory (256GB for most licenses, Enterprise Plus gives you unlimited) you are now given a vRAM allocation per licence purchased. Depending on your licensing level you'll get 24GB, 32GB or 48GB worth of vRAM which you're allowed to allocate to virtual machines. Now for typical smaller servers this won't pose much of a problem, as a dual proc, 48GB RAM server (which is very typical) would be covered easily by the cheapest licensing. However should you exceed even 96GB of RAM, which is very easy to do, that same server will then require additional licenses to be purchased in order to fully utilize the hardware. For smaller environments this has the potential to make VMware's virtualization solution untenable, especially when you put it beside the almost free competitor of Hyper-V from Microsoft.
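To put some rough numbers on that, here's a quick sketch of the arithmetic using the entitlements above; it's a per-host simplification that ignores the pooling across the data centre mentioned below, and assumes you still need at least one licence per physical processor.

```csharp
using System;

class VRamLicensing
{
    // Rough sketch of the vSphere 5 vRAM model described above: one licence per processor
    // as a floor, plus enough licences to cover the vRAM actually allocated to VMs.
    static int LicensesRequired(int processors, int allocatedVRamGb, int entitlementGbPerLicense)
    {
        int forVRam = (int)Math.Ceiling((double)allocatedVRamGb / entitlementGbPerLicense);
        return Math.Max(processors, forVRam);
    }

    static void Main()
    {
        // Dual-proc host with 48GB allocated on a 24GB entitlement: 2 licences, same as before.
        Console.WriteLine(LicensesRequired(2, 48, 24));   // 2

        // The same host loaded up to 96GB allocated: suddenly 4 licences for 2 processors.
        Console.WriteLine(LicensesRequired(2, 96, 24));   // 4
    }
}
```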

The VMware user community has, of course, not reacted positively to this announcement. Whilst for many larger environments the problems won't be so bad, as the vRAM allocation is done at the data centre level and not the server level (allowing smaller servers with spare entitlement to help out their beefier brethren), it does have the potential to hurt smaller environments, especially those who invested heavily in RAM heavy, processor poor servers. It's also compounded by the fact that you'll only have a short time to choose to upgrade for free, thus risking having to buy more licenses, or abstain and then later have to pay an upgrade fee. It's enough for some to start looking into moving to the competition, which could cut into VMware's market share drastically.

The reasoning behind these changes is simple: such pricing is much more favourable to a ubiquitous cloud environment than it is to the current industry norm for VMware deployments. VMware might be slightly ahead of the curve on this one however, as most customers are not ready to deploy their own internal clouds, with the vast majority of current cloud users opting for hosted solutions. Additionally many common enterprise applications aren't compatible with VMware's cloud and thus lock end users out of realising the benefits of a private cloud. VMware might be choosing to bite the bullet now rather than later in the hope it will spur movement onto their cloud platform. Whether this strategy works or not remains to be seen, but current industry trends are pushing very hard towards a cloud based future.

I'm definitely looking forward to working with vSphere 5 and there are several features that will provide an immense amount of value to my current environment. The licensing change, whilst I feel it won't be much of a problem for my environment, is cause for concern, and whilst I don't believe VMware will budge on it any time soon I do know that the VMware community is an innovative lot and it won't be long before they work out how to make the best of this licensing situation. Still it's definitely an in for the competition and, whilst they might not have the technological edge, they're more than suitable for many environments.

Business Intelligence, Green Computing and Virtualization.

On the surface the world of IT seems to be heavily focused on the Internet and the current social networking revolution. I've ranted and raved about social networking before but what most people will miss are the trends that go on behind the scenes of these web giants. I'm sure all my technically inclined readers will see the above title and groan (as I did when these terms were first thrust upon me) but there is an interesting story behind each of the buzz terms.

Virtualization is a word that has become a household name in all IT shops. The concept is simple: take one large piece of hardware and carve it up into smaller pieces so that you can get more out of it than if you used it as one big one. My first encounter with this kind of software was running Linux machines at home for my university projects. I was lucky enough to begin my system administrator career just as virtualization began making headway into large corporate environments and the little experience I had translated into my foot in the door for many large virtualization projects.

Arguably the most successful proprietor of virtualization would be VMware, who started way back in 1998 and released their first product a year later. They were my first foray into the world of virtualization and were the ones that I, and many others, have built our IT careers off of. Many other players have joined since then, each bringing something unique to the market. In essence virtualization gives IT based companies the ability to rapidly expand their business by leveraging their current investments.

Green computing is something I've mentioned in the past and it's interesting to note that it was probably the next big thing after virtualization to rock the IT infrastructure world. With people virtualizing and consolidating their workloads, many found their remaining hosts using the majority of their resources. This in turn meant that these machines were working harder and using more electricity and cooling. Along came the green movement, with vendors looking to cash in on companies trying to improve their corporate image and goodwill by taking up a green initiative. Due in part to the economic downturn you won't hear many people talking up the green-ness of their IT centre anymore, although power usage reduction through technology like blade servers is something that quite a lot of shops are looking at to save costs.

The next big idea that I see coming along is that of business intelligence. I've begun noticing that over the past few years end users' trust in their IT departments has been increasing. As anecdotal as this is, the IT departments at the last 2 places I worked were initially distrusted by their end users. As time went by I could see the change in perception in the business areas I was interacting with: instead of telling me what they wanted, they asked what the best solution would be. The business minded amongst us would recognise this as a step up in the capability maturity of the organisation, which shifts end users' expectations away from IT as a utility and more towards IT as a service.

Once such abstraction takes place however it is hard for business units to realise the value that IT is providing to the organisation as a whole. Enter business intelligence solutions, which gather detailed metrics on an environment in order to give business minded people the right information to make judgements about the direction of their IT environment. As more and more environments mature in their capability these kinds of intelligence solutions are going to become critical if IT is required to justify its existence, just as if it were a profit centre.

And yes I know you could play buzzword bingo with this post, but I’ll forgive you if you do 😛