Posts Tagged ‘vmware’

VMware Has Always Been Playing The Long Game.

VMware has always been the market leader in terms of functionality in the virtualization space. Initially this was because they were the only real player in the market, with every other alternative being either far too specific for widespread adoption or, dare I say it, too hard for your run-of-the-mill system administrator to understand. That initial momentum allowed them to stay ahead of the curve for quite a long time, enabling them to justify their licensing fees based on the functionality they could deliver. In recent years however the fundamental features required of a base hypervisor have, in essence, reached parity across all the major players, seemingly eliminating the first-to-market advantage that VMware had been exploiting for the better part of a decade.

However it’s not like VMware wasn’t aware of this. Back when I first started doing large virtualization projects the features of the base hypervisor were very rarely the first things you’d discuss with your local VMware representative. Indeed they were much more focused on the layers they could provide on top of the base hypervisor. Whilst Microsoft and Citrix struggled for a long time to provide even the most basic of services like vMotion/Live Migration, VMware knew it was only a matter of time before those base products offered feature parity with their own. As such VMware now has an extensive catalogue of value-add products for environments based on their hypervisor, and that’s where the true value is.

Which is why I get surprised when I see articles like this one from Ars Technica. There’s no doubting that VMware is undergoing a small transformation at the moment, having back-pedalled on the controversial vRAM issue and even taken the unprecedented step of joining OpenStack. However their lead in terms of functionality and value-add services for their hypervisor really can’t be matched by any of the current competitors, and this is why they can truthfully say that they still have the upper hand. Just take a look at the features being offered in Hyper-V 3.0 and then look up how long VMware has had each of them. The vast majority have been available for years through VMware and are only just becoming available for Hyper-V.

Having a feature first might not sound like a big advantage when most people only want your hypervisor, but it can be a critical factor, especially for risk-averse organisations. Being able to demonstrate that a feature has been developed, released and used in the field gives those kinds of customers the confidence they need in order to use it. Most organisations won’t trust a new version of Windows until the first service pack is out, and it’s been my experience that the same thinking applies to hypervisors as well. Microsoft might be nipping at VMware’s heels but they’ve still got a lot of ground to make up before they’re in contention for the virtualization crown.

Indeed I believe their current direction is indicative of how they see the virtualization market transforming and how they fit into it. Undeniably the shift is now away from pure virtualization and towards cloud services, and with so many big players backing OpenStack it would be foolish of them to ignore it lest they be left behind or seen as a walled garden solution in an increasingly open world. They certainly don’t have the market dominance they used to, however the market has grown significantly in the time they’ve been active and thus complete domination of it is no longer necessary for them to remain highly profitable. VMware will still have to be careful though, as Microsoft could very well eat their lunch should they try to rest on their laurels.

VMware VIM SDK Gotchas (or Ghost NICs, Why Do You Haunt Me So?).

I always tell people that on the surface VMware’s products are incredibly simple and easy to use, and for the most part that’s true. Anyone who’s installed an operating system can easily get a vSphere server up and running in no time at all and have a couple of virtual machines up not long after. Of course with any really easy to use product the surface usability comes from an underlying system that’s incredibly complex. Those daring readers who read my last post on modifying ESXi to grant shell access to non-root users got just a taste of how complicated things can be, and the deeper you dive into VMware’s world the more complicated things become.

I had a rather peculiar issue come up with one of the tools that I had developed. This tool wasn’t anything horribly complicated; all it did was change the IP address of some Windows servers and their ESXi hosts whilst switching the network over from the build VLAN to their proper production one. For the most part the tool worked as advertised and never encountered any errors, on its side at least. However people were noticing something strange about the servers that were being configured using my tool: some were coming up with “Local Area Network 2” and “vmxnet3 Ethernet Adapter #2” as their network connection. This was strange as I wasn’t adding in any new network cards anywhere and it wasn’t happening consistently. Frustrated, I dove into my code looking for answers.

After a while I figured the only place the error could be originating from was where I changed the server over from the build VLAN to the production one. Here’s the code I used to make the change, which I lifted from performing the same action in the VI Client proxied through Onyx:

            // Find the VM by name and pull back its current hardware configuration.
            NameValueCollection Filter = new NameValueCollection();
            Filter.Add("name", "^" + ServerName);
            VirtualMachine Guest = (VirtualMachine)Client.FindEntityView(typeof(VirtualMachine), null, Filter, null);
            VirtualMachineConfigInfo Info = Guest.Config;

            // Walk the device list looking for the vmxnet3 NIC and remember the device and its key.
            // (Assumes the VM has a single vmxnet3 adapter, which our build process guarantees.)
            VirtualDevice NetworkCard = new VirtualDevice();
            int DeviceKey = 4000;
            foreach (VirtualDevice Device in Info.Hardware.Device)
            {
                String Identifier = Device.ToString();
                if (Identifier == "VMware.Vim.VirtualVmxnet3")
                {
                    DeviceKey = Device.Key;
                    NetworkCard = Device;
                    Console.WriteLine("INFO - Device key for network card found, ID: " + DeviceKey);
                }
            }
            VirtualVmxnet3 Card = (VirtualVmxnet3)NetworkCard;

            // Build a brand new vmxnet3 device, copy the existing card's settings onto it
            // property by property and point its backing at the Production network.
            VirtualMachineConfigSpec Spec = new VirtualMachineConfigSpec();
            Spec.DeviceChange = new VirtualDeviceConfigSpec[1];
            Spec.DeviceChange[0] = new VirtualDeviceConfigSpec();
            Spec.DeviceChange[0].Operation = VirtualDeviceConfigSpecOperation.edit;
            Spec.DeviceChange[0].Device = new VirtualVmxnet3();
            Spec.DeviceChange[0].Device.Key = DeviceKey;
            Spec.DeviceChange[0].Device.DeviceInfo = new VMware.Vim.Description();
            Spec.DeviceChange[0].Device.DeviceInfo.Label = Card.DeviceInfo.Label;
            Spec.DeviceChange[0].Device.DeviceInfo.Summary = "Build";
            Spec.DeviceChange[0].Device.Backing = new VMware.Vim.VirtualEthernetCardNetworkBackingInfo();
            ((VirtualEthernetCardNetworkBackingInfo)Spec.DeviceChange[0].Device.Backing).DeviceName = "Production";
            ((VirtualEthernetCardNetworkBackingInfo)Spec.DeviceChange[0].Device.Backing).UseAutoDetect = false;
            ((VirtualEthernetCardNetworkBackingInfo)Spec.DeviceChange[0].Device.Backing).InPassthroughMode = false;
            Spec.DeviceChange[0].Device.Connectable = new VMware.Vim.VirtualDeviceConnectInfo();
            Spec.DeviceChange[0].Device.Connectable.StartConnected = Card.Connectable.StartConnected;
            Spec.DeviceChange[0].Device.Connectable.AllowGuestControl = Card.Connectable.AllowGuestControl;
            Spec.DeviceChange[0].Device.Connectable.Connected = Card.Connectable.Connected;
            Spec.DeviceChange[0].Device.Connectable.Status = Card.Connectable.Status;
            Spec.DeviceChange[0].Device.ControllerKey = NetworkCard.ControllerKey;
            Spec.DeviceChange[0].Device.UnitNumber = NetworkCard.UnitNumber;
            ((VirtualVmxnet3)Spec.DeviceChange[0].Device).AddressType = Card.AddressType;
            ((VirtualVmxnet3)Spec.DeviceChange[0].Device).MacAddress = Card.MacAddress;
            ((VirtualVmxnet3)Spec.DeviceChange[0].Device).WakeOnLanEnabled = Card.WakeOnLanEnabled;

            // Apply the reconfiguration; this is the call that (sometimes) spawned the ghost NIC.
            Guest.ReconfigVM_Task(Spec);

My first inclination was that I was getting the DeviceKey wrong, which is why you see me iterating through all the devices to try and find it. After running the tool many times over though it seemed my initial idea of just using 4000 would have worked, since they all had the same device key anyway (thanks to all being built in the same way). Now according to the VMware API documentation on this function nearly all of those parameters you see up there are optional, and earlier revisions of the code included only enough to change the DeviceName to Production without the API throwing an error at me. Frustrated, I added in all the other parameters only to be greeted by the dreaded #2 NIC upon reboot.

It wasn’t going well for me, I can tell you that.

After digging around in the API documentation for hours and fruitlessly searching the forums for someone who had hit the same issue, I went back to tweaking the code to see what I could come up with. I was basically passing all the information I could back to it but the problem still persisted with certain virtual machines. It then occurred to me that I could in fact pass the existing network card object back as a parameter and change only the parts I wanted to. Additionally I found out where to get the current ChangeVersion of the VM’s configuration, and with both of these combined I was able to change the network VLAN successfully without generating another NIC. The resultant code is below.

            // Take two: pass the existing card object straight back, change only the backing
            // network and include the config's ChangeVersion so vCenter knows exactly which
            // version of the configuration is being edited.
            VirtualVmxnet3 Card = (VirtualVmxnet3)NetworkCard;
            VirtualMachineConfigSpec Spec = new VirtualMachineConfigSpec();
            Spec.DeviceChange = new VirtualDeviceConfigSpec[1];
            Spec.ChangeVersion = Guest.Config.ChangeVersion;
            Spec.DeviceChange[0] = new VirtualDeviceConfigSpec();
            Spec.DeviceChange[0].Operation = VirtualDeviceConfigSpecOperation.edit;
            Spec.DeviceChange[0].Device = Card;
            ((VirtualEthernetCardNetworkBackingInfo)Spec.DeviceChange[0].Device.Backing).DeviceName = "Production";
            Guest.ReconfigVM_Task(Spec);

What gets me about this whole thing is that the VMware API documentation says all the other parameters are optional when it’s clear that there’s some unexpected behaviour when they’re not supplied. The strange thing is that if you check the network cards right after making this change they will appear to be fine; it’s only after a reboot (and only on Windows guests, I haven’t tested Linux) that these issues occur. Whether this is a fault of VMware, Microsoft or somewhere between the keyboard and chair is an exercise I’ll leave up to the reader, but it does feel like there’s an issue with the VIM API. I’ll be bringing this up with our Technical Account Manager at our next meeting and I’ll post an update should I find anything out.


Virtual Machine CPU Over-provisioning: Results From The Real World.

Back when virtualization was just starting to make headway into the corporate IT market the main aim of the game was consolidation. Vast quantities of CPU, memory and disk resources were being squandered as servers sat idle for the vast majority of their lives, barely ever using the capacity that was assigned to them. Virtualization gave IT shops the ability to run many low resource servers on the one box, significantly reducing hardware costs whilst providing a whole host of other features. It followed then that administrators looked towards over-provisioning their hosts, i.e. creating more virtual machines than the host was technically capable of handling.

The reason this works is because of a feature of virtualization platforms called scheduling. In essence when you put a virtual machine on an over-provisioned host it is not guaranteed to get resources when it needs them; instead it’s scheduled on and off the physical CPUs in order to keep it and all the other virtual machines running properly. Surprisingly this works quite well, as for the most part virtual machines spend a good part of their life idle and the virtualization platform uses this information to schedule busy machines ahead of idle ones. Recently I was approached to find out what the limits were of a new piece of hardware that we had procured and I’ve discovered some rather interesting results.

The piece of kit in question is a Dell M610x blade server with the accompanying chassis and interconnects. The specifications we got were pretty good: a dual processor arrangement (2 x Intel Xeon X5660) with 96GB of memory. What we were trying to find out was what kind of guidelines we should have around how many virtual machines could comfortably run on such hardware before performance started to degrade. There was no such testing done with previous hardware so I was working in the dark on this one, and I devised my own test methodology in order to figure out the upper limits of over-provisioning in a virtual world.

The primary performance bottleneck for any virtual environment is the disk subsystem. You can have the fastest CPUs and oodles of RAM and still get torn down by slow disk. However most virtual hosts will use some form of shared storage, so testing that is out of the equation. The two primary resources we’re left with then are CPU and memory, and the latter is already a well known problem space. I wasn’t able to find any good articles on CPU over-provisioning however, so I devised some simple tests to see how the system would perform when under a load well above its capabilities.
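
The load itself doesn’t need to be anything fancy; the tests just need every vCPU in every guest pegged at 100%, and something as crude as the little spin-loop sketched below (a hypothetical stand-in, not necessarily what was used for these tests), run inside each guest, will do the job:

    // Hypothetical CPU burner: start one spinning thread per logical CPU in the guest.
    using System;
    using System.Threading;

    class CpuBurn
    {
        static void Main()
        {
            int cores = Environment.ProcessorCount;
            for (int i = 0; i < cores; i++)
            {
                var burner = new Thread(() => { while (true) { } });
                burner.Start();
            }
            Console.WriteLine("Burning " + cores + " core(s), Ctrl+C to stop.");
            Thread.Sleep(Timeout.Infinite);
        }
    }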

The first test was a simple baseline: since the server has 12 physical cores available (HyperThreading might tell you there are more, but that’s a pipe dream) I created 12 virtual machines, each with a single core, and then fully loaded their CPUs. Shown below is a stacked graph of each virtual machine’s ready time, which is a representation of how long the virtual machine was ready¹ to execute some instruction but was not able to get scheduled onto the CPU.

The initial part of this graph shows the machines all at idle. Now you’d think at that stage their ready times would be zero since there’s no load on the server; however since VMware’s hypervisor knows when a virtual machine is idle it won’t schedule it on as often, as the idle loops are simply wasted CPU cycles. The jumpy period after that is when I was starting up a couple of virtual machines at a time and, as you can see, those virtual machines’ ready times drop to 0. The very last part of the graph shows the ready time rocketing down to nothing for all the virtual machines, with the top grey part of the graph being the ready time of the hypervisor itself.

This test doesn’t show anything revolutionary as this is pretty much the expected behaviour of a virtualized system. It does however provide us with a solid baseline from which we can draw conclusions in further tests. The next test I performed was to see what would happen when I doubled the workload on the server, increasing the virtual core count from 12 to a whopping 24.

For comparison’s sake the first graph’s peak is equivalent to the first peak of the second graph. What this shows is that when the CPU is oversubscribed by 100% the ready times rocket through the roof, with the virtual machines waiting up to 10 seconds in some cases to get scheduled back onto the CPU. The average was somewhere around half a second, which for most applications is an unacceptable amount of time. Just imagine trying to use your desktop and having it freeze for half a second every 20 seconds or so; you’d say it was unusable. Taking this into consideration we now know that there must be a happy medium somewhere in between. The next test then aimed right bang in the middle of these two extremes, putting 18 virtual CPUs on the 12 core host.

Here’s where it gets interesting. The graph depicts the same test running over the entire time but as you can see there are very distinct sections depicting what I call different modes of operation. The lower end of the graph shows a time when the scheduler is hitting its marks and the wait times are overall quite low. The second is when the scheduler gives much more priority to the virtual machines that are thrashing their cores and the machines that aren’t doing anything get pushed to the side. However in both instances the 18 cores running are able to get serviced in a maximum of 20 milliseconds or so, well within the acceptable range for most programs and user experience guidelines.

Taking this all into consideration it’s then reasonable to say that the maximum you can oversubscribe a virtual host’s CPU is 1.5 times the number of physical cores. You can extrapolate that further by taking the average load into consideration: if it’s consistently below 100% then you can divide the number of supportable vCPUs by that percentage. For example if the average load of these virtual machines was 50% then theoretically you could support 36 single core virtual machines on this particular host. Of course once you get into very high vCPU counts things like overhead start to come into consideration, but as a hard and fast rule it works quite well.
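
To put that rule of thumb into something you can punch your own numbers into, here’s a hypothetical back-of-the-envelope helper; the 1.5 factor and the figures in the comments are simply the ones from above, and the method name is my own:

    // Rough sketch of the CPU oversubscription rule of thumb described above.
    using System;

    class OversubscriptionRule
    {
        // physicalCores * 1.5 gives the vCPU ceiling at 100% load; dividing by the
        // average utilisation extrapolates it out for workloads that aren't flat out.
        static int MaxSingleCoreVms(int physicalCores, double averageUtilisation)
        {
            double ceilingAtFullLoad = physicalCores * 1.5;
            return (int)Math.Floor(ceilingAtFullLoad / averageUtilisation);
        }

        static void Main()
        {
            Console.WriteLine(MaxSingleCoreVms(12, 1.0)); // 18 - the tested sweet spot
            Console.WriteLine(MaxSingleCoreVms(12, 0.5)); // 36 - the 50% average load example
        }
    }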

If I’m honest I was quite surprised with these results as I thought once I put a single extra thrashing virtual machine on the server it’d fall over in a screaming heap with the additional load. It seems though that VMware’s scheduler is smart enough to be able to service a load much higher than what the server should be capable of without affecting the other virtual machines too adversely. This is especially good news for virtual desktop deployments, as typically the limiting factor there was the number of CPU cores available. If you’re an administrator of a virtual deployment I hope you found this informative and that it will help you when planning future virtual deployments.

¹CPU ready time was chosen as the metric as it most aptly showcases a server’s ability to service a virtual machine’s requests of the CPU when in a heavy scheduling scenario. Usage wouldn’t be an accurate metric to use since for all these tests the blade was 100% utilized no matter the number of virtual machines running.
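
As a side note, the raw cpu.ready counter that vCenter hands back is a summation of milliseconds accrued during each sample interval, so converting it into the percentage figures most sizing guides quote is a one-liner. A minimal sketch, assuming the 20 second realtime sample interval:

    // Convert a cpu.ready summation value (milliseconds) into a percentage of the sample interval.
    using System;

    class ReadyTime
    {
        // Realtime stats use 20 second samples; rolled up stats use 300/1800/7200 seconds.
        static double ReadyPercent(long readyMilliseconds, int sampleIntervalSeconds = 20)
        {
            return readyMilliseconds / (sampleIntervalSeconds * 1000.0) * 100.0;
        }

        static void Main()
        {
            Console.WriteLine(ReadyPercent(500));   // ~2.5% - half a second of ready time per sample
            Console.WriteLine(ReadyPercent(10000)); // 50% - the worst case seen in the 24 vCPU test
        }
    }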

VMware’s Demise? More Like The Rise of Haggling.

In the eyes of corporate IT shops the word virtualization is synonymous with the VMware brand. The reason for this is simple: VMware was first to market with solutions that could actually deliver tangible results to the business. VMware then made the most of this first mover advantage, quickly diversifying their product portfolio away from straight up virtualization into a massive service catalogue that no competitor has yet matched. There’s no denying that they’re the priciest of the solutions, but many IT shops have been willing to wear the costs due to the benefits that they receive. However in the past couple of years the competitors, namely Hyper-V and Xen, have started to catch up in features and this has seen many IT shops questioning their heavy investment in VMware.

Undoubtedly this dissatisfaction with VMware’s products has been catalysed by the licensing change in vSphere 5, which definitely gave the small to medium section of the market some pause when it came to keeping VMware as a platform. For larger enterprises it wasn’t so much of a big deal since realistically they’d already licensed most of their capacity anyway. Still it’s been enough for most of them to cast a careful eye over their current spend on VMware’s products and see if there’s perhaps a better way to spend all that cash. Indeed a recent survey commissioned by Veeam showed that 38% of virtualized businesses were looking to switch platforms in the near future.

The report doesn’t break down exactly which platform they’re switching from and to, but since the 3 biggest reasons cited are cost, alternative hypervisor features and licensing model (all long time complaints about the VMware platform) it’s a safe bet that most of those people are considering changing from VMware to another platform (typically Hyper-V). Indeed I can add that anecdotally the costs of VMware are enough now that businesses are seriously considering the platform swap because of the potential savings from a licensing perspective. Hyper-V is the main contender because most virtualization is done with Windows servers and under the typical licensing agreements the hypervisor is usually completely free. Indeed even the most basic of Windows server licenses gives you 1 free virtual machine to play with and it just gets better from there.

But why are so many considering switching from the market leader now, when the problems cited have been around for nearly half a decade? For the most part it has to do with the alternatives finally reaching feature parity with VMware when it comes to base level functionality. For the longest time VMware was the only one capable of doing live migrations between hosts, with technology they called vMotion. Xen caught up quickly but its lack of Windows support meant that it saw limited use in corporate environments, even after that support was added shortly thereafter. Hyper-V on the other hand struggled to get it working, only releasing it with Server 2008 R2. With Windows 2003 and XP now on the way out many IT shops are looking to upgrade to 2008 R2, and that’s when they notice the capabilities of Hyper-V.

Strictly speaking though I’d say that whilst there’s a good few people considering making the jump from VMware to another hypervisor, the majority are only doing so in order to get a better deal out of VMware. Like any business arrangement the difference between the retail price and the price anyone actually pays is quite large, and VMware is no exception to this rule. I’ve seen quite a few decision makers wave the Hyper-V card without even the most rudimentary understanding of what its capabilities are, nor any concrete plans to put it in motion. There’s also the fact that if you’re based on VMware now and you switch to another platform you’re going to have to make sure all your staff are retrained on the new product, a costly and time consuming exercise. So whilst the switch from VMware may look like the cheaper option if you just look at the licensing, there’s a whole swath of hidden and intangible costs that need to be taken into consideration.

So with all that said is VMware staring down the barrel of an inevitable demise? I don’t believe so; their market capture and product lead mean that they’ve got a solid advantage over everyone in the market. Should the other hypervisors begin eating away at their market share they have enough of a lead to be able to react in time, either by significantly reducing their prices or simply innovating their way ahead again. I will be interested to see how these figures shape up in say 3/9/12 months from now to see if those 38%ers made good on their pledge to change platforms, but I’m pretty sure I know the outcome already.


VMware’s vSphere 5: First Impressions.

It’s no secret that I owe a large part of my IT career to virtualization. It was a combination of luck, timing and a willingness to jump into the unknown that led me down the VMware path, with my first workplace using VMware’s products and setting the stage for every job thereafter, each seeing my experience and latching onto it with a crack-junkie like desire. Over the years I’ve become intimately familiar with many virtualization solutions but inevitably I find myself coming back to VMware because, simply put, they’re the market leader and pretty much everyone who can afford to use them does so. So you can imagine I was somewhat excited when I saw the release of vSphere 5, and I’ve been putting it through its paces over the past couple of weeks.

On the surface ESXi 5 and vSphere 5 look almost identical to their predecessors. ESXi 5 is really only distinguishable from 4 thanks to the slightly different layout and changed font, whilst vSphere 5 is exactly the same save for some new icons and additional links to new features. I guess with any new product version I’ve just come to expect a UI revamp even if it adds nothing to the end product, so the fact that VMware decided to stick with their current UI came as somewhat of a surprise, but I can’t really fault them for doing so. The real meat of vSphere 5 is under the hood and there have been some major improvements from my initial testing.

vSphere 5 brings with it Virtual Machine Version 8 which, amongst the usual more CPUs/more memory upgrades, brings support for 3D accelerated graphics, UEFI for the BIOS (which technically means it can run OSX Lion, although that will never happen¹) and USB 3.0. There’s also a few new options available when creating a new virtual machine, like the ability to add virtual sockets (not just virtual cores) and the choice between eager and lazy zeroed disks.
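
For anyone scripting their builds against the VIM SDK (like the code further up this page) the eager/lazy choice appears to surface as the EagerlyScrub flag on the disk’s flat backing. This is a rough, hypothetical fragment only; the property names are taken from the VirtualDiskFlatVer2BackingInfo documentation and the sizes and keys are made up for illustration:

    // Hypothetical sketch: request an eager zeroed thick disk when adding a new virtual disk.
    VirtualDisk Disk = new VirtualDisk();
    VirtualDiskFlatVer2BackingInfo Backing = new VirtualDiskFlatVer2BackingInfo();
    Backing.FileName = "";                // empty name lets vSphere place the VMDK with the VM
    Backing.DiskMode = "persistent";
    Backing.ThinProvisioned = false;      // thick...
    Backing.EagerlyScrub = true;          // ...and zeroed up front (false here gives lazy zeroed)
    Disk.Backing = Backing;
    Disk.CapacityInKB = 20 * 1024 * 1024; // 20GB
    Disk.ControllerKey = 1000;            // first SCSI controller
    Disk.UnitNumber = 1;

    VirtualDeviceConfigSpec DiskChange = new VirtualDeviceConfigSpec();
    DiskChange.Operation = VirtualDeviceConfigSpecOperation.add;
    DiskChange.FileOperation = VirtualDeviceConfigSpecFileOperation.create;
    DiskChange.Device = Disk;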

 

The one overall impression that vSphere 5 has left on me though is that it’s fast, like really fast. The UI is much more responsive, operations that used to take minutes are now done in seconds and in the few performance tests we’ve done ESXi 5 seems to be consistently faster than its 4.1 Update 1 counterpart. According to my sources close to the matter this is because ESXi 5 is all new code from the ground up, enabling them to enhance performance significantly. From my first impressions I’d say that they’ve succeeded in doing this and I’m looking forward to seeing how it handles real production loads in the very near future.

What really amazed me was that a lot of the code I had developed for vSphere 4 was 100% compatible with vSphere 5. I had been dreading having to rewrite the nearly 2,000 lines of code I had developed for the build system in order to get ESXi 5 into our environment, but every command worked without a hitch, showing that VMware’s dedication to backwards compatibility is extremely good, approaching that of the king of compatibility, Microsoft. Indeed those looking to migrate to vSphere 5 don’t have much to worry about as pretty much every feature of the previous version is supported, and migrating to the newer platform is quite painless.

I’ve yet to have a chance to fiddle with some of the new features (like the storage appliance, which looks incredibly cool) but overall my first impressions of vSphere 5 are quite good, along the lines of what I’ve come to expect from VMware. I haven’t run into any major gotchas yet, but I’ve only had a couple of VMs running in an isolated vSphere instance so my sample size is rather limited. I’m sure once I start throwing some real applications at it I’ll run into some more interesting problems, but suffice to say that VMware has done well with this release and I can see vSphere 5 making its home in all IT departments where VMware is already deployed.

¹The stipulation for all Apple products is that they run on Apple hardware, including virtualized instances. Since the only things you can buy with OSX Server installed on them are Mac Mini Servers or Mac Pros, neither of which are on the Hardware Compatibility List, running your own virtualized copies of OSX Server (legitimately) simply can’t happen. Yet I still get looks of amazement when I tell people Apple is a hardware company, figures.

Virtualized Smartphones: No Longer a Solution in Search of a Problem.

It was just under 2 years ago that I wrote my first (and only) post on smartphone virtualization, approaching it with the enthusiasm that I do most cool new technologies. At the time I guessed that VMware would eventually look to integrate this idea with some of their other products, in essence turning users’ phones into dumb terminals so that IT administrators could have more control over them. However the exact usefulness was still not clear, as at the time most smartphones were only just capable of running a single instance, let alone another one with all the virtualization trimmings that’d inevitably slow it down. Android was also somewhat of a small time player back then, having only 5% of the market (similar to Windows Phone 7 at the same stage in its life, funnily enough), making this a curiosity more than anything else.

Of course a lot has changed in the time between that post and now. The then market leader, RIM, is now struggling with single digit market share when it used to make up almost half the market. Android has succeeded in becoming the most popular platform, surpassing Apple who held the crown for many years prior. Smartphones have also become wildly more powerful, with many of them touting dual cores, oodles of RAM and screen resolutions that would make my teenage self green with envy. With all this in mind the idea of running some kind of virtualized environment on a smartphone doesn’t seem all that ludicrous any more.

Increasingly IT departments are dealing with users who want to integrate their mobile devices with their workspace in lieu of using a separate, work specific device. Much of this pressure came initially from the iPhone, with higher-ups wondering why they couldn’t use their devices to access work related data. For us admin types the reasons were obvious: it’s an unapproved, untested device which by rights has no business being on the network. However the pressure to capitulate to their demands was usually quite high and workarounds were sought. Over the years these have taken various forms, but the best answer would appear to lie within the world of smartphone virtualization.

VMware have been hard at work creating full blown virtualization systems for Android that allow a user to have a single device containing both their personal handset and a secure, work approved environment. In essence they have an application that lets the user switch between the two, allowing them to have whatever handset they want whilst still allowing IT administrators to create a standard, secure work environment. Android is currently the only platform that seems to support this wholly thanks to its open source status, although there are rumours of it coming to the iOS line of devices as well.

It doesn’t stop there either. I predicted that VMware would eventually integrate their smartphone virtualization technology into their View product, mostly so that phones would just end up being dumb terminals. This hasn’t happened exactly, but VMware did go ahead and imbue their View product with the ability to present full blown workstations to tablets and smartphones through a secure virtual machine running on said devices. This means that you could potentially have your entire workforce running off smartphones with docking stations, enabling users to take their work environment with them wherever they go. It’s shockingly close to Microsoft’s Three Screens idea and with Google announcing that Android apps are now portable to Google TV devices you’d be forgiven for thinking that they outright copied it.

For most regular users these kinds of developments don’t mean a whole lot, but they do signal the beginning of the convergence of many disparate experiences into a single unified one. Whilst I’m not going to say that any one platform will eventually kill off the others (each of the three screens has a distinct purpose) we will see a convergence in the capabilities of each platform, enabling users to do all the same tasks no matter what platform they are using. Microsoft and VMware are approaching this idea from two very different directions, with the former unifying the development platform and the latter abstracting it away, so it will be interesting to see which approach wins out or if they too eventually converge.

VMware Capitulates, Shocking Critics (Including Me).

It’s a sad truth that once a company reaches a certain level of success they tend to stop listening to their users/customers, since by that point they have enough validation to continue down whatever path suits them. It’s a double edged sword for the company: whilst they now have much more freedom to experiment, since they don’t have to fight for every customer, they also have enough rope to hang themselves should they be too ambitious. This happens more in traditional business than in, say, Web 2.0 companies, since the latter’s bread and butter is their users and the community that surrounds them, leaving them a lot less wiggle room when it comes to going against the grain of their wishes.

I recently blogged about VMware’s upcoming release of vSphere 5, which whilst technologically awesome did have the rather unfortunate aspect of screwing over the small to medium size enterprises that had heavily invested in the platform. At the time I didn’t believe that VMware would change their mind on the issue, mostly because their largest customers would most likely be unaffected by it (especially the cloud providers), but just under three weeks later VMware has announced that they are changing the licensing model, and boy is it generous:

We are a company built on customer goodwill and we take customer feedback to heart.  Our primary objective is to do right by our customers, and we are announcing three changes to the vSphere 5 licensing model that address the three most recurring areas of customer feedback:

  • We’ve increased vRAM entitlements for all vSphere editions, including the doubling of the entitlements for vSphere Enterprise and Enterprise Plus.

  • We’ve capped the amount of vRAM we count in any given VM, so that no VM, not even the “monster” 1TB vRAM VM, would cost more than one vSphere Enterprise Plus license.

  • We adjusted our model to be much more flexible around transient workloads, and short-term spikes that are typical in test & dev environments for example.

The first 2 points are the ones that will matter to most people, with the bottom end licenses getting a 33% boost to 32GB of vRAM allocation and every other licensing level getting their allocations doubled. At the lower end that doesn’t mean a whole bunch, but the standard configuration just gained another 16GB of vRAM which is nothing to sneeze at. At the higher end however these massive increases really start to pile on, especially for a typical configuration of 4 physical CPUs, which now sports a healthy 384GB vRAM allocation with default licensing. The additional caveat that no virtual machine is counted for more than 96GB of vRAM means that licensing costs won’t get out of hand for mega VMs, but in all honesty if you’re running virtual machines that large I’d have to question your use of virtualization in the first place. Additionally the change from a monthly average to a 12 month average for the licensing check does go some way to alleviating the pain that some users would have felt, even though they could’ve worked around it by asking VMware nicely for one of those unlimited evaluation licenses.

What these changes do is make vSphere 5 a lot more feasible for users who have already invested heavily in VMware’s platform. Whilst it’s nowhere near the current 2 processors + gobs of RAM deal that many have been used to, it does make the smaller end of the scale much more palatable, even if the cheapest option will leave you with a meagre 64GB of vRAM to allocate. That’s still enough for many environments to get decent consolidation ratios of say 8 to 1 with 8GB VMs, even if that’s slightly below the desired industry average of 10 to 1. The higher end, whilst a lot more feasible for a small number of ridiculously large VMs, still suffers somewhat as higher end servers will need additional licenses to fully utilize their capacity. Of course not many places will need 4 processor, 512GB beasts in their environments, but it’s still a factor to count against VMware.
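
To put some numbers around that, here’s a hypothetical sketch of the revised entitlement maths; the figures are simply the ones quoted above and the code is only there to make the arithmetic explicit:

    // Back-of-the-envelope maths for the revised vRAM entitlements.
    using System;

    class RevisedVRamMaths
    {
        static void Main()
        {
            // Dual proc Standard host after the change: 2 licences x 32GB = 64GB of vRAM,
            // which with 8GB VMs is the 8 to 1 consolidation ratio mentioned above.
            Console.WriteLine(2 * 32 / 8);                   // 8

            // Quad proc Enterprise Plus host: 4 x 96GB = the 384GB pool with default licensing.
            Console.WriteLine(4 * 96);                       // 384

            // The per-VM cap: even a 1TB "monster" VM only counts 96GB against the pool.
            int monsterVRamGB = 1024;
            Console.WriteLine(Math.Min(monsterVRamGB, 96));  // 96
        }
    }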

The licensing changes from VMware are very welcome and will go a long way for people like me who are trying to sell vSphere 5 to their higher-ups. Whilst licensing was never an issue for me I do know that it was a big factor for the majority, and these improvements will allow them to stay on the VMware platform without having to struggle with licensing concerns. I have to give some major kudos to VMware for listening to their community and making changes that will ultimately benefit both them and their customers, as this kind of interaction is becoming increasingly rare as time goes on.

VMware vSphere 5: Technologically Awesome, Financially Painful.

I make no secret of the fact that I’ve pretty much built my career around a single line of products, specifically those from VMware. Initially I simply used their workstation line of products to help me through university projects that required Linux to complete, but after one of my bosses caught wind of my “experience” with VMware’s products I was put on the fast track to becoming an expert in their technology. The timing couldn’t have been more perfect as virtualization then became a staple of every IT department I’ve had the pleasure of working with, and my experience with VMware ensured that my resume always floated around near the top when it came time to find a new position.

In this time I’ve had a fair bit of experience with their flagship product, now called vSphere. In essence it’s an operating system you install on a server that lets you run multiple, distinct operating system instances on top of it. Since IT departments always bought servers with more capacity than they needed, systems like vSphere meant they could use that excess capacity to run other, not so power hungry systems alongside them. It really was a game changer and from then on servers were usually bought with virtualization as the key purpose in mind rather than being for a specific system. VMware is still the leader in this sector, holding an estimated 80% of the market, and has arguably the most feature rich product suite available.

Yesterday saw the announcement of their latest product offering, vSphere 5. From a technological standpoint it’s very interesting, with many innovations that will put VMware even further ahead of their competition, at least technologically. Amongst the usual fanfare of bigger and better virtual machines and improvements to their current technologies vSphere 5 brings with it a whole bunch of new features aimed squarely at making vSphere the cloud platform for the future. Primarily these innovations are centred around automating certain tasks within the data centre, such as provisioning new servers and managing server loads right down to the disk level, which wasn’t available previously. Considering that I believe the future of cloud computing (at least for government organisations and large scale in house IT departments) is a hybrid public/private model, these improvements are a welcome change, even if I won’t be using them immediately.

The one place that VMware falls down and is (rightly) heavily criticized for is price. With the most basic licenses costing around $1000 per processor it’s not a cheap solution by any stretch of the imagination, especially if you want to take advantage of any of the advanced features. Still, since the licensing was per processor it meant that you could buy a dual processor server (each with, say, 6 cores) with oodles of RAM and still come out ahead of other virtualization solutions. However with vSphere 5 they’ve changed the way they do pricing significantly, to the point of destroying such a strategy (and those potential savings) along with it.

Licensing is still charged on a per-processor basis, but instead of having an upper limit on the amount of memory (256GB for most licenses, with Enterprise Plus giving you unlimited) you are now given a vRAM allocation per licence purchased. Depending on your licensing level you’ll get 24GB, 32GB or 48GB worth of vRAM which you’re allowed to allocate to virtual machines. Now for typical smaller servers this won’t pose much of a problem, as a dual proc, 48GB RAM server (which is very typical) would be covered easily by the cheapest licensing. However should you exceed even 96GB of RAM, which is very easy to do, that same server will then require additional licenses to be purchased in order to fully utilize the hardware. For smaller environments this has the potential to make VMware’s virtualization solution untenable, especially when you put it beside the almost free competitor of Hyper-V from Microsoft.
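
To make that concrete, here’s a hypothetical sketch of how the licence count works out under the new model (shown per host for simplicity, entitlement figures as above, helper name my own):

    // You need at least one licence per physical CPU, plus enough licences to cover
    // the vRAM you intend to allocate across your virtual machines.
    using System;

    class VSphere5Licensing
    {
        static int LicencesNeeded(int cpuSockets, int allocatedVRamGB, int entitlementPerLicenceGB)
        {
            int forVRam = (int)Math.Ceiling(allocatedVRamGB / (double)entitlementPerLicenceGB);
            return Math.Max(cpuSockets, forVRam);
        }

        static void Main()
        {
            // Dual proc, 48GB host on the top tier 48GB entitlement: the two socket licences cover it.
            Console.WriteLine(LicencesNeeded(2, 48, 48));  // 2

            // Load the same host up to 128GB of allocated vRAM and it now needs a third licence.
            Console.WriteLine(LicencesNeeded(2, 128, 48)); // 3
        }
    }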

The VMware user community has, of course, not reacted positively to this announcement. Whilst for many larger environments the problems won’t be so bad, as the vRAM allocation is done at the data centre level and not the server level (allowing over-allocated smaller servers to help out their beefier brethren), it does have the potential to hurt smaller environments, especially those who heavily invested in RAM-heavy, processor-poor servers. It’s also compounded by the fact that you’ll only have a short time to choose to upgrade for free, thus risking having to buy more licenses, or abstain and then later have to pay an upgrade fee. It’s enough for some to start looking into moving to the competition, which could cut into VMware’s market share drastically.

The reasoning behind these changes is simple: such pricing is much more favourable to a ubiquitous cloud environment than it is to the current industry norm for VMware deployments. VMware might be slightly ahead of the curve on this one however, as most customers are not ready to deploy their own internal clouds, with the vast majority of current cloud users being on hosted solutions. Additionally many common enterprise applications aren’t compatible with VMware’s cloud and thus lock end users out of realising the benefits of a private cloud. VMware might be choosing to bite the bullet now rather than later in the hopes it will spur movement onto their cloud platform at a later stage. Whether this strategy works or not remains to be seen, but current industry trends are pushing very hard towards a cloud based future.

I’m definitely looking forward to working with vSphere 5 and there are several features that will provide an immense amount of value to my current environment. The licensing change, whilst I feel it won’t be much of a problem for us, is cause for concern, and whilst I don’t believe VMware will budge on it any time soon I do know that the VMware community is an innovative lot and it won’t be long before they work out how to make the best of this licensing situation. Still it’s definitely an in for the competition, and whilst they might not have the technological edge they’re more than suitable for many environments.

Adapt or Die: Why I’m Keen on the Cloud.

Anyone who works in IT or a slightly related field will tell you that you’ve got to be constantly up to date with the latest technology lest you find yourself quickly obsoleted. Depending on what your technology platform of choice is, the time frame you have to work in can vary pretty wildly, but you’d be doing yourself (and your career) a favour by skilling up in either a new or different technology every 2 years or so. Due to the nature of my contracts though I’ve found myself learning completely new technologies at least every year, and it’s only in this past contract that I’ve come back full circle to the technology I initially made my career on, but that doesn’t mean the others I learnt in the interim haven’t helped immensely.

If I was honest though I couldn’t say that in the past I actively sought out new technologies to become familiar with. Usually I would start a new job based on the skills that I had from a previous engagement only to find that they really required something different. Being the adaptable sort I’d go ahead and skill myself up in that area, quickly becoming proficient enough to do the work they required. Since most of the places I worked in were smaller shops this worked quite well, since you’re always required to be a generalist in those situations. It’s only recently that I’ve turned my eyes towards the future to figure out where I should place my next career bet.

It was a conversation with a colleague of mine, whilst I was on a business trip with them overseas, that brought this up. He asked me what I thought were some of the IT trends that were going to take off in the coming years and I told him that I thought cloud based technologies were the way to go. At first he didn’t believe me, which was understandable since we work for a government agency and they don’t typically put any of their data in infrastructure they don’t own. I did manage to bring him around to the idea eventually though, thanks in part to my half decade of constant reskilling.

Way back when I was just starting out as a system administrator I was fortunate enough to be working with VMware’s technology stack, albeit in a strange incarnation of running their workstation product on a server. At the time I didn’t think it was anything revolutionary but as time went on I saw how much money was going to waste as many servers sat idle for the majority of their lives, burning power and providing little in return. Virtualization was a fundamental change to the way that back end infrastructure would be designed, built and maintained, and I haven’t encountered any mid to large sized organisation that isn’t using it in some form.

Cloud technologies then represent the evolution of this idea. I say cloud technologies and not “the cloud” deliberately, as whilst the idea of relying on external providers to do all the heavy lifting for you is extremely attractive it unfortunately doesn’t work for everyone, especially those who simply cannot outsource. Cloud technologies and principles however, like the idea of having massive pools of compute and storage resources that can be carved up dynamically, have the potential to change the way back end services are designed and provisioned. Most importantly they would decouple the solution design from the underlying infrastructure, meaning that neither would dictate the other. That in itself is enough to make most IT shops want to jump on the cloud bandwagon, and some are even doing so already.

It’s for that exact reason that I started developing on the Windows Azure platform and researching VMware’s vCloud solution. Whilst the consumer space is very much in love with the cloud and the benefits it provides, large scale IT is a much slower moving beast and it’s only just now coming around to the cloud idea. With the next version of Windows shaping up to be far more cloud focused than any of its predecessors it seems quite prudent for us IT administrators to start becoming familiar with the benefits cloud technology provides, lest we be left behind by those up and comers who are betting on this burgeoning platform.