The story of the majority of IT workers is eerily similar. Most get their start in a call centre, slaving away behind a headset troubleshooting various issues either for their end users or as part of a bigger help desk that services dozens of clients. Some are a little luckier, landing a job as the sole IT guy at a small company, which grants them all the creative freedom they could wish for but also shoulders them with the weight of being the be-all and end-all of their company’s IT infrastructure. No matter how we IT employees got our start, all of us eventually look towards getting certified in the technologies we deal with every day and, almost instantly after getting our first, become incredibly cynical about what they actually represent.
For many, the first certification they pursue will be something from Microsoft, since it’s almost guaranteed that every IT job you come across will utilize Microsoft products in some fashion. Whilst the value of the online/eLearning packages is debatable, there’s little question that you’ll likely learn something you didn’t already know, even if it’s completely esoteric and has no application in the real world. For anyone who’s spent a moderate amount of time with the product in question these exams aren’t particularly challenging, as most of them focus on regurgitating the Microsoft way of doing things. This, in turn, feeds into their greatest weakness: they favour rote memorization over higher order concepts and critical thinking (at least at the introductory/intermediate levels).
This has led to a gray market which is solely focused on passing these exams. Whilst there are some great resources which fall into this area (like CBT Nuggets) there are many, many more which skirt the boundaries of what’s appropriate. For anyone with a modicum of Google skills it’s not hard to track down copies of the exams themselves, many with the correct answers highlighted for your convenience. In the past this meant that you could go in knowing all the answers in advance, and whilst there’s been a lot of work done to combat this there are still many, many people carrying certifications thanks to these resources.
The industry term for such people is “paper certs”.
People with qualifications gained in this way are usually quite easy to spot, as rote memorization of the answers does not readily translate into real world knowledge of the product. However, for those looking to hire someone, this realisation often comes too late, as interview questions can only go so far to root these kinds of people out. Ultimately this makes those entry level certifications relatively worthless, as having one is no guarantee that you’ll be an effective employee. Strangely, however, employers still look to them as a positive sign and, stranger still, companies looking to hire on talent from outsourcers again look for these qualifications in the hopes that they will get someone with the skills they require.
I say this as someone who’s managed to skate through the majority of his career without the backing of certs. Initially I thought this was due to my degree, which, whilst tangentially related to IT, is strictly speaking an engineering one, but the surprise I’m met with when I mention that I’m an engineer by training has led me to believe that most of my former employers had no idea. Indeed, what usually ended up sealing the position for me was my past experience, even in positions where certain certs were stated as a requirement. Asking my new employers about it afterwards had them telling me that those position descriptions are usually a wish list of things they’d like, but it’s rare that anyone will actually have them all.
So we have this really weird situation where the majority of certifications are worthless, which is known by all parties involved, yet they are still used as a barrier to entry for some positions/opportunities, a barrier that can be wholly overridden if you have enough experience in that area. If that makes the whole process sound, for want of a better word, worthless, then you’d share the opinion of most of the IT workers that I know.
There are some exceptions to this rule, Cisco’s CCIE exams being chief among them, but the fact that the training and certification programs are run by the companies who develop the products is the main reason why the majority of them are like this. Whilst I’m not entirely sure that having an independent certification body would solve all the issues (indeed some of those non-vendor specific certs are just as bad), it would at least remove the financial driver to churn as many people through the courses/exams as they currently do. Whilst I abhor artificial scarcity, one of the places it actually helps is in qualifications, but that’d only be the first few tentative steps towards solving this issue.
If you’ve worked in the IT industry it’s safe to assume that you’re familiar with ITIL, or at least however it’s managed to manifest itself within your organisation. It’s probably one of the longest lasting ideals in IT today, having been around for a good 20+ years in its current form, surprising for an industry that considers anything over 3 years old archaic. Indeed, anyone who’s been involved in implementing, maintaining or attempting to change an ITIL based process will likely call it archaic anyway, and whilst I’m inclined to agree with them I think the problems stem more from the attitudes around these processes than from the actual processes themselves.
Change management is by far the best example of this. The idea behind it is solid: any major change to a system has to go through a review process that determines what impacts the change has and demands that certain requirements be met before it can proceed. In an ideal world these are the kinds of things you would do regardless of whether an external process required you to. However, the nature of IT tends towards many admins starting off in areas where such processes aren’t required, and thus, when they move on to bigger and better environments, processes like these are needed to make sure they don’t unintentionally wreak havoc on larger systems. Yet change management is routinely seen as a barrier to getting actual work done, and in many cases it is.
This is where the attitude problems start to occur. ITIL based processes (no one should be using pure ITIL, that’s crazy talk) should not be a hindrance to getting work done, and the second they start becoming so is when they lose their value. Indeed, the reason behind implementing an ITIL process like change management is to extract more value out of the work than is currently being derived, not to impede the work being done. Essentially it should only be an extension of work that would be undertaken in the first place, and if it isn’t then you need to look either at your implementation of the change process or at why your current IT practices aren’t working with it.
Predominantly I think this comes from being far too strict with these kinds of processes, with the prevailing attitude in industry being that deviation from them will somehow lead to a downward spiral of catastrophes from which there is no escape. If these ITIL processes are being routinely circumvented, or if the amount of work required to complete the process outweighs the actual work itself, then it’s not the people who are to blame, it’s the process itself. Realistically, instead of trying to mold people to the process, as I’ve seen done countless times over, the process should be reworked to suit the people. Whilst this is far more difficult than simply sending people on ITIL courses, the benefits will far outweigh the costs and you’ll probably find that more people stick to the process rather than attempt to circumvent it. Indeed, much of the process revolution that has happened in the past decade has been due to these people focused, rather than process focused, ideals.
Whilst ITIL might be getting a little long in the tooth, many of the ideals it touches on are fundamental in nature and persist beyond changes in technology. Like many ideas, however, their application has been less than ideal, with the core idea of turning IT into a repeatable, dependable process usurped by laborious processes that add no value. I believe changing the current industry view from ITIL based processes to people focused ones that utilize ITIL fundamentals would trigger a major shift in the way corporate IT entities do business.
A shift that I believe would be all for the better.
I often find myself trusted with doing things I’ve never done before thanks to my history of delivering on them, but I always make people well aware of my inexperience in such areas before I pursue them. I do this because I know I’m not the greatest engineer/system administrator/coder around, but I do know that, given enough time, I can deliver something that’s exactly what they required. It’s actually an unfortunate manifestation of imposter syndrome whereby I’m constantly self-assessing my own skills, wondering if anything I’ve done was really that good or simply the product of all the people I worked with. Of course, I’ve worked with people who know they are the best at what they do, even if the reality doesn’t quite match up to their own self-image.
Typically these kinds of people take one of two forms, the first of which I’ll call The Guns. Guns are awesome people: they know everything there is to know about their job and they’re incredibly helpful, a real treasure for the organisation. I’m happy to say that I’ve encountered more of these than the second type, and they’re in no small part responsible for a lot of the things that I know today. They are usually vastly under-appreciated for their talents, however, since they enjoy what they do to such a great extent that they don’t attempt to upset the status quo and instead toil away in relative obscurity. These are the kinds of people I have infinite amounts of time for, and they’re usually the ones I turn to when I’m looking for help.
Then there’s the flip side: the Alpha Nerds.
These guys are typically responsible for some part of a larger system and to their credit they know it inside and out. I’d say on average about half of them got to that level of knowledge by simply being there for an inordinate amount of time and through that end up being highly valuable because of their vast amount of corporate knowledge. However the problem with these guys, as opposed to The Guns, is that they know this and use it to their advantage in almost every opportunity they get. Simple change to their system? Be prepared to do a whole bunch of additional work for them before it’ll happen. A problem that you’re responsible for but is out of your control due to other arrangements? They’ll drill you on it in order to reinforce their status with everyone else. I can’t tell you how detrimental these people are to the organisation even if their system knowledge and expertise appears invaluable.
Of course this delineation of Guns and Alpha Nerds isn’t a hard and fast line, there’s a wide spectrum between the two extremes, but there is an inflexion point where a Gun starts to turn Alpha and the benefits to the organisation start to tank. Indeed such a thing happened to me during my failed university project, where I failed to notice that a Gun was turning Alpha on me, burning them out and leaving the project in a state where no one else could work on it even if they wanted to. Whilst the blame still rests solely on my shoulders for failing to recognise that, it still highlights how detrimental such behaviour can be when technical expertise isn’t coupled with a little bit of humility.
Indeed, if your business is built on the talents of such people then it’s usually to your benefit to remove Alpha Nerds from your team, even if they are among the most talented people in it. This is especially true if you’re trying to invest in developing people professionally, as typically Alphas will end up being the de-facto contacts for the biggest challenges, stifling the skill growth of other members of the team. Whilst they might be worth 2.5 times your average performer, you’re likely limiting the chances of the team being more productive than they currently are, quite possibly to the tune of much more than what the Alpha is capable of delivering.
Like I said before though I’m glad these kinds of people tend towards being less common than their Gun counterparts. I believe this is because during the nascent stages of someone’s career you’re likely to run up against an Alpha and see the detrimental impacts they have. Knowing that you’re then much more likely to work against becoming like them and should you become an expert in your chosen area you’ll make a point of being approachable. Some people fail to do that however and proceed to make our lives a lot more difficult than they should be but I’m sure this isn’t unique to IT and is innate to organisations both big and small.
I’ve been working in public sector IT for the better part of 7 years now, starting off as a lowly help desk operator and working my way up through the ranks to the senior technical consultant position I find myself in today. I’m not telling you this to brag (indeed I don’t believe I’m completely unique in this regard) rather I want to impress upon you the level of familiarity I have when it comes to government IT systems. I’ve worked in departments ranging from mere hundreds of employees to the biggest public service organisation that exists within Australia. So when I say Tony Abbott’s office isn’t giving us the full story on this whole Peter Slipper incident and the subsequent time zone argument they used to defend their position you’ll know that I’m not just making stuff up.
For reference, his whole argument has been thoroughly debunked by Sortius in his brilliant 10 hours of bullshit, where he shows that the document has had its date modified to produce a 10 hour discrepancy. Back when it was first published he was just going off public information, but recent updates to the post have seen him get his hands on the original press release with an unmodified date on it, showing that the press release was indeed drafted the night before. You’d think that’d be the last of it (and indeed if it was I would’ve simply tweeted it again), however the Department of Parliamentary Services (DPS) has gone on record saying that they have identified a problem with the time stamps on the files in question and have backed up Abbott’s side of the story.
Reporters have since been granted access to the PC and shown similar files which seem to suffer the same Zulu time zone problem that apparently plagues the press release in question. What wasn’t investigated was whether files created in the way that Sortius has shown suffer from the same issue, i.e. is there an on-going technical issue with that particular computer, or are those files the result of the same kind of tampering that the press release appears to have undergone? That would go some way to explaining what’s going on here, but it doesn’t explain why the time stamp shows a Zulu time zone, which Microsoft Word isn’t capable of producing.
Indeed doing a little research for myself shows that PDFs created from Microsoft Word’s PDF creator plugin will always show created/modified dates that are more or less identical and reflect the current time it was created (not the time when the original word document was created). If we’re to believe that there was some problem with the PC that caused the Z to appear it follows that it should have been the same for both the created date and the modified date. The fact that there’s a discrepancy gives credence to the idea that the PDF was first created using the Word PDF exporter and then modified afterwards using another program. The original document, the one shown in the final update from Sortius, shows some differences in created/modified times however it appears that was created using the PDFMaker Plugin for Word and then later modified in Adobe Distiller (not the same way as the metadata in the modified press release indicates).
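The kind of check described above is easy to reproduce yourself. The sketch below (my own illustration, not anything from Sortius’ analysis; the example date strings are made up) parses the raw date strings found in PDF metadata, where a trailing `Z` marks a Zulu/UTC stamp and a `+10'00'`-style suffix marks a local offset, and compares the created and modified stamps:

```python
import re
from datetime import datetime, timezone, timedelta

# PDF metadata dates look like "D:20121015213000+10'00'" (local offset)
# or "D:20121016073000Z" (Zulu/UTC). This is a simplified parser for
# illustration; real-world strings can omit trailing fields.
PDF_DATE = re.compile(r"D:(?P<ts>\d{14})(?P<tz>Z|[+-]\d{2}'\d{2}')?")

def parse_pdf_date(raw):
    """Parse a PDF date string, returning (datetime, is_zulu)."""
    m = PDF_DATE.match(raw)
    if not m:
        raise ValueError(f"unrecognised PDF date: {raw!r}")
    dt = datetime.strptime(m.group("ts"), "%Y%m%d%H%M%S")
    tz = m.group("tz")
    if tz == "Z":
        return dt.replace(tzinfo=timezone.utc), True
    if tz:
        sign = 1 if tz[0] == "+" else -1
        hours, minutes = int(tz[1:3]), int(tz[4:6])
        offset = timezone(sign * timedelta(hours=hours, minutes=minutes))
        return dt.replace(tzinfo=offset), False
    return dt, False  # no timezone information at all

# A document exported once by Word's PDF plugin should carry near-identical
# created/modified stamps; a later edit in another tool leaves a gap, and
# mismatched timezone styles between the two stamps are a red flag.
created, created_zulu = parse_pdf_date("D:20121015213000+10'00'")
modified, modified_zulu = parse_pdf_date("D:20121016073000Z")
print(modified - created)            # gap between the two stamps
print(created_zulu, modified_zulu)   # one local, one Zulu
```

Running something like this against a suspect file (the raw strings can be pulled out with any PDF library, or even by grepping the file for `/CreationDate`) makes the created/modified discrepancy plain to see.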
Now this doesn’t necessarily mean that Abbott was aware of this information, but it does imply that someone working for him was. In attempting to track down just who created the PDF I came across two probable candidates (one person who I think works at DPS and a Brisbane based ghost writer) but I wasn’t able to verify it was actually one or the other. Whoever did write it would be able to provide some insight into this whole thing, but it’s unlikely that they’ll ever come forward, especially considering that they would’ve been working for Abbott at the time (and may still be).
All of this points in the direction that something is going on over there and that further investigation is definitely warranted. I know there’s several other things I could do to either verify or debunk this theory completely should I have more open access to said system but I doubt we’ll get anything more than the guided tour that was given to the ABC journalists already. If I still had people I knew working at DPS you can be assured that I’d get the full story from them but alas, I came up dry on this one. Sortius is still on the case though and I’m very interested to see what DPS has to say about the current discrepancies and will keep you posted on the progress.
Canberra is a weird little microcosm: it exists purely because the two largest cities in Australia couldn’t agree on which should be the capital of the country and instead decided to meet, almost literally, in the middle. Much like Washington DC this means that all of the national level government agencies are concentrated in this area, meaning the vast majority of the 360,000 or so population work either directly or indirectly for the government. This concentration of services in a small area has distorted many of the markets that exist in a typical city centre, probably most notable of them all being the jobs market.
To put it in perspective there are a few figures that will help me illustrate my point more clearly. For starters, the average salary of a Canberran worker is much higher than the Australian average, even beating out commodity rich states which are still reaping the benefits of the mining boom. Additionally, Canberra’s unemployment rate is among the lowest in Australia, hovering around a staggering 3.7%. This means that the labour market here is somewhat distorted, and that’s especially true for the IT industry. However, like the manufacturing industry in the USA, there are still many who will bellyache endlessly about the lack of qualified people available to fill the needs of even this small city.
The problem is, as it always has been, simple economics.
I spent a good chunk of my career working directly for the public service, jumping straight out of university into a decent paying job that I figured I’d be in for quite a while. However, it didn’t take long for me to realise that there was another market out there for people with my exact skills, one that was offering a substantial amount more to do the same work. Like any rational person I jumped at this opportunity and have been continuing to do so for the past 6 years. However, I still see positions similar to mine advertised with salaries attached that are, to be fair, embarrassing for anyone with those kinds of skills to take when they can get so much more for doing the same amount of work. This has led to a certain amount of tension between Canberra’s IT workers and the government that wishes to employ them, with many agencies referring to this as a skills shortage.
The schism is partly due to the double faceted nature of the Canberran IT market. On the one hand the government will pay you a certain amount if you’re permanently employed with them, and another if you’re hired as an outside contractor. However, these positions are, for the most part, identical, except that one pays an extraordinary amount more at the cost of some of the benefits (flex time, sick/annual leave, etc.). It follows that many IT workers are savvy enough to take advantage of this and plan their lives around the lack of those benefits accordingly, and thus will never even consider the lower paid option because it just doesn’t make sense for them.
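The trade-off is easy enough to put rough numbers on. The sketch below is purely illustrative: the salary, daily rate, superannuation percentage and leave figures are all hypothetical placeholders, not actual APS or contract figures, but the shape of the comparison is the point:

```python
# A rough back-of-the-envelope comparison (all figures hypothetical):
# a permanent salary includes paid leave and superannuation, while a
# contractor's daily rate has to cover unpaid leave and downtime.

def permanent_effective_daily(salary, super_pct=9.0, paid_leave_days=30):
    """Total package divided by the days actually worked."""
    working_days = 260 - paid_leave_days   # ~52 weeks x 5 days, minus leave
    package = salary * (1 + super_pct / 100)
    return package / working_days

def contractor_effective_daily(daily_rate, unpaid_days_off=30):
    """Annual take averaged over a full working year, leave self-funded."""
    billable_days = 260 - unpaid_days_off
    return daily_rate * billable_days / 260

perm = permanent_effective_daily(85_000)    # placeholder salary
contract = contractor_effective_daily(800)  # placeholder contract rate
print(f"permanent:  ${perm:,.0f} per day worked")
print(f"contractor: ${contract:,.0f} per day, averaged over the year")
```

Even after pricing in the forgone leave, a rate like the one above comes out well ahead of the permanent package, which is exactly why the savvier workers never look back.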
This hasn’t stopped the government from trying, however. The Gershon report had been the main driver behind this, although its effects have been waning for the past 2 years, but now it’s the much more general cost reductions that are coming in as part of the overall budget goal of delivering a surplus. The problem here, as I mentioned in the post I just linked, is that once you’re above a certain pay grade in the public service you’re expected to perform some kind of management function, which doesn’t really align with the requirements of IT specialists. Considering that even outside of Canberra’s arguably inflated jobs market such specialists are able to make far more than the highest non-managerial role in the government pays, it comes as no surprise that the contractor market flourished the way it did, and why the implementation of the Gershon report did nothing but decimate the government’s IT capability.
Simply put, the skills/labour shortage that’s been experienced in many places, not just Canberra, is primarily due to a disconnect between the skills required and the amount organisations are willing to pay for those skills. The motivation behind lower wage costs is obvious, but the outcome should not be unexpected when you try to drive the price down while the supply remains the same. Indeed, many of the complaints about a labour shortage are quickly followed by calls for incentives and education in the areas where there’s a skills shortage, rather than considering the possibility that people are simply becoming more market savvy and are not willing to put up with lower wages when they know they can do better elsewhere.
I had personally only believed that this applied to the Canberra IT industry but in doing the research for this post it seems like it applies far more broadly than I had first anticipated. In all honesty this does nothing but hurt the industry as it only helps to increase tensions between employers and employees when there’s a known disconnect between the employee’s market value and their compensation. I’d put the challenge to most employers to see how many good, skilled applicants they get if they start paying better rates as I’d hazard a guess their hit rate would vastly improve.
IT is one of the few services that all companies require to compete in today’s markets. IT support, then, is one of those rare industries where jobs are always to be had, even for those in entry level positions. Of course, this assumes that you put in the required effort to stay current, as letting your skills lapse for 2 or more years will likely leave you a generation of technology behind, making employment difficult. This is of course due to the IT industry constantly evolving and changing itself, and much like other industries certain jobs can be made completely redundant by technological advancements.
For the past couple of decades though, the types of jobs you expect to see in IT support have remained roughly the same, save for the specializations brought on by technology. As more and more enterprises came online and technology continued to develop, a multitude of specializations became available, enabling the then-generic “IT guys” to become highly skilled workers in their targeted niche. I should know: just on a decade ago I was one of those generic IT support guys, and today I’m considered a specialist when it comes to hardware and virtualization. Back when I started my career the latter of those two skills wasn’t even in the vernacular of the IT community, let alone a viable career path.
Like any skilled position though specialists aren’t exactly cheap, especially for small to medium enterprises (SMEs). This leads to an entire second industry of work-for-hire specialists (usually under the term “consultants”) and companies looking to take the pain out of utilizing the technology without having to pay for the expertise to come in house. This isn’t really a surprise (any skilled industry will develop these secondary markets) but with IT there’s a lot more opportunity to automate and leverage economies of scale, more so than any other industry.
This is where Cloud Computing comes in.
The central idea behind cloud computing is that an application can be developed to run on a platform which can dynamically deliver resources to it as required. The idea is quite simple but the execution of it is extraordinarily complicated requiring vast levels of automation and streamlining of processes. It’s just an engineering problem however, one that’s been surmounted by several companies and used to great effect by many other companies who have little wish to maintain their own infrastructure. In essence this is just outsourcing taken to the next level, but following this trend to its logical conclusion leads to some interesting (and, if you’re an IT support worker, troubling) predictions.
For SMEs the cost of running their own local infrastructure, as well as the support staff that goes along with it, can be one of their largest cost centres. Cloud computing and SaaS offer the opportunity for SMEs to eliminate much of that cost whilst keeping the same level of functionality, giving them more capital to either reinvest in the business or bolster their profit margins. You would think then that this would just be a relocation of jobs from one place to another, but cloud services utilize far fewer staff due to the economies of scale they employ, leaving fewer jobs available for those with skills in those areas.
In essence cloud computing eliminates the need for the bulk of skilled jobs in the IT industry. There will still be need for most of the entry level jobs that cater to regular desktop users but the back end infrastructure could easily be handled by another company. There’s nothing fundamentally wrong with this, pushing back against such innovation never succeeds, but it does call into question those jobs that these IT admins currently hold and where their future lies.
Outside of high tech and recently established businesses the adoption rate of cloud services hasn’t been that high. Whilst many of the fundamentals of the cloud paradigm (virtualization, on-demand resourcing, infrastructure agnostic frameworks) have found their way into the datacenter the next logical step, migrating those same services into the cloud, hasn’t occurred. Primarily I believe this is due to the lack of trust and control in the services as well as companies not wanting to write off the large investments they have in infrastructure. This will change over time of course, especially as that infrastructure begins to age.
For what it’s worth, I still believe that the ultimate end goal will be some kind of hybrid solution, especially for governments and the like. Cloud providers, whilst being very good at what they do, simply can’t satisfy the needs of all customers. It is then highly likely that many companies will outsource routine things to the cloud (such as email, word processing, etc.) but still rely on in house expertise for the custom applications that aren’t, and probably never will be, available in the cloud. Cloud computing then will probably see a shift in some areas of specialization, but for the most part I believe us IT support guys won’t have any trouble finding work.
We’re still in the very early days of cloud computing and its effects on the industry are still hard to judge. There’s no doubt that cloud computing has the potential to fundamentally change the way the world does IT services and whatever happens those of us in IT support will have to change to accommodate it. Whether that comes in the form of reskilling, training or looking for a job in a different industry is yet to be determined but suffice to say that the next decade will see some radical changes in the way businesses approach their IT infrastructure.
I’m a big fan of technology that makes users happy. As an administrator anything that keeps users satisfied and working productively means more time for me to make the environment even better for them. It’s a great positive feedback loop that builds on itself continually, leading to an environment that’s stable, cutting edge and just plain fun to use and administer. Of course the picture I’ve just painted is something of an IT administrator nirvana, a great dream that is rarely achieved even by those who have unlimited freedom with the budgets to match. That doesn’t mean we shouldn’t try to achieve it however and I’ll be damned if I haven’t tried at every place I’ve ever worked at.
The one thing that always comes up is “Why don’t we use Macs in the office? They’re so easy to use!”. Indeed, my two month long foray into the world of OSX and all things Mac showed that it was an easy operating system to pick up, and I could easily see why so many people use it as their home operating system. Hell, at my current workplace I can count several long time IT geeks who’ve switched their entire household over to solely Apple gear because it just works, and as anyone who works in IT will tell you, the last thing you want to be doing at home is fixing up PCs.
You’d then think that Macs would be quite prevalent in the modern workspace, what with their ease of use and popularity amongst the unwashed masses of users. Whilst their usage in the enterprise is growing considerably they’re still hovering just under 3% market share, or about the same amount of market share that Windows Phone 7 has in the smart phone space. That seems pretty low but it’s in line with world PC figures with Apple being somewhere in the realms of 5% or so. Still there’s a discrepancy there so the question still remains as to why Macs aren’t seen more often in the work place.
The answer is simple, Apple simply doesn’t care about the enterprise space.
I had my first experience with Apple’s enterprise offerings very early on in my career, way back when I used to work for the National Archives of Australia. As part of the Digital Preservation Project we had a small data centre that housed two similar yet completely different systems. They were designed in such a way that, should a catastrophic virus wipe out the entire data store on one, the replica on the other would be unaffected, since it was built from completely different software and hardware. One of these systems utilized a few shelves of Apple’s Xserve RAID Array storage. In essence they were just a big lump of direct attached storage, and for that purpose they worked quite well. That was until we tried to do anything with it.
Initially I just wanted to provision some of the storage that wasn’t being used. Whilst I was able to do some of the required actions through the web UI the unfortunate problem was that the advanced features required installing the Xserve tools on a Mac computer. Said computer also had to have a fibre channel card installed, something of a rarity to find in a desktop PC. It didn’t stop there either, we also tried to get Xsan installed (so it would be, you know, an actual SAN) only to find out that we’d need to buy yet more Apple hardware in order to be able to use it. I left long before I got too far down that rabbit hole and haven’t really touched Apple enterprise gear since.
You could write that off as a bad experience, but Apple has continued to show that the enterprise market is simply not their concern. No less than two years after I last touched an Xserve RAID Array, Apple cancelled production of them, instead offering up a rebadged solution from Promise. Two years after that Apple discontinued production of its Xserve servers and lined up their Mac Pros as a replacement. As any administrator will tell you, the replacements are anything but, and since most of their enterprise software hasn’t received a proper update in years (Xsan’s last major release was over 3 years ago) no one can say that Apple has the enterprise in mind.
It’s not just their enterprise level gear that’s failing in corporate environments. Whilst OSX is easy to use it’s an absolute nightmare to administer on anything larger than a dozen or so PCs, as the available management tools simply don’t support it. Whilst Macs do integrate with Active Directory there are a couple of limitations that don’t exist for Windows PCs on the same infrastructure. There’s also the fact that OSX can’t be virtualized unless it runs on Apple hardware, which kills it off as a virtualization candidate. You might think that’s a small nuisance but it means you can’t do a virtual desktop solution using OSX (since you can’t buy the hardware at scale to make it worthwhile) and you can’t utilize any of your current investment in virtual infrastructure to run additional OSX servers.
If you still have any doubts that Apple is primarily a hardware company then I’m not sure what planet you’re on.
For what it’s worth Apple hasn’t been harmed by ignoring the enterprise, as its consumer electronics business has more than made up for the losses they’ve incurred. Still, I often find users complaining about how their work computers can’t be more like their Macs at home, ignorant of the fact that Apple in the enterprise would be an absolutely atrocious experience. Indeed it’s looking to get worse as Apple moves towards iPhone-izing their entire product range including, unfortunately, OSX. I doubt Apple will ever change direction on this, which is a real shame as OSX is the only serious competitor to Microsoft’s Windows.
It’s no secret that I owe a large part of my IT career to virtualization. It was a combination of luck, timing and a willingness to jump into the unknown that led me down the VMware path: my first workplace used VMware’s products, which set the stage for every job thereafter seeing my experience and latching on to it with a crack-junkie like desire. Over the years I’ve become intimately familiar with many virtualization solutions, but inevitably I find myself coming back to VMware because, simply put, they’re the market leaders and pretty much everyone who can afford to use them does so. You can imagine then that I was somewhat excited when I saw the release of vSphere 5, and I’ve been putting it through its paces over the past couple of weeks.
On the surface ESXi 5 and vSphere 5 look almost identical to their predecessors. ESXi 5 is really only distinguishable from 4 thanks to the slightly different layout and changed font, whilst vSphere 5 is exactly the same save for some new icons and additional links to new features. I guess with any new product version I’ve come to expect a UI revamp, even if it adds nothing to the end product, so the fact that VMware decided to stick with their current UI came as somewhat of a surprise, but I can’t really fault them for doing so. The real meat of vSphere 5 is under the hood, and from my initial testing there have been some major improvements.
vSphere 5 brings with it Virtual Machine Version 8 which, amongst the usual more CPUs/more memory upgrades, adds support for 3D accelerated graphics, UEFI for the BIOS (which technically means it can boot OSX Lion, although that will never happen¹) and USB 3.0. There are also a few new options available when creating a new virtual machine, like the ability to add virtual sockets (not just virtual cores) and the choice between eager and lazy zeroed disks.
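The eager/lazy zeroed trade-off can be sketched with a toy model (the class names here are mine, purely illustrative; the real zeroing happens inside the hypervisor when the VMDK is provisioned):

```python
# Toy model of the two thick-provisioning modes. Eager zeroing pays the
# full cost at creation time; lazy zeroing defers it to each block's
# first write.

class EagerZeroedDisk:
    """Zeroes every block at creation: slow create, no first-write penalty."""
    def __init__(self, blocks):
        self.data = [0] * blocks              # whole disk zeroed up front

    def write(self, block, value):
        self.data[block] = value              # single operation per write


class LazyZeroedDisk:
    """Defers zeroing: fast create, extra work on a block's first write."""
    def __init__(self, blocks):
        self.data = [None] * blocks           # space reserved, not yet zeroed
        self.zeroed = set()

    def write(self, block, value):
        if block not in self.zeroed:          # first touch: zero it first
            self.data[block] = 0
            self.zeroed.add(block)
        self.data[block] = value
```

This is why eager zeroed disks take noticeably longer to create but avoid the small hit a lazy zeroed disk takes the first time each block is written.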
The one overall impression that vSphere 5 has left on me though is that it’s fast, like really fast. The UI is much more responsive, operations that used to take minutes are now done in seconds and in the few performance tests we’ve done ESXi 5 seems to be consistently faster than its 4.1 Update 1 counterpart. According to my sources close to the matter this is because ESXi 5 is all new code from the ground up, enabling them to enhance performance significantly. From my first impressions I’d say they’ve succeeded, and I’m looking forward to seeing how it handles real production loads in the very near future.
What really amazed me was that a lot of the code I had developed for vSphere 4 was 100% compatible with vSphere 5. I had been dreading having to rewrite the nearly 2,000 lines of code I had developed for the build system in order to get ESXi 5 into our environment, but every command worked without a hitch, showing that VMware’s dedication to backwards compatibility is extremely good, approaching that of the king of compatibility, Microsoft. Indeed those looking to migrate to vSphere 5 don’t have much to worry about, as pretty much every feature of the previous version is supported and migrating to the newer platform is quite painless.
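A hypothetical sketch of why build code like this survives an upgrade: scripts that detect capabilities rather than hard-coding a product version keep working when the platform moves from 4.1 to 5. None of these names are real VMware APIs; they’re stand-ins for illustration only.

```python
# Illustrative only: capability detection versus version pinning.

class HostClient:
    """Stand-in for a connection to a hypervisor host."""
    def __init__(self, version, capabilities):
        self.version = version
        self.capabilities = set(capabilities)

    def supports(self, feature):
        return feature in self.capabilities


def provision_disk(host):
    # Feature-detect instead of testing `host.version == "4.1"`; the
    # same code path then runs unchanged against a newer release.
    if host.supports("eager_zeroed"):
        return "eager-zeroed disk"
    return "lazy-zeroed disk"
```

The same `provision_disk` call behaves identically against a 4.1 host and a 5.0 host that both expose the feature, which is exactly the property that spares a build system a rewrite.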
I’ve yet to have a chance to fiddle with some of the new features (like the storage appliance, which looks incredibly cool) but overall my first impressions of vSphere 5 are quite good, along the lines of what I’ve come to expect from VMware. I haven’t run into any major gotchas yet, but I’ve only had a couple of VMs running in an isolated vSphere instance so my sample size is rather limited. I’m sure once I start throwing some real applications at it I’ll run into some more interesting problems, but suffice to say VMware has done well with this release and I can see vSphere 5 making its home in all IT departments where VMware is already deployed.
¹The stipulation for all Apple products is that they run on Apple hardware, including virtualized instances. Since the only things you can buy with OSX Server installed on them are Mac Mini Servers or Mac Pros, neither of which are on the Hardware Compatibility List, running your own virtualized copies of OSX Server (legitimately) simply can’t happen. Yet I still get looks of amazement when I tell people Apple is a hardware company. Figures.
It was just under 2 years ago that I wrote my first (and only) post on smartphone virtualization, approaching it with the enthusiasm that I do most cool new technologies. At the time I guessed that VMware would eventually look to integrate this idea with some of their other products, in essence turning users’ phones into dumb terminals so that IT administrators could have more control over them. However the exact usefulness was still not clear, as at the time most smartphones were only just capable of running a single instance, let alone another one with all the virtualization trimmings that’d inevitably slow it down. Android was also something of a small time player back then, having only 5% of the market (similar to Windows Phone 7 at the same stage in its life, funnily enough), making this a curiosity more than anything else.
Of course a lot has changed in the time between that post and now. The then market leader, RIM, is now struggling with single digit market share when it used to make up almost half the market. Android has succeeded in becoming the most popular platform, surpassing Apple, which held the crown for many years prior. Smartphones have also become wildly more powerful, with many of them touting dual cores, oodles of RAM and screen resolutions that would make my teenage self green with envy. With all this in mind the idea of running some kind of virtualized environment on a smartphone doesn’t seem all that ludicrous any more.
Increasingly IT departments are dealing with users who want to integrate their mobile devices with their workspace in lieu of using a separate, work specific device. Much of this pressure came initially from the iPhone, with higher ups wondering why they couldn’t use their devices to access work related data. For us admin types the reasons were obvious: it’s an unapproved, untested device which by rights has no business being on the network. However the pressure to capitulate to their demands was usually quite high and workarounds were sought. Over the years these have taken many forms, but the best answer would appear to lie within the world of smartphone virtualization.
VMware have been hard at work creating a full blown virtualization system for Android that allows a user to have a single device containing both their personal handset and a secure, work approved environment. In essence an application lets them switch between the two, allowing the user to have whatever handset they want whilst still allowing IT administrators to create a standard, secure work environment. Android is currently the only platform that supports this, largely thanks to its open source status, although there are rumours of it coming to the iOS line of devices as well.
It doesn’t stop there either. I predicted that VMware would eventually integrate their smartphone virtualization technology into their View product, mostly so that the phones would just end up being dumb terminals. This hasn’t happened exactly, but VMware did go ahead and imbue their View product with the ability to present full blown workstations to tablets and smartphones through a secure virtual machine running on said devices. This means you could potentially have your entire workforce running off smartphones with docking stations, enabling users to take their work environment with them wherever they go. It’s shockingly close to Microsoft’s Three Screens idea, and with Google announcing that Android apps are now portable to Google TV devices you’d be forgiven for thinking they outright copied it.
For most regular users these kinds of developments don’t mean a whole lot, but they signal the beginning of the convergence of many disparate experiences into a single unified one. Whilst I’m not going to say that any one platform will eventually kill off the others (each of the three screens has a distinct purpose), we will see a convergence in the capabilities of each platform, enabling users to do all the same tasks no matter which they are using. Microsoft and VMware are approaching this idea from two very different directions, the former unifying the development platform and the latter abstracting it away, so it will be interesting to see which approach wins out or whether they too eventually converge.
If there’s one thing us system administrators loathe more than dealing with users, it’s dealing with users who have a bit of IT smarts about them. On the surface they’re the perfect user, able to articulate their problems and requirements aptly so we spend considerably less time fulfilling their requests. However, more often than not they’re also the ones attempting to circumvent safeguards and policies in order to get a system to work the way they want it to. They’re also the ones who will push for much more radical changes to systems, since they will have already experimented with such things at home and will want to replicate that in their work environment.
Collectively such people are known as shadow IT departments.
Such departments are a recent phenomenon, with a lot of credit (or blame) being levelled at those of my generation, the first to grow up as digital natives. Since the vast majority of us have used computers and the Internet from an early age, we’ve come to expect certain things to be available to us and don’t appreciate it when they are taken away. This doesn’t gel too well with the corporate world of IT, where lock downs and restrictions are the norm even if they’re for the user’s benefit, and thus they seek to circumvent such restrictions, causing endless headaches for their system administrators. Still they’re a powerful force for driving change in the workplace, enough so that I believe these shadow IT departments are shaping the future of corporate environments and the technologies that support them.
Most recently I’ve seen this occurring with mobility solutions, a fancy way of saying tablets and phones that users want to use on the corporate network. Now it’s hard to argue with a user that doing such a thing isn’t technically feasible, but in the corporate IT world bringing uncontrolled devices onto your network is akin to throwing a cat into a chicken coop (i.e. no one but the cat benefits and you’re left with an awful mess to clean up). Still, all it takes is one of the higher ups to request such a thing for it to become a mandate for the IT department to implement. Unfortunately for us IT guys the technology du jour doesn’t lend itself well to being tightly controlled by a central authority, so most resort to hacks and workarounds in order to make it work as required.
As the old saying goes, the unreasonable person is the one who changes the world to suit themselves, and so much of the change in the corporate IT world is being made by these shadow IT departments. At the head of these movements are my fellow Gen Y and Zers, who are struggling with the idea that what they do at home can’t be replicated at work:
“The big challenge for the enterprise space is that people will expect to bring their own devices and connect in to the office networks and systems,” Henderson said. “That change is probably coming a lot quicker than just five years’ time. I think it will be a lot sooner than that.”
Dr Keiichi Nakata, reader in social informatics at Henley Business School at the University of Reading, who was also at the roundtable, said the university has heard feedback from students who have met companies for interviews and been “very surprised” that technologies they use every day are not being utilised inside those businesses.
It’s true that the corporate IT world is a slow moving beast compared to the fast paced consumer market, and companies aren’t usually willing to wear the risk of adopting new technologies until they’ve proven themselves. Right now any administrator being asked to do something like “bring your own computer” will likely tell you it’s impossible, lest you open yourselves up to being breached. However technologies like virtualization are making it possible to create a standard work environment that runs practically everywhere, and I think this is where a bring your own device world becomes possible.
Of course this shifts the problem from the IT department to the virtualization product developer, but companies like VMware and Citrix have both already demonstrated the ability to run full virtual desktop environments on smartphone level hardware. Using such technologies, users would be able to bring in almost any device, have it loaded with a secure working environment, and complete the work they are required to do on the device they choose. This would also allow IT departments to become a lot more flexible with their offerings, since they wouldn’t have to spend so much time supporting the underlying infrastructure. Of course there are many other issues to consider (like asset life cycles, platform vetting, etc.) but a future where your work environment is independent of the hardware is not so far fetched after all.
The disconnect between what’s possible with IT and what’s the norm in corporate environments has been one of those frustrating curiosities that has plagued my IT career. Of course I understand that the latest isn’t always the greatest, especially if you’re looking for stability, but the lack of innovation in the corporate space has always been one of my pet peeves. With more and more digital natives joining the ranks, however, the future looks bright for a corporate IT world that’s not too unlike the consumer one we’re all used to, possibly even one that innovates ahead of it.