Back in 1996 one of the incoming Howard government’s core promises was to reduce its expenditure dramatically, particularly on IT. The resulting policy was dubbed the IT Initiative and promised to find some $1 billion in savings in the following years, primarily by outsourcing many functions to the private sector. The thinking was that the private sector, well versed in projects of the government’s scale and beyond, would be able to perform the same functions at a far lower cost than permanent public servants could. The next decade saw many companies rush in to acquire these lucrative IT outsourcing arrangements but the results, both in terms of services delivered and apparent savings, never matched what was promised.
For many the reasons behind the apparent failure were a mystery. Many of the organisations providing IT services to the government weren’t fly-by-night operations; indeed many of them were large multi-national companies with proven track records, yet they just didn’t achieve the same outcomes when it came to government contracts. After nearly a decade of attempting to make outsourcing work many departments began insourcing their IT again, relying on a large contractor workforce to bring in the skills required to keep their projects functioning. Of course costs were still above what many had expected them to be, resulting in the Gershon Report which recommended heavy cuts to that contractor workforce.
This all stems from one glaring failure that the government has yet to realise: it can’t negotiate contracts.
I used to work for a large outsourcer in the Canberra region, swept up fresh out of university into a job that paid a salary many take years to attain. The outsourcer had won the contract away from the incumbent to provide desktop and infrastructure services, whilst the numerous other outsourcers involved retained ownership of their respective systems. After I’d spent about 6 months as a system admin my boss approached me about moving into the project management space, something I had mentioned I was keen on pursuing. It was in this position that I found out just how horrible the Australian government was at contract negotiation and how the service providers were the only winners in these arrangements.
My section was dedicated to “new business”: essentially work that we’d be responsible for implementing that wasn’t in scope under the broader outsourcing contract. Typically these were small engagements, most not requiring tender-level documentation, and in all honesty most would have been considered by any reasonable individual to fall under the original contract. Many of the users I came back to with a bill detailing how much their work would cost responded with surprise, and often they would simply drop the request rather than try to seek approval for the cost.
The issue still exists today primarily because many of the positions that handle contract negotiations don’t require specific skills or training. This means that whilst the regulations in place stop most government agencies from entering into catastrophically bad arrangements, the more subtle ones often slip through the cracks, and it’s only after everything is said and done that the oversights are found. All of the large outsourcers in Canberra know this, and it’s why there’s been no force working to correct the problem for the better part of two decades. It’s also why Canberra exists as a strange microcosm of IT expertise, with salaries that you won’t see anywhere else in Australia.
The solution is to simply start hiring contract negotiators away from the private sector and get them working for the Australian government. Get contract law experts to review large IT outsourcing arrangements and start putting the screws to those outsourcers to deliver more for the same amount of money. It’s not an easy road to tread and it won’t likely win the government any friends but unless they start doing something outsourcing is always going to be seen as a boondoggle, only for those with too much cash and not enough sense.
It’s every system administrator’s dream to only be working on the latest hardware running the most recent software available. This is partially due to our desire to be on the cutting edge of all things, where new features abound and functionality is at its peak. However the reality is always far from that nirvana, with the majority of our work being on systems that are years old running software that hasn’t seen a meaningful update in just as long. That’s why few tears have been shed by administrators worldwide about XP’s impending demise, as it signals the end of the need to support something that’s now over a decade old. Of course this is much to the chagrin of end users and big enterprises who have yet to make the transition.
Indeed big enterprises are rarely on the cutting edge and thus rely on extended support programs in order to keep their fleets maintained. This is partially due to the amount of inertia big corporations have, as making a change to potentially thousands of endpoints takes careful planning and execution. Additionally the impact on the core business cannot be overstated and must be taken into careful consideration before the move to a new platform is made. With this in mind it’s really no surprise that corporations often buy support contracts that run for 3 or 5 years on the underlying hardware, as that ensures they won’t have to make disruptive changes during that time frame.
So when HP announced recently that it would be requiring customers to have a valid warranty or support agreement with them in order to get updates I found myself in two minds about it. For most enterprises this will be a non-issue as running hardware that’s out of warranty is begging for trouble and not many have the appetite for that kind of risk. Indeed I actually thought this would be a good thing for enterprise level IT as it would mean that I wouldn’t be cornered into supporting out of warranty hardware, something which has caused me numerous headaches in the past. On the flip side though this change does affect something that is near and dear to my heart: my little HP MicroServer.
This new decision means that this little server only gets updates for a year after purchase, after which you’re up for at least $100 for an HP Care Pack which extends the warranty out to 5 years and provides access to all the updates. Whilst I missed the boat on the install issues that plagued its initial release (I got mine after the update came out) I can see the same thing happening again with similar hardware models. Indeed the people hit hardest by this change are likely the ones least able to afford a support plan of this nature (i.e. smaller businesses), who are the typical candidates for running hardware that’s outside a support arrangement. I can empathise with their situation, but should I find myself needing an update for them and unable to get it due to their lack of a support arrangement I’d be the first one to tell them so.
Indeed the practice isn’t too uncommon, with the majority of other large vendors requiring something on the order of a subscription in order to get product updates; the only notable exception is Dell (full disclosure: I work for them). I’ll agree that it appears to be a bit of a cash grab as HP’s server business hasn’t been doing too well in recent quarters (although no one has done particularly well, to be honest), though I doubt the change will recoup much of that downturn. This might also spur some customers on to purchase newer hardware whilst freeing up resources within HP that no longer need to support previous generations of hardware.
So I guess what I’m getting at is that whilst I can empathise with the people who will be hard done by with this change I, as someone who has to deal with warranty/support calls, don’t feel too aggrieved. Indeed any admin worth their salt could likely get their hands on the updates without having to resort to the official source anyway. If the upkeep on said server is too much for you to afford then it’s likely time to rethink your IT strategy, potentially looking at cloud based solutions that have a very low entry cost when compared to upgrading a server.
The story of the majority of IT workers is eerily similar. Most get their beginnings in a call centre, slaving away behind a headset troubleshooting various issues either for their own end users or as part of a bigger help desk that services dozens of clients. Some are a little luckier, landing a job as the sole IT guy at a small company, which grants them all the creative freedom they could wish for but also shoulders them with the weight of being the be-all and end-all of their company’s IT infrastructure. No matter how we IT employees got our start, all of us eventually look towards getting certified in the technologies we deal with every day and, almost instantly after getting our first certification, become incredibly cynical about what they actually represent.
For many the first certification they will pursue will be something from Microsoft since it’s almost guaranteed that every IT job you’ll come across will utilize it in some fashion. Whilst the value of the online/eLearning packages is debatable there’s little question that you’ll likely learn something that you didn’t already know, even if it’s completely esoteric and has no application in the real world. For anyone who’s spent a moderate amount of time with the product in question these exams aren’t particularly challenging as most of them focus on regurgitating the Microsoft way of doing things. This, in turn, feeds into their greatest weakness as they favour rote memorization over higher order concepts and critical thinking (at least at the introductory/intermediate levels).
This has led to a grey market solely focused on getting people past these exams. Whilst there are some great resources which fall into this area (like CBT Nuggets) there are many, many more which skirt the boundaries of what’s appropriate. For anyone with a modicum of Google skills it’s not hard to track down copies of the exams themselves, many with the correct answers highlighted for your convenience. In the past this meant that you could go in knowing all the answers in advance, and whilst there’s been a lot of work done to combat this there are still many, many people carrying certifications thanks to these resources.
The industry term for such people is “paper certs”.
People with qualifications gained in this way are usually quite easy to spot as rote memorization of the answers does not readily translate into real world knowledge of the product. However for those looking to hire someone this often comes too late as interview questions can only go so far to root these kinds of people out. Ultimately this makes those entry level certifications relatively worthless as having one of them is no guarantee that you’ll be an effective employee. Strangely however employers still look to them as a positive sign and, stranger still, companies looking to hire on talent from outsourcers again look for these qualifications in the hopes that they will get someone with the skills they require.
I say this as someone who’s managed to skate through the majority of his career without the backing of certs. Initially I thought this was due to my degree, which whilst tangentially related to IT is strictly speaking an engineering one, but the surprise I’m met with when I mention that I’m an engineer by training has led me to believe that most of my former employers had no idea. Indeed what usually ended up sealing the position for me was my past experience, even in positions where certain certs were stated as a requirement. Asking my new employers about it afterwards had them telling me that those position descriptions are usually a wish list of things they’d like, but it’s rare that anyone will actually have them all.
So we have this really weird situation where the majority of certifications are worthless, which is known by all parties involved, yet they are still used as a barrier to entry for some positions/opportunities, a barrier that can be wholly overridden if you have enough experience in the area. If that makes the whole process sound, for want of a better word, worthless then you’d be of the same opinion as most of the IT workers that I know.
There are some exceptions to this rule, Cisco’s CCIE exams being chief among them, but the fact that the training and certification programs are run by the companies who develop the products is the main reason why the majority of them are like this. Whilst I’m not entirely sure that having an independent certification body would solve all the issues (indeed some of those non-vendor-specific certs are just as bad) it would at least remove the financial driver to churn as many people through the courses/exams as they currently do. Whilst I abhor artificial scarcity, one of the places it actually helps is in qualifications, but that would only be the first few tentative steps towards solving this issue.
If you’ve worked in the IT industry it’s safe to assume that you’re familiar with ITIL, or at least however it’s managed to manifest itself within your organisation. It’s probably one of the longest lasting ideals in IT today, having been around for a good 20+ years in its current form, surprising for an industry that considers anything over 3 years old archaic. Indeed anyone who’s been involved in implementing, maintaining or attempting to change an ITIL based process will likely call it archaic anyway, and whilst I’m inclined to agree with them I think the problems stem more from the attitudes around these processes than from the actual processes themselves.
Change management is by far the best example of this. The idea behind it is solid: any major change to a system has to go through a review process that determines what impacts the change has and demands that certain requirements be met before it can be done. In an ideal world these are the kinds of things you would do regardless of whether an external process required you to or not; however the nature of IT means many admins start off in areas where such processes aren’t required, and thus when they move on to bigger and better environments, processes like these are required to make sure they don’t unintentionally wreak havoc on larger systems. However change management is routinely seen as a barrier to getting actual work done, and in many cases it is.
This is where the attitude problems start to occur. ITIL based processes (no one should be using pure ITIL, that’s crazy talk) should not be a hindrance to getting work done, and the second they start becoming so is when they lose their value. Indeed the reason behind implementing an ITIL process like change management is to extract more value out of the work than is currently being derived, not to impede the work being done. Essentially it should only be an extension of work that would be undertaken in the first place, and if it isn’t then you need to look either at your implementation of the change process or at why your current IT practices aren’t working with it.
Predominantly I think this comes from being far too strict with these kinds of processes, the prevailing attitude in industry being that deviation from them will somehow lead to a downward spiral of catastrophes from which there is no escape. If ITIL processes are being routinely circumvented, or if the amount of work required to complete the process outweighs the actual work itself, then it’s not the people who are to blame, it is the process itself. Realistically, instead of trying to mould people to the process, as I’ve seen done countless times over, the process should be reworked to suit the people. Whilst this is far more difficult than simply sending people on ITIL courses, the benefits will far outweigh the costs and you’ll probably find that more people stick to the process rather than attempt to circumvent it. Indeed much of the process revolution that has happened in the past decade has been due to these people-focused rather than process-focused ideals.
Whilst ITIL might be getting a little long in the tooth many of the ideals it touches on are fundamental in nature and persist beyond changes in technology. Like many ideas, however, its application has been less than ideal, with the core idea of turning IT into a repeatable, dependable process usurped by laborious processes that add no value. I believe changing the current industry view from focusing on ITIL based processes to people focused ones that utilize ITIL fundamentals would trigger a major shift in the way corporate IT entities do business.
A shift that I believe would be all for the better.
I often find myself trusted with doing things I’ve never done before thanks to my history of delivering on them, but I always make people well aware of my inexperience in such areas before I pursue them. I do this because I know I’m not the greatest engineer/system administrator/coder around, but I do know that, given enough time, I can deliver something that’s exactly what they required. It’s actually an unfortunate manifestation of imposter syndrome, whereby I’m constantly self-assessing my own skills, wondering if anything I’ve done was really that good or simply the product of all the people I worked with. Of course I’ve also worked with people who know they are the best at what they do, even if the reality doesn’t quite match up to their own self-image.
Typically these kinds of people take one of two forms, the first of which I’ll call The Guns. Guns are awesome people: they know everything there is to know about their job and they’re incredibly helpful, a real treasure for the organisation. I’m happy to say that I’ve encountered more of these than the second type, and they’re in no small part responsible for a lot of the things that I know today. They are usually vastly under-appreciated for their talents, however, since they enjoy what they do to such a great extent that they don’t attempt to upset the status quo and instead toil away in relative obscurity. These are the kinds of people I have infinite amounts of time for and are usually the ones I look to when I’m looking for help.
Then there’s the flip side: the Alpha Nerds.
These guys are typically responsible for some part of a larger system and, to their credit, they know it inside and out. I’d say about half of them got to that level of knowledge simply by being there for an inordinate amount of time, and through that they end up being highly valued for their vast corporate knowledge. The problem with these guys, as opposed to The Guns, is that they know this and use it to their advantage at almost every opportunity they get. Simple change to their system? Be prepared to do a whole bunch of additional work for them before it’ll happen. A problem that you’re responsible for but that’s out of your control due to other arrangements? They’ll drill you on it in order to reinforce their status with everyone else. I can’t tell you how detrimental these people are to the organisation, even if their system knowledge and expertise appears invaluable.
Of course this delineation of Guns and Alpha Nerds isn’t a hard and fast line, there’s a wide spectrum between the two extremes, but there is an inflexion point where a Gun starts to turn Alpha and the benefits to the organisation start to tank. Indeed such a thing happened to me during my failed university project, where I didn’t notice that a Gun was turning Alpha on me, burning them out and leaving the project in a state where no one else could work on it even if they wanted to. Whilst the blame still rests solely on my shoulders for failing to recognise that, it highlights how detrimental such behaviour can be when technical expertise isn’t coupled with a little bit of humility.
Indeed if your business is built on the talents of such people then it’s usually to your benefit to remove Alpha Nerds from your team, even if they are among the most talented people in it. This is especially true if you’re trying to invest in developing people professionally, as typically Alphas will end up being the de-facto contacts for the biggest challenges, stifling the skill growth of other members of the team. Whilst they might be worth 2.5 times your average performer, you’re likely limiting the chances of the team being more productive than they currently are, quite possibly to the tune of much more than what the Alpha is capable of delivering.
Like I said before though, I’m glad these kinds of people tend to be less common than their Gun counterparts. I believe this is because during the nascent stages of your career you’re likely to run up against an Alpha and see the detrimental impact they have. Knowing that, you’re then much more likely to work against becoming like them, and should you become an expert in your chosen area you’ll make a point of being approachable. Some people fail to do that, however, and proceed to make our lives a lot more difficult than they should be, but I’m sure this isn’t unique to IT and is innate to organisations both big and small.
I’ve been working in public sector IT for the better part of 7 years now, starting off as a lowly help desk operator and working my way up through the ranks to the senior technical consultant position I find myself in today. I’m not telling you this to brag (indeed I don’t believe I’m completely unique in this regard) rather I want to impress upon you the level of familiarity I have when it comes to government IT systems. I’ve worked in departments ranging from mere hundreds of employees to the biggest public service organisation that exists within Australia. So when I say Tony Abbott’s office isn’t giving us the full story on this whole Peter Slipper incident and the subsequent time zone argument they used to defend their position you’ll know that I’m not just making stuff up.
For reference, his whole argument has been thoroughly debunked by Sortius in his brilliant 10 hours of bullshit, where he shows that the document has had its date modified to show a 10 hour discrepancy. Back when it was first published he was just going off public information, but recent updates to the post have seen him get his hands on the original press release with an unmodified date on it, showing that the press release was indeed drafted the night before. You’d think that’d be the last of it (and indeed if it was I would’ve simply tweeted it again), however the Department of Parliamentary Services (DPS) has gone on record saying that it has identified a problem with the time stamps on the files in question, backing up Abbott’s side of the story.
Reporters have since been granted access to the PC and shown similar files which seem to suffer the same Zulu time zone problem that apparently plagues the press release in question. What wasn’t investigated was whether files created in the way that Sortius has shown suffer from the same issue, i.e. is there an on-going technical issue with that particular computer, or are those files the result of the same kind of tampering that the press release appears to have undergone? That would go some way to explaining what’s going on here, but it doesn’t explain why the time stamp shows a Zulu time zone, which Microsoft Word isn’t capable of producing.
Indeed a little research of my own shows that PDFs created from Microsoft Word’s PDF creator plugin will always show created/modified dates that are more or less identical and reflect the time the PDF was created (not the time the original Word document was created). If we’re to believe that there was some problem with the PC that caused the Z to appear, it follows that it should have appeared in both the created date and the modified date. The fact that there’s a discrepancy gives credence to the idea that the PDF was first created using the Word PDF exporter and then modified afterwards using another program. The original document, the one shown in the final update from Sortius, shows some differences in created/modified times, however it appears that one was created using the PDFMaker plugin for Word and then later modified in Adobe Distiller (not the same way as the metadata in the modified press release indicates).
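You can check this kind of thing yourself without any forensic tooling. A minimal sketch, assuming only that the file follows the standard PDF date-string convention (`D:YYYYMMDDHHmmSS` followed by either a local offset like `+10'00'` or a `Z` for UTC): pull both date fields out of the raw bytes and compare them. The sample bytes below are hypothetical values for illustration, not the actual press release metadata.

```python
import re

# PDF date strings look like D:20120509211947+10'00' (local offset) or
# D:20120509114710Z (UTC "Zulu"). A created/modified mismatch, or a Z
# suffix where an offset is expected, is the kind of discrepancy at issue.
DATE_RE = re.compile(
    rb"/(CreationDate|ModDate)\s*\(D:(\d{14})(Z|[+-]\d{2}'\d{2}')?\)"
)

def pdf_dates(raw: bytes) -> dict:
    """Return {'CreationDate': (stamp, tz), 'ModDate': (stamp, tz)} from raw PDF bytes."""
    out = {}
    for key, stamp, tz in DATE_RE.findall(raw):
        out[key.decode()] = (stamp.decode(), (tz or b"").decode())
    return out

# Synthetic metadata snippet (hypothetical values, for illustration only).
sample = (b"<< /Producer (Example) "
          b"/CreationDate (D:20120509211947+10'00') "
          b"/ModDate (D:20120509114710Z) >>")

dates = pdf_dates(sample)
if dates["CreationDate"] != dates["ModDate"]:
    print("created/modified differ:", dates["CreationDate"], dates["ModDate"])
```

In practice you’d pass the real file’s bytes (`open("press_release.pdf", "rb").read()`) instead of the synthetic snippet; a Word-exported PDF should show near-identical stamps with an offset, not a Z.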
Now this doesn’t necessarily mean that Abbott was aware of this information, but it does imply that someone working for him was. In attempting to track down just who it was who created the PDF I came across two probable people (one who I think works at DPS and a Brisbane based ghost writer) but I wasn’t able to verify it was actually one or the other. Whoever did write it would be able to provide some insight into this whole thing, but it’s unlikely that they’ll ever come forward, especially considering they would’ve been working for Abbott at the time (and may still be).
All of this points in the direction that something is going on over there and that further investigation is definitely warranted. I know there are several other things I could do to either verify or debunk this theory completely should I have more open access to said system, but I doubt we’ll get anything more than the guided tour that was given to the ABC journalists already. If I still had people I knew working at DPS you can be assured that I’d get the full story from them but alas, I came up dry on this one. Sortius is still on the case though, and I’m very interested to see what DPS has to say about the current discrepancies and will keep you posted on the progress.
Canberra is a weird little microcosm: it exists purely because the two largest cities in Australia couldn’t agree on which should be the capital of the country, so they instead decided to meet, almost literally, in the middle. Much like Washington DC this means that all of the national level government agencies are concentrated in one area, and the vast majority of the 360,000 or so population work either directly or indirectly for the government. This concentration of services in a small area has distorted many of the markets that exist in a typical city, probably most notably the jobs market.
To put it in perspective, there are a few figures that will help me illustrate my point more clearly. For starters the average salary of a Canberran worker is much higher than the Australian average, even beating out the commodity-rich states which are still reaping the benefits of the mining boom. Additionally Canberra’s unemployment is among the lowest in Australia, hovering around a staggeringly low 3.7%. This means that the labour market here is somewhat distorted, and that’s especially true for the IT industry. However, like the manufacturing industry in the USA, there are still many who will bellyache endlessly about the lack of qualified people available to fill the needs of even this small city.
The problem is, as it always has been, simple economics.
I spent a good chunk of my career working directly for the public service, jumping straight out of university into a decent paying job that I figured I’d be in for quite a while. However it didn’t take long for me to realise that there was another market out there for people with my exact skills, one that was offering substantially more to do the same work. Like any rational person I jumped at this opportunity and have been continuing to do so for the past 6 years. However I still see positions similar to mine advertised with salaries attached that are, frankly, embarrassing for anyone with those kinds of skills to take when they can get so much more for doing the same amount of work. This has led to a certain amount of tension between Canberra’s IT workers and the government that wishes to employ them, with many agencies referring to this as a skills shortage.
The schism is partly due to the two-sided nature of the Canberran IT market. On the one hand the government will pay you a certain amount if you’re permanently employed with them, and another if you’re hired as an outside contractor. These positions are, for the most part, identical except that one pays an extraordinary amount more at the cost of some of the benefits (flex time, sick/annual leave, etc.). It follows that many IT workers are savvy enough to take advantage of this and plan their lives around the lack of those benefits, and thus will never even consider the lower paid option because it just doesn’t make sense for them.
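The arithmetic behind that choice is simple enough to sketch. All the figures below (hours per day, leave days, the $85k salary and $70/hr rate) are illustrative assumptions, not actual APS or market numbers; the point is just how quickly an hourly rate outruns a salary even after you self-fund the lost leave.

```python
# Back-of-the-envelope comparison of a permanent salary (paid leave included)
# against an hourly contract rate (no paid leave). Hypothetical numbers only.
HOURS_PER_DAY = 7.5
WORK_DAYS_PER_YEAR = 260          # 52 weeks x 5 days
LEAVE_DAYS = 20 + 10              # annual + sick leave a permanent employee keeps

def contractor_annual(hourly_rate: float, days_off: int = LEAVE_DAYS) -> float:
    """Gross annual income for a contractor who takes days_off unpaid."""
    billable_days = WORK_DAYS_PER_YEAR - days_off
    return hourly_rate * HOURS_PER_DAY * billable_days

def breakeven_rate(perm_salary: float, days_off: int = LEAVE_DAYS) -> float:
    """Hourly rate at which contracting merely matches the permanent salary."""
    billable_days = WORK_DAYS_PER_YEAR - days_off
    return perm_salary / (HOURS_PER_DAY * billable_days)

# A hypothetical $85k permanent role breaks even at roughly $49/hr, so a
# $70/hr contract for the same work grosses far more, unpaid leave and all.
print(round(breakeven_rate(85_000), 2))
print(round(contractor_annual(70.0)))
```

A fuller comparison would also price in superannuation, payroll arrangements and bench time between contracts, but even this crude version shows why the lower paid option rarely makes sense to anyone who has done the sums.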
This hasn’t stopped the government from trying, however. The Gershon report had been the main driver behind this, although its effects have been waning for the past 2 years, but now it’s the much more general cost reductions coming in as part of the overall budget goal of delivering a surplus. The problem here, as I mentioned in the post I just linked, is that once you’re above a certain pay grade in the public service you’re expected to perform some kind of management function, which doesn’t really align with the requirements of IT specialists. Considering that even outside of Canberra’s arguably inflated jobs market such specialists are able to make far more than the highest non-managerial role in the government pays, it comes as no surprise that the contractor market flourished the way it did, and why the implementation of the Gershon report did nothing but decimate the government’s IT capability.
Simply put, the skills/labour shortage that’s been experienced in many places, not just Canberra, is primarily due to a disconnect between the skills required and the amount organisations are willing to pay for those skills. The motivation behind lower wage costs is obvious, but the outcome should not be unexpected when you try to drive the price down while the supply remains the same. Indeed many of the complaints about a labour shortage are quickly followed by calls for incentives and education in the areas where there’s a skills shortage, rather than considering the possibility that people are simply becoming more market savvy and are not willing to put up with lower wages when they know they can do better elsewhere.
I had personally only believed that this applied to the Canberra IT industry but in doing the research for this post it seems like it applies far more broadly than I had first anticipated. In all honesty this does nothing but hurt the industry as it only helps to increase tensions between employers and employees when there’s a known disconnect between the employee’s market value and their compensation. I’d put the challenge to most employers to see how many good, skilled applicants they get if they start paying better rates as I’d hazard a guess their hit rate would vastly improve.
IT is one of the few services that all companies require to compete in today’s markets. IT support then is one of those rare industries where jobs are always around to be had, even for those working in entry level positions. Of course this assumes that you put in the required effort to stay current as letting your skills lapse for 2 or more years will likely leave you a generation of technology behind, making employment difficult. This is of course due to the IT industry constantly evolving and changing itself and much like other industries certain jobs can be made completely redundant by technological advancements.
For the past couple of decades though the types of jobs you expect to see in IT support have remained roughly the same, save for the specializations brought on by new technology. As more and more enterprises came online and technology developed, a multitude of specializations became available, enabling then-generic “IT guys” to become highly skilled workers in their targeted niche. I should know: just on a decade ago I was one of those generic IT support guys, and today I’m considered a specialist when it comes to hardware and virtualization. Back when I started my career the latter of those two skills wasn’t even in the vernacular of the IT community, let alone a viable career path.
Like any skilled position though specialists aren’t exactly cheap, especially for small to medium enterprises (SMEs). This has led to an entire secondary industry of work-for-hire specialists (usually operating under the term “consultants”) catering to companies looking to take the pain out of utilizing the technology without having to bring the expertise in house. This isn’t really a surprise (any skilled industry will develop these secondary markets) but with IT there’s far more opportunity to automate and leverage economies of scale than in any other industry.
This is where Cloud Computing comes in.
The central idea behind cloud computing is that an application can be developed to run on a platform which dynamically delivers resources to it as required. The idea is quite simple but the execution is extraordinarily complicated, requiring vast levels of automation and streamlining of processes. It’s just an engineering problem however, one that’s been surmounted by several providers and used to great effect by many companies who have little wish to maintain their own infrastructure. In essence this is just outsourcing taken to the next level, but following this trend to its logical conclusion leads to some interesting (and, if you’re an IT support worker, troubling) predictions.
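That resource-delivery idea can be sketched as a toy control loop. This is purely illustrative: the metric function and thresholds below are hypothetical stand-ins, not any real cloud provider’s API.

```python
import random


def cpu_utilisation():
    # Hypothetical stand-in for a real metrics query; returns the
    # fleet's average CPU load as a fraction between 0.0 and 1.0.
    return random.uniform(0.0, 1.0)


def scale(instances, load, low=0.3, high=0.7):
    """One step of a simple threshold-based scaling policy: add
    capacity when load is high, reclaim it when load is low,
    but never drop below a single instance."""
    if load > high:
        return instances + 1
    if load < low and instances > 1:
        return instances - 1
    return instances


# Run the policy over a stream of (simulated) load samples.
instances = 2
for _ in range(10):
    instances = scale(instances, cpu_utilisation())
```

Real platforms run this same grow/shrink decision continuously across fleets of thousands of machines, which is where the economies of scale come from.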
For SMEs the cost of running their own local infrastructure, as well as the support staff that goes along with it, can be one of their largest cost centres. Cloud computing and SaaS offer SMEs the opportunity to eliminate much of that cost whilst keeping the same level of functionality, giving them more capital to either reinvest in the business or bolster their profit margins. You would think then that this would just be a relocation of jobs from one place to another, but cloud services employ far fewer staff thanks to the economies of scale they exploit, leaving fewer jobs available for those with skills in those areas.
In essence cloud computing eliminates the need for the bulk of skilled jobs in the IT industry. There will still be a need for most of the entry level jobs that cater to regular desktop users, but the back end infrastructure could easily be handled by another company. There’s nothing fundamentally wrong with this (pushing back against such innovation never succeeds) but it does call into question the future of the jobs these IT admins currently hold.
Outside of high tech and recently established businesses the adoption rate of cloud services hasn’t been that high. Whilst many of the fundamentals of the cloud paradigm (virtualization, on-demand resourcing, infrastructure agnostic frameworks) have found their way into the datacenter, the next logical step, migrating those same services into the cloud, hasn’t occurred. Primarily I believe this is due to a lack of trust in (and control over) the services, as well as companies not wanting to write off the large investments they have in existing infrastructure. This will change over time of course, especially as that infrastructure begins to age.
For what it’s worth I still believe that the ultimate end goal will be some kind of hybrid solution, especially for governments and the like. Cloud providers, whilst being very good at what they do, simply can’t satisfy the needs of all customers. It is then highly likely that many companies will outsource routine things to the cloud (such as email, word processing, etc.) but still rely on in-house expertise for the custom applications that aren’t, and probably never will be, available in the cloud. Cloud computing then will probably see a shift in some areas of specialization, but for the most part I believe us IT support guys won’t have any trouble finding work.
We’re still in the very early days of cloud computing and its effects on the industry are still hard to judge. There’s no doubt that cloud computing has the potential to fundamentally change the way the world does IT services and whatever happens those of us in IT support will have to change to accommodate it. Whether that comes in the form of reskilling, training or looking for a job in a different industry is yet to be determined but suffice to say that the next decade will see some radical changes in the way businesses approach their IT infrastructure.
I’m a big fan of technology that makes users happy. As an administrator anything that keeps users satisfied and working productively means more time for me to make the environment even better for them. It’s a great positive feedback loop that builds on itself continually, leading to an environment that’s stable, cutting edge and just plain fun to use and administer. Of course the picture I’ve just painted is something of an IT administrator nirvana, a great dream that is rarely achieved even by those who have unlimited freedom with the budgets to match. That doesn’t mean we shouldn’t try to achieve it however, and I’ll be damned if I haven’t tried at every place I’ve ever worked.
The one thing that always comes up is “Why don’t we use Macs in the office? They’re so easy to use!”. Indeed my two-month-long foray into the world of OSX and all things Mac showed that it was indeed an easy operating system to pick up, and I could easily see why so many people use it as their home operating system. Hell, at my current workplace I can count several long-time IT geeks who’ve switched their entire household over to solely Apple gear because it just works and, as anyone who works in IT will tell you, the last thing you want to be doing at home is fixing up PCs.
You’d then think that Macs would be quite prevalent in the modern workspace, what with their ease of use and popularity amongst the unwashed masses of users. Whilst their usage in the enterprise is growing considerably they’re still hovering just under 3% market share, or about the same amount of market share that Windows Phone 7 has in the smart phone space. That seems pretty low but it’s in line with worldwide PC figures, with Apple sitting somewhere in the realm of 5% or so. Still there’s a discrepancy there, so the question remains as to why Macs aren’t seen more often in the workplace.
The answer is simple: Apple just doesn’t care about the enterprise space.
I had my first experience with Apple’s enterprise offerings very early on in my career, way back when I used to work for the National Archives of Australia. As part of the Digital Preservation Project we had a small data centre that housed 2 functionally similar yet completely different systems. They were designed in such a way that, should a catastrophic virus wipe out the entire data store on one, the replica on the other would be unaffected since it was built from completely different software and hardware. One of these systems utilized a few shelves of Apple’s Xserve RAID storage arrays. In essence they were just a big lump of direct attached storage and for that purpose they worked quite well. That was until we tried to do anything with them.
Initially I just wanted to provision some of the storage that wasn’t being used. Whilst I was able to do some of the required actions through the web UI the unfortunate problem was that the advanced features required installing the Xserve tools on a Mac computer. Said computer also had to have a fibre channel card installed, something of a rarity to find in a desktop PC. It didn’t stop there either, we also tried to get Xsan installed (so it would be, you know, an actual SAN) only to find out that we’d need to buy yet more Apple hardware in order to be able to use it. I left long before I got too far down that rabbit hole and haven’t really touched Apple enterprise gear since.
You could write that off as a bad experience, but Apple has continued to show that the enterprise market is simply not their concern. Just 2 years after I last touched an Xserve RAID array, Apple up and cancelled production of them, instead offering a rebadged solution from Promise. 2 years after that Apple discontinued production of its Xserve servers and lined up the Mac Pro as a replacement. As any administrator will tell you the replacements are anything but, and since most of their enterprise software hasn’t received a proper update in years (Xsan’s last major release was over 3 years ago) no one can say that Apple has the enterprise in mind.
It’s not just their enterprise-level gear that’s failing in corporate environments. Whilst OSX is easy to use it’s an absolute nightmare to administer on anything larger than a dozen or so PCs, as most of the available management tools simply don’t support it. Whilst Macs do integrate with Active Directory there are a couple of limitations that don’t exist for Windows PCs on the same infrastructure. There’s also the fact that OSX can’t be virtualized unless it runs on Apple hardware, which kills it off as a virtualization candidate. You might think that’s a small nuisance but it means you can’t do a virtual desktop solution using OSX (since you can’t buy the hardware at scale to make it worthwhile) and you can’t utilize any of your current investment in virtual infrastructure to run additional OSX servers.
If you still have any doubts that Apple is primarily a hardware company then I’m not sure what planet you’re on.
For what it’s worth Apple hasn’t been harmed by ignoring the enterprise, as its consumer electronics business has more than made up for the losses they’ve incurred. Still I often find users complaining about how their work computers can’t be more like their Macs at home, ignorant of the fact that Apple in the enterprise would be an absolutely atrocious experience. Indeed it’s looking to get worse as Apple moves towards iPhoneizing their entire product range including, unfortunately, OSX. I doubt Apple will ever change direction on this, which is a real shame as OSX is the only serious competitor to Microsoft’s Windows.
It’s no secret that I owe a large part of my IT career to virtualization. It was a combination of luck, timing and a willingness to jump into the unknown that led me down the VMware path: my first workplace used VMware’s products, which set the stage for every job thereafter, each seeing my experience and latching onto it with a crack-junkie-like desire. Over the years I’ve become intimately familiar with many virtualization solutions, but inevitably I find myself coming back to VMware because, simply put, they’re the market leaders and pretty much everyone who can afford to use them does so. You can imagine then that I was somewhat excited when I saw the release of vSphere 5, and I’ve been putting it through its paces over the past couple of weeks.
On the surface ESXi 5 and vSphere 5 look almost identical to their predecessors. ESXi 5 is really only distinguishable from 4 thanks to the slightly different layout and changed font, whilst vSphere 5 is exactly the same save for some new icons and additional links to new features. I guess with any new product version I’ve come to expect a UI revamp even if it adds nothing to the end product, so the fact that VMware decided to stick with their current UI came as somewhat of a surprise, but I can’t really fault them for doing so. The real meat of vSphere 5 is under the hood, and there have been some major improvements from my initial testing.
vSphere 5 brings with it Virtual Machine Version 8 which, amongst the usual more-CPUs/more-memory upgrades, brings support for 3D accelerated graphics, UEFI firmware in place of the BIOS (which technically means it can boot OSX Lion, although that will never happen¹) and USB 3.0. There are also a few new options available when creating a virtual machine, like the ability to configure virtual sockets (not just virtual cores) and the choice between eager and lazy zeroed disks.
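For those unfamiliar with the eager/lazy distinction, the tradeoff can be modelled with a toy sketch. To be clear, this is a conceptual illustration only, not VMware’s actual implementation: eager zeroing pays the full wipe cost up front at creation, while lazy zeroing defers it to each block’s first write.

```python
class VirtualDisk:
    """Toy model of a thick-provisioned virtual disk, tracking the
    relative cost (in block operations) of eager vs lazy zeroing."""

    def __init__(self, blocks, eager_zeroed=False):
        # Eager-zeroed disks wipe every block at creation time, so
        # creating them is slower but their blocks start out clean.
        self.zeroed = [eager_zeroed] * blocks
        self.creation_cost = blocks if eager_zeroed else 0

    def write(self, block):
        # Lazy-zeroed disks pay a first-write penalty: the block
        # must be zeroed before the guest's data can land on it.
        cost = 1
        if not self.zeroed[block]:
            cost += 1
            self.zeroed[block] = True
        return cost


eager = VirtualDisk(4, eager_zeroed=True)
lazy = VirtualDisk(4)
```

The practical upshot is the usual one: eager zeroing trades a slow provisioning step for consistent write latency afterwards, while lazy zeroing provisions quickly but makes the first touch of each block more expensive.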
The one overall impression that vSphere 5 has left on me though is that it’s fast, like really fast. The UI is much more responsive, operations that used to take minutes are now done in seconds, and in the few performance tests we’ve done ESXi 5 seems to be consistently faster than its 4.1 Update 1 counterpart. According to my sources close to the matter this is because ESXi 5 is all new code from the ground up, enabling significant performance enhancements. From my first impressions I’d say that they’ve succeeded, and I’m looking forward to seeing how it handles real production loads in the very near future.
What really amazed me was that a lot of the code I had developed for vSphere 4 was 100% compatible with vSphere 5. I had been dreading having to rewrite the nearly 2,000 lines of code I had developed for the build system in order to get ESXi 5 into our environment, but every command worked without a hitch, showing that VMware’s dedication to backwards compatibility is extremely good, approaching that of the king of compatibility, Microsoft. Indeed those looking to migrate to vSphere 5 don’t have much to worry about, as pretty much every feature of the previous version is supported and migrating to the newer platform is quite painless.
I’ve yet to have a chance to fiddle with some of the new features (like the storage appliance, which looks incredibly cool) but overall my first impressions of vSphere 5 are quite good, along the lines of what I’ve come to expect from VMware. I haven’t run into any major gotchas yet, but I’ve only had a couple of VMs running in an isolated vSphere instance so my sample size is rather limited. I’m sure once I start throwing some real applications at it I’ll run into some more interesting problems, but suffice to say VMware has done well with this release and I can see vSphere 5 making its home in all IT departments where VMware is already deployed.
¹The stipulation for all Apple products is that they run on Apple hardware, including virtualized instances. Since the only things you can buy with OSX Server installed are Mac Mini Servers or Mac Pros, neither of which are on the Hardware Compatibility List, running your own virtualized copies of OSX Server (legitimately) simply can’t happen. Yet I still get looks of amazement when I tell people Apple is a hardware company. Figures.