Business

Please Valve, Do for Other Content What You Did for Games.

I’m a big lover of Steam. Whilst it had a rather rocky start, something exacerbated by the fact that I was still on dial-up, the platform has since managed to make me part with many of my dollars and I have done so gladly. Sure, part of this is due to me moving up in the world, no longer being a poor uni student whose only indulgence was his World of Warcraft subscription, however Steam providing titles at very reasonable prices has also led me to spend more than I would have otherwise. So when rumours start to spread that Steam might be bringing things like music, TV shows and movies to the platform you can imagine the excitement I have at that prospect.

Steam Music UI

There’s been talk of Steam expanding beyond its current games and software market for some time now, ever since Valve announced the Steam Music overlay at the beginning of this year. There are also already a few movies on the platform, like Free to Play and Indie Game: The Movie, and whilst they’re specifically about games it’s not much of a stretch to think that Valve would extend the platform further. The only precedent not yet set is for TV shows, however it’s easy to see the same system working for that kind of content. There are still a few questions to be answered about the service (When will it debut? How will its costs compare to other services?) however if Steam can do for movies, TV and music what it did for games, you can bet your bottom dollar that it will be an incredibly positive thing for consumers.

The reason, for me as an Australian at least, is that there’s really no alternative available to us. I was excited when Dendy Direct was announced, mostly because I’m a fan of their cinemas, however their pricing is nothing short of insane, with a single season of a show costing anywhere from $20 to $40. Other services available here are either similarly priced or simply don’t have the catalogue of shows that many of us want to watch. Even when the services available here do have the shows, they’re either significantly delayed or released in a way that’s incongruous with how they were released overseas, like Netflix original series being released weekly instead of all in one hit.

There are always geo-unblocking tools to get us Netflix, of course, but they’re really only a stopgap on the way to a better solution.

We’re getting closer to a proper solution though as there’s been at least one notable entrant into this field that’s not completely bullshit. AnimeLab, run by Madman (the Australian anime distributor), offers up complete anime series for any and all to watch for free, including ones that are only just being released in Japan. Whilst I’m sure the free ride won’t last forever it does show that there’s demand for such a service in Australia, even within the niche interest area that is anime. I’m hopeful that this will encourage other services to start considering branching out into Australia sooner rather than later as it honestly can’t come fast enough.

IBM Isn’t the Solution to Your Enterprise Woes, Apple.

There’s no question that Apple was the primary force behind the Bring Your Own Device (BYOD) movement. It didn’t take long for every executive to find themselves with an iPad in their hands, wondering why they had to use their god damn BlackBerry when the email experience on their new tablet was so much better. Unfortunately, as is the case with most Apple products, the enterprise integration was severely lacking and the experience suffered as a result. Today the experience is much better, although that’s mostly the result of third party vendors developing solutions, not so much Apple developing the capability themselves. It seems that after decades of neglecting the enterprise Apple is finally ready to make a proper attempt at it, although in the most ass backwards way possible.

Tim Cook and Virginia Rometty

Today Apple announced that it would be partnering with IBM in order to grow their mobility offerings, starting with a focus on applications, cloud services and device supply and support. IBM is going to start off by developing 100 “industry specific” enterprise solutions, essentially native applications for the iPhone and iPad that are tailored for specific business needs. They’ll also be growing their cloud offering with services that are optimized for iOS, with a focus on all the buzzwords that surround the BYOD movement (security, management, analytics and integration). You’ll also be able to source iOS devices from IBM with warranty backing from Cupertino, enabling IBM to be your one-stop shop for all things Apple related in the enterprise.

At a high level this sounds like an amazing thing for anyone who’s looking to integrate Apple products into their environment. You could engage IBM’s large professional services team to do much of the legwork for you, freeing you from worrying about the numerous issues that come with enabling a BYOD environment. The tailored applications would also seem to solve a big pain point, as the only option most enterprises have available to them today is to build their own, a significantly costly endeavour. Plus, if you’re already buying IBM equipment, their supply chain will already be well known to you and your financiers, lowering the barrier to entry significantly.

Really it does sound amazing, except for the fact that this partnership is about 5 years late.

Ever since everyone wanted their work email on an iPhone there have been vendors working on solutions to integrate non-standard hardware into the enterprise environment. The initial solutions were, frankly, more trouble than they were worth, but today there are a myriad of applications available for pretty much every use case you can think of. Indeed pretty much every single thing this partnership hopes to achieve is already possible today, not at some undetermined time in the future.

This is not to mention that IBM is also the last name you’d think of when it comes to cloud services, especially when you consider how much business they’ve lost of late. The acquisition of SoftLayer won’t help them much in this regard as they’re building up an entirely new capability from scratch which, by definition, means their offering will be behind everything else that’s currently available. They might have the supply chains and capital to ramp up to public cloud levels of scalability but they’re doing it several years after everyone else, in a problem space that is pretty much completely solved.

The only place I can see this partnership paying dividends is in organisations which have yet to adopt any kind of BYOD or mobility solution which, honestly, are few and far between these days. This isn’t an emerging market that IBM is getting in on the ground floor of; it’s a half decade old issue that’s had solutions from numerous vendors for some time now. Any large organisation, which has been IBM’s bread and butter since time immemorial, will already have solutions in place for this. Transitioning them away from those is going to be costly and I doubt IBM will be able to provide the requisite savings to make it attractive. Smaller organisations likely don’t need the level of management that IBM is looking to provide and probably don’t have a working relationship with Big Blue anyway.

Honestly I can’t see this working out at all for IBM and it does nothing to improve Apple’s presence in the enterprise space. The problem space is already well defined, with solid solutions available from multiple vendors, many of which already have numerous years of use in the field. The old adage that you never get fired for buying IBM has long been irrelevant and this latest foray into a field where their experience is questionable will do nothing to bring it back. If they do manage to make anything of this I will be really surprised, as entering a market this late in the piece rarely works out well, even if you have mountains of capital to throw at it.

The Idiocy of Idolizing Overtime.

My fellow IT workers will likely be familiar with the non-standard hours our work can require us to keep. Since we’re an essential service any interruption means that other people are unable to work, so we’re often left with no choice but to continue working long after everyone else has left. Thankfully I moved out of doing that routinely long ago, however I’ve still had my fair share of long weeks, weekend work and the occasional all-nighter in order to make sure a job was done properly. I’ll never work more hours simply for the sake of it though, as I know my productivity rapidly drops off after a certain point, meaning the extra hours aren’t particularly effective. Still there seems to be something of a worship culture around those who work long hours, even if the results of doing so are questionable.

Stressed woman in office

My stance has always been that everyone should be able to complete their work in the standard number of work week hours, and if goals aren’t being met it’s a fault of resourcing, not the amount of effort being put in. Too often though I’ve seen people take it upon themselves to make up for these shortcomings by working longer hours, which feeds into a terrible cycle from which most projects can’t recover. It often starts with individuals accommodating bursts of work, which falsely sets the expectation that such peaks can be routinely accommodated. Sure it’s only a couple of extra hours here or there, but when each member of a team of 20 does that you’re already a resource behind, and it doesn’t take much to escalate quickly from there.

The problem, I feel, stems from the assumption that hours worked equal contribution. This is simply not true, as many studies have shown that, even on routine tasks with readily quantifiable output, your efficiency degrades the longer you work. Indeed my highly unscientific observations, coupled with a little bit of online research, show that working past the 8 hour mark per day will likely lead to heavy declines in productivity over time. I’ve certainly noticed this among people I’ve worked alongside during 12+ hour days, as the pace of work rapidly declines and complex issues take far longer to solve than they would have at the beginning of the day.

Thus the solution is twofold: we need to stop idolizing people who put in “long hours” and be steadfast when it comes to taking on additional work. Stopping the idolization means that those who choose to work longer hours, for whatever reason, are no longer used as a standard by which everyone else is judged. It doesn’t do anyone any good to hold everyone to standards like that and will likely lead to high levels of burnout and turnover. Putting constraints around additional work means that no one should have to work more than they need to, and should highlight resourcing issues long before they become problems that can’t be handled.

I’m fortunate to work for a company that values results over time invested, and it shows in what our people have been able to deliver. As someone who’s worked in organisations where the culture valued hours and the appearance of being busy over everything else, it’s been extremely refreshing, validating my long held beliefs about work efficiency and productivity. Working alongside other agencies that don’t have this culture has provided a stark reminder of just how idiotic the idolization of overtime is, and why I’ll likely be sticking around this place for a while to come.

How Long Will It Take for Enterprise IT to Embrace Rapid Innovation?

The IT industry has always been one of rapid change and upheaval, with many technology companies only lasting as long as they could innovate. This is at odds with the way traditional businesses operate, preferring predictable cycles and seeking gains through incremental improvements in process, procedure and marketing. The result was the traditional 3~5 year cycle that many enterprises engaged in, upgrading to the latest available technology usually years after it had been released. However the pace of innovation has increased to the point where such a cycle can leave an organisation multiple generations behind, and it’s not showing any signs of slowing down soon.

Windows 8.2 Start Bar

I mentioned last year how Microsoft’s move from a 3 year development cycle to a yearly one was a good move, allowing them to respond to customer demands much more quickly than they were previously able to. However the issue I’ve come across is that whilst I, as a technologist, love hearing about the new technology, customer readiness for this kind of innovation simply isn’t there. The blame for this lays almost wholly at the feet of XP’s 12 year dominance of the desktop market, a hold on which even the threat of losing support did little to loosen. So whilst the majority may have made the transition now, they’re by no means ready for a technology upgrade cycle that happens on a yearly basis. There are several factors at play here (tools, processes and product knowledge being the key ones) but the main issue remains the same: there’s a major disjoint between Microsoft’s current release schedule and its adoption among its biggest customers.

Microsoft, to their credit, are doing their best to foster rapid adoption. Getting Windows 8.1 at home is as easy as downloading an app from the Windows Store and waiting for it to install, something you can easily do overnight if you can’t afford the downtime. Similarly the tools available for doing deployments on a large scale have improved immensely, something anyone who’s used System Center Configuration Manager 2012 (and its previous incarnations) will attest to. Still, even though the transition from Windows 7 to 8 or above is much lower risk than from XP to 7, most enterprises aren’t looking to make the move, and it’s not just because they don’t like Windows 8.

With Windows 8.2 slated for release sometime in August this year, Windows 8 will regain an almost identical look and feel to that of its predecessors, allowing users to bypass the Metro interface completely and giving them back the beloved start menu. With that in place there’s almost no reason for people not to adopt the latest Microsoft operating system, yet it’s unlikely to see a spike in adoption due to the inertia of large IT operations. Indeed even those that have managed to make the transition to Windows 8 probably won’t be able to make the move until 8.3 makes its debut, or possibly even Windows 9.

Once the Windows 8 family becomes the standard, however, I can see IT operations looking to move towards a more rapid pace of innovation. The changes between the yearly revisions are much less likely to break or change core functionality, eliminating much of the risk that came with adopting a new operating system (application remediation). Additionally, once IT sections have moved to better tooling, upgrading their desktops should also be a lot easier. I don’t think this will happen for another 3+ years however, as we’re still in the midst of an XP hangover, one that’s not likely to subside until its market share is in the single digits. Past that we administrators have the unenviable job of convincing our businesses that engaging in a faster product update cycle is good for them, even if the cost is low.

As someone who loves working with the latest and greatest from Microsoft it’s an irritating issue for me. I spend countless hours skilling myself up only to end up working on 5+ year old technology for the majority of my work. Sure it comes in handy eventually, but the return on investment feels extremely low. It’s my hope that the cloud movement, which has already driven a lot of businesses to look at more modern approaches to the way they do their IT, will be the catalyst by which enterprise IT begins to embrace a more rapid innovation cycle. Until then, however, I’ll just lament all the Windows Server 2012 R2 training I’m doing and wait until TechEd rolls around again to figure out what’s obsolete.

Are We Really Surprised by Australians Being Pirates?

Australia is an incredibly strong country economically, ranked 12th in the world by GDP. When you consider that our population is a fraction of that of many of the countries above us (Canada is the closest in size, sitting in 11th spot with a population about 50% bigger than ours) it means that, on average, Australians are wealthier than their global counterparts. This is somewhat reflected in the prices we pay for certain things, however it doesn’t take a lot of effort to show that we pay more than you’d expect for many goods and services. The most notable is media, as we lack any of the revolutionary services that drive prices down (Netflix, Hulu, etc.) or any viable alternatives. It gets even worse though, as it seems we also pay more just to go to the cinema.
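To put rough numbers behind that claim, here’s a quick back-of-the-envelope sketch; the GDP and population figures below are approximate 2013-era values, included purely for illustration:

```python
# Rough per-capita comparison behind the claim above; figures are
# approximate 2013-era nominal GDP and population, for illustration only.
countries = {
    # name: (GDP in USD trillions, population in millions)
    "Canada": (1.8, 35.0),
    "Australia": (1.5, 23.0),
}

for name, (gdp_tn, pop_m) in countries.items():
    per_capita = gdp_tn * 1e12 / (pop_m * 1e6)
    print(f"{name}: ~${per_capita:,.0f} GDP per head")

# Canada: ~$51,429; Australia: ~$65,217. A smaller population producing
# nearly as much output is what "wealthier on average" means here.
```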

Cost of a movie by country

The graphic above shows that Australia, along with a few other developed nations, pays an extraordinary amount more than others once the costs are normalized. The differences between the lowest and the highest aren’t exactly huge, you’re looking at a spread of about $15 from the cheapest to the most expensive, however this is yet another indication of just how much more Australia pays for its media than anyone else does. In essence we’re paying something on the order of 25%~50% more for the same product, yet the excuse the industry once relied on, that Australia is “really far away”, doesn’t really hold water anymore.
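As a sketch of what that premium looks like, the snippet below uses placeholder ticket prices chosen to be consistent with the spread and premium described above, not values read off the graphic:

```python
# Illustrative ticket prices in USD; placeholders only, not taken
# from the graphic above.
prices = {
    "cheapest market": 5.00,
    "USA": 13.00,
    "Australia": 18.00,
    "most expensive market": 20.00,
}

spread = max(prices.values()) - min(prices.values())
print(f"Spread from cheapest to most expensive: ${spread:.2f}")  # ~$15

# The premium Australians pay over a comparable market like the USA
premium = (prices["Australia"] - prices["USA"]) / prices["USA"] * 100
print(f"Australian premium over the USA: {premium:.0f}%")  # ~38%, inside the 25%~50% band
```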

It should come as little surprise then that Australians are far more likely to pirate than any other developed country, sometimes representing up to almost 20% of new release piracy. There have been some inroads made into reducing this number, with a few stations “fast-tracking” episodes (although they still usually carry a delay) or giving users access to an online option, however the former doesn’t solve the problem entirely and the latter was unfortunately withdrawn. The hunger for the media is there; it’s just that a reasonably priced option has failed to materialize for Australian users (and if you mention Quickflix I’ll gut you), which has led to these dramatic figures.

Now I’d be entirely happy doing the slightly dodgy and getting myself a Netflix or Hulu account via a VPN or geo-unblocking service, however my bandwidth isn’t up to the task of streaming media at 720p. Sure it could probably manage a lower resolution, but I didn’t invest as much as I did in my home theatre system to have it operate at a sub-par level. This issue was supposed to go away with the NBN being just around the corner, but I literally have no idea when that might be coming nor what incarnation of it I will end up getting. So it seems that, at least for now, I’m stuck in digital limbo where I either fall to piracy or get gouged repeatedly.
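For a sense of the numbers behind that complaint, here’s a minimal sketch of the maths; the bitrates are rough, commonly cited figures and actual requirements vary by service and codec:

```python
# Back-of-the-envelope check of whether a connection can sustain a
# given stream quality. Bitrates are rough, commonly cited figures.
STREAM_BITRATES_MBPS = {
    "480p": 1.5,
    "720p": 4.0,
    "1080p": 6.0,
}

def can_stream(connection_mbps: float, quality: str, headroom: float = 1.25) -> bool:
    """True if the connection covers the stream bitrate plus some headroom."""
    return connection_mbps >= STREAM_BITRATES_MBPS[quality] * headroom

# e.g. a congested ~3 Mbps ADSL line, common on Australian copper:
print(can_stream(3.0, "720p"))  # False: falls short of the ~5 Mbps needed
print(can_stream(3.0, "480p"))  # True, but sub-par on a big home theatre
```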

Neither of these issues is beyond fixing, and indeed it’s been shown that once a reasonably priced alternative becomes available people ditch piracy in a heartbeat. Heck, I know that for me once Steam became widely available my game spend increased dramatically, especially after I found sites like DLcompare. I can assure you the same will happen once a media based alternative comes to Australia, and I’m not the only one with the disposable income to support it.

What’s the Deal with Facebook Acquiring Oculus VR?

Companies buying other companies is usually nothing to get excited about. Typically it’s a big incumbent buying up a small company that’s managed to out-innovate them in a particular market segment, so instead of losing market share the incumbent chooses to acquire them. Other times it’s done in order to funnel the customer base onto the core product the incumbent is known for, much like Google did with many of its acquisitions, such as Android. Still, every so often a company will seemingly go out of its way to acquire another that honestly doesn’t seem to fit, and we’re all left wondering what the hell they’re thinking. Facebook did just that today, acquiring the virtual reality pioneer OculusVR.

OculusVR Headset Developer Preview 2

Facebook and OculusVR could not be more different, one being the biggest social network in the world with 1.23 billion monthly active users and the other being a small company of only 50 employees focused on developing virtual reality technology. Whilst the long winded PR speech from Zuckerberg seems to indicate that they’re somehow invested in making the Oculus Rift the new way of experiencing the world, it’s clear that Facebook is going to be running it as its own little company, much like Instagram and WhatsApp before it. With the recent rumours of Facebook looking to purchase drone manufacturer Titan Aerospace, another company that doesn’t seem like a good fit for the Facebook brand, it raises the question: what’s Facebook’s plan here?

Most of the previous high profile acquisitions aligned directly with Facebook’s weaknesses, namely how badly they were doing in the mobile space. Instagram fit the bill perfectly in this regard as they managed to grow a massive mobile-only social network that rivalled Facebook’s own mobile client for usage. Whilst many questioned whether paying $1 billion for a company that hadn’t generated a single dollar was worth it, it seems like Facebook got some value out of the deal as their mobile experience has improved drastically since then. WhatsApp seemed to be in a similar vein, although the high cost of acquisition (even though this one had some revenue to back it up) makes it much more questionable than the Instagram purchase. Still, both of them filled a gap that Facebook had; OculusVR doesn’t do that.

From my perspective it seems like Facebook is looking to diversify its portfolio, and the only reason I can think of to justify that is that their core business, the Facebook social network, is starting to suffer. I can’t really find any hard evidence for this but the business community does seem to feel that Facebook is starting to lose its younger audience (teens specifically) to messaging apps. Acquiring WhatsApp goes some way to alleviating this, but acquiring the most popular app every couple of years isn’t a sustainable business model. Instead it looks like they might be looking to recreate the early Google environment, one that spawned multiple other lines of business that weren’t directly related to the core product.

This was definitely a successful model for Google, however most of the products and acquisitions they made at a similar stage to Facebook were centred around directing people back to their core products (search and advertising). Most of the moonshot ideas, whilst showing great initial results, have yet to become actual lines of business, with the two most notable, Glass and the self-driving car, still in the developmental or early adopter phase. Facebook’s acquisition of OculusVR doesn’t really fit into this paradigm; however, with OculusVR likely to be first to market with a proper virtual reality headset, it might just be a large bet that this market segment will take off.

Honestly it’s hard to see what Facebook’s endgame is here, both for OculusVR and for themselves as a company. I think Facebook will stay true to their word about keeping OculusVR independent, but I have no clue how they’ll draw on the IP and talent there to better themselves. Suffice to say not everyone is of the same opinion and this is something that Facebook and OculusVR are going to have to manage carefully, lest the years of goodwill they’ve built up be dashed in a single moment. I won’t go as far as to say that I’m excited to see what these two will do together, but I’ll definitely be watching with keen interest.

What Kind of Microsoft Can We Expect From Satya Nadella?

In the time that Microsoft has been a company it has only known two Chief Executive Officers. The first was, of course, Bill Gates, the point man of the company from its founding days who saw it grow from a small software shop into the industry giant of the late 90s. Then, right at the beginning of the new millennium, Bill Gates stood down and passed the crown to long time business partner Steve Ballmer, who spent the next decade and a half attempting to transform Microsoft from a software company into a devices and services one. Rumours had been spreading for some time about who was slated to take over from Ballmer and, last week, after much searching, Microsoft veteran Satya Nadella took over as the third CEO of the venerable company. Now everyone is wondering where he will take it.

New Microsoft Chief Executive Officer, Satya Nadella

For those who don’t know him, Nadella’s heritage in Microsoft comes from the Server and Tools department, where he held several high ranking positions over a number of years. Most notably he’s been in charge of Microsoft’s cloud computing endeavours, including building out Azure, which hit $1 billion in sales last year, something I’m sure helped to seal the deal on his new position. Thus many would assume that Nadella’s vision for Microsoft would trend along these lines, something which runs a little contrary to the more consumer focused business that Ballmer sought to deliver, however his request to have Bill Gates step down as Chairman of the Board so that he could have Gates as an advisor in this space says otherwise.

As with any changing of the guard, many seek to impress upon the new bearer their wants for the future of the company. Nadella has already come under pressure to drop some of Microsoft’s less profitable endeavours, including things like Bing, Surface and even the Xbox division (despite it being quite a revenue maker, especially of late). Considering these products are the culmination of the efforts of the 2 previous CEOs, both of whom will still be involved in the company to some degree, taking an axe to them would be an extraordinarily hard thing to do. These are the products they’ve spent years and billions of dollars building, so dropping them seems like a short sighted endeavour, even if it would make the books look a little better.

Indeed many of the business units which certain parties would look to cut are the ones seeing good growth figures. The Surface has gone from a $900 million write-down disaster to losing a paltry $39 million over 2 quarters, an amazing recovery that signals profitability isn’t too far off. Similarly Bing, the search engine that we all love to hate on, saw a 34% increase in revenue in a single quarter. It’s not even worth mentioning the Xbox division as it’s been doing well for years now, and the release of the Xbox One with its solid initial sales ensures it remains one of Microsoft’s better performers.

The question then becomes whether Nadella, and the board he now serves, sees ongoing value in these projects. Much of the work Microsoft has done in the past decade has been focused on unifying the disparate parts of their ecosystem, heading towards that nirvana where everything works together seamlessly. Removing these products from the picture feels like Microsoft backing itself into a corner, one where it can be shoehorned into a narrative that sees it easily lose ground to the competitors it has been fighting for years. In all honesty I feel Microsoft is so dominant in those sectors already that there’s little to be gained from receding from perceived failures, and Nadella should take this chance to innovate on his predecessor’s ideas, not toss them out wholesale.

HP Charging For Updates? Not Sure if Don’t Want.

It’s every system administrator’s dream to only be working on the latest hardware running the most recent software available. This is partially due to our desire to be on the cutting edge of all things, where new features abound and functionality is at its peak. However the reality is always far from that nirvana, with the majority of our work being on systems that are years old, running software that hasn’t seen a meaningful update in ages. That’s why few tears have been shed by administrators worldwide over XP’s impending demise, as it signals the end of the need to support something that’s now over a decade old. Of course this is much to the chagrin of end users and big enterprises who have yet to make the transition.

Indeed big enterprises are rarely on the cutting edge and thus rely on extended support programs in order to keep their fleets maintained. This is partially due to the amount of inertia big corporations have, as making a change to potentially thousands of endpoints takes careful planning and execution. Additionally the impacts on the core business cannot be overstated and must be taken into careful consideration before the move to a new platform is made. With this in mind it’s really no surprise that corporations often buy support contracts that run for 3 or 5 years on the underlying hardware, as that ensures they won’t have to make disruptive changes during that time frame.

HP Care Packs

So when HP announced recently that it would require customers to have a valid warranty or support agreement in order to get updates, I found myself in two minds about it. For most enterprises this will be a non-issue, as running hardware that’s out of warranty is begging for trouble and not many have the appetite for that kind of risk. Indeed I actually thought this would be a good thing for enterprise level IT, as it would mean I wouldn’t be cornered into supporting out of warranty hardware, something which has caused me numerous headaches in the past. On the flip side though, this change does affect something that is near and dear to my heart: my little HP MicroServer.

This new decision means that this little server only gets updates for a year after purchase, after which you’re up for at least $100 for a HP Care Pack, which extends the warranty out to 5 years and provides access to all the updates. Whilst I missed the boat on the install issues that plagued its initial release (I got mine after the update came out) I can see the same thing happening again with similar hardware models. Indeed the people hit hardest by this change are likely the ones least able to afford a support plan of this nature (i.e. smaller businesses), who are the typical candidates for running hardware that’s outside a support arrangement. I can empathise with their situation, but should I find myself needing an update for them and unable to get it due to their lack of support arrangements, I’d be the first one to tell them so.

Indeed the practice isn’t too uncommon, with the majority of other large vendors requiring something on the order of a subscription in order to get product updates; the only notable exception is Dell (full disclosure: I work for them). I’ll agree that it appears to be a bit of a cash grab, as HP’s server business hasn’t been doing too well in recent quarters (although no one has done particularly well, to be honest), although I doubt the move will make up much of that decline. It might also spur some customers on to purchase newer hardware whilst freeing up resources within HP that no longer need to support previous generations of hardware.

So I guess what I’m getting at is that whilst I can empathise with the people who will be hard done by with this change, I, as someone who has to deal with warranty/support calls, don’t feel too aggrieved by it. Indeed any admin worth their salt could likely get their hands on the updates without having to resort to the official source anyway. If the upkeep on said server is too much for you to afford then it’s likely time to rethink your IT strategy, potentially looking at cloud based solutions that have a very low entry cost when compared to upgrading a server.

Amazon Prime Air: Part Marketing Campaign, Part Flight of Fancy, Part Hand Forcing.

If you haven’t been deliberately avoiding mainstream media for the past couple of days then chances are you’re already aware of Amazon’s latest announcement, Amazon Prime Air. It sounds like the stuff of science fiction: place an order for something and have it delivered by an automated drone right to your door in under 30 minutes. Indeed it pretty much is for the time being as, whilst there seems to be a large amount of foundational work done on it, the service is still many years away from seeing actual use. This has many asking the question: why would Amazon bother announcing something like this when it’s so far away from being a reality?

Amazon Prime Air Drone

As many have already rightly pointed out, the timing of the announcement seems to point towards it being an elaborate marketing campaign, with Amazon managing to snag a good 15 minutes of advertising, ostensibly for free, from the 60 Minutes program. This happening right before Cyber Monday is too much of a coincidence for it to be anything other than planned so, at least for the short term, Amazon Prime Air is a marketing tactic to get people to shop at Amazon. I’d hesitate to give credence to the theory that it was done to rehabilitate Jeff Bezos’ image though, as those rumours about his personality have been swirling for ages and I’d be astonished if he wasn’t aware of them already.

However the one thing this idea has going for it is that Bezos is behind it, and this isn’t the first wild gamble he’s undertaken. Back in 2000 he founded Blue Origin, a private space company focused on human spaceflight. Whilst it hasn’t made the headlines like its competition, it’s still managed a handful of test flights and has worked in conjunction with NASA on its Commercial Crew Development Program. Whilst this is a world away from flying drones to deliver products, it does show that Bezos is willing to commit funding over a long period of time to see something achieved. Though this still leaves the question of why he made the announcement so soon unanswered.

For what it’s worth I think the reasoning behind it is to get the public talking about it now so that there’ll be some momentum behind the idea when it comes time for Amazon to start talking with legislators about how this system is going to work. If the FAA’s track record is anything to go by, such a system wouldn’t see the light of day for another 13 years or so. Whilst it’s definitely not ready for prime time yet, due to the numerous technical challenges it has yet to overcome, it’s unlikely that those will take that long to solve. Putting the screws to politicians in this way means that Amazon doesn’t have to spend as much money on direct lobbying or convincing the public that it’s a good idea.

As for me personally, I think it’s a nifty idea, however its application is likely going to be horribly limited, especially in locations outside the USA. A quick glance over this map reveals just how few locations Amazon has in many countries (don’t be fooled by those 2 locations in Australia, they’re just corporate offices) and, since the drones need to launch from one of the fulfilment sites, you can see how small a range this kind of service will have. Of course they could always widen it by increasing the number of distribution centres, but that’s antithetical to their current way of doing business. It’s a challenge that can be overcome, to be sure, however I just don’t see it getting much air time (ha!) outside of major capital cities, especially in non-USA countries.

I’d love to be proven wrong on this however, as the lazy introvert inside me loves not having to do anything to get the stuff I want, and the instant gratification such a service would provide is just the icing on the cake. However it’s unlikely to see the light of day for several years, and will likely be the better part of a decade away from coming to Australia, so I’m not exactly hanging out for it. The idea has some merit though; whether that will be enough to carry it as a viable business process is something only time will reveal.

A New AGIMO Policy is Great, But…

Canberra is a strange little microcosm. If you live here chances are you either work directly for the government as a member of the public service or you’re part of an organisation that services said government. This is especially true in the field of IT, as anyone with a respectable amount of experience can make a very good living working for any of the large departments’ headquarters. I have made my IT career in this place and, in my time here, I’ve spent much of it lusting after the cutting edge of technology whilst dealing with the realities of what large government departments actually need to function. As long time readers will be aware I’ve been something of a cloud junkie for a while now, but not once have I been able to use it at my places of work, and there’s a good reason for that.

AGIMO ICT Strategy Summary

Not that you’d know that if you heard the latest bit of rhetoric from the current government, which has criticised the current AGIMO APS ICT Strategy for providing only “notional” guidelines for using cloud based services. Whilst I’ll agree that the financial implications are rather cumbersome (although this is true of any procurement activity within the government, as anyone who’s worked in one can tell you), what annoyed me was the idea that the security requirements were too onerous. The simple fact of the matter is that many government departments have regulatory and legal obligations not to use overseas cloud providers, due to legislation that restricts Australian government data from travelling outside our borders.

The technical term for this is data sovereignty, and the vast majority of large Australian government departments are legally bound to keep all their services, and the data they rely on, on Australian soil. The legislation is so strict in this regard that even data that’s not technically sensitive, like the specifications of machines or network topologies, in some cases can’t be given to external vendors and must instead be inspected on site. The idea that these departments could take advantage of cloud providers, most of which don’t have availability zones here in Australia, is completely ludicrous, and no amount of IT strategy policy can change that.

Of course cloud providers aren’t unaware of these issues, indeed I’ve met with several of the people behind some of the larger public clouds on this, and many of them are bringing availability zones to Australia. Indeed Amazon Web Services has already made itself available here and Microsoft’s Azure platform is expected to land on our shores sometime next year. The latter is probably the more important of the two as, if the next AGIMO policy turns out the way it’s intended, the Microsoft cloud will be the de facto solution for light user agencies thanks to the heavy use of Microsoft products at those places.
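To make the data sovereignty point concrete, here’s a minimal sketch of what keeping data on Australian soil looks like for an AWS customer pinning everything to the Sydney region; it assumes the boto3 library with credentials already configured, and the bucket name is purely hypothetical:

```python
# Minimal sketch: pinning AWS storage to the Sydney region so the data
# never leaves Australian soil. Assumes boto3 is installed and AWS
# credentials are configured; the bucket name is hypothetical.
import boto3

SYDNEY = "ap-southeast-2"  # AWS's Australian region

s3 = boto3.client("s3", region_name=SYDNEY)

# Buckets outside us-east-1 must state their region explicitly.
s3.create_bucket(
    Bucket="example-agency-data",  # hypothetical bucket name
    CreateBucketConfiguration={"LocationConstraint": SYDNEY},
)

# Verify where the bucket actually lives before putting any data in it.
location = s3.get_bucket_location(Bucket="example-agency-data")
assert location["LocationConstraint"] == SYDNEY
```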

Whilst I might be a little peeved at the rhetoric behind the review of the APS ICT Strategy, I do welcome it; even though it was only written a couple of years ago it’s still in need of an update, due to the heavy shift towards cloud services and user centric IT that we’ve seen recently. The advent of Australian availability zones will mean that the government agencies most able to take advantage of cloud services will finally be able to, especially with AGIMO policy behind them. Still, it will be up to the cloud providers to ensure their systems can meet the requirements of these agencies, and there’s every possibility that they will still not be enough for some departments.

We’ll have to see how that pans out, however.