Rewind back a couple of years and the idea of wearable computing was something reserved for the realms of the ultra-geek and science fiction. Primarily this was a function of how much computing power and battery capacity we could stuff into a gadget that anyone would be willing to wear; anything that could be deemed useful was far too bulky to be anything but a concept. Today the idea is far more mainstream, with devices like Google Glass and innumerable smart watches flooding the market, but that seems to be about as far as wearable technology goes right now. Should Intel have its way though this could be set for rapid change with the announcement of Intel Edison, an x86 processor that comes in a familiar (and very small) package.
It’s an x86 processor the size of an SD card and included in that package is a 400MHz processor (for the sake of argument I’m assuming it’s the same SoC that powers Intel’s Galileo platform, just a 22nm version), WiFi and low power Bluetooth. It can run a standard version of Linux and, weirdly enough, even has its own little app store. Should it retain its Galileo roots it will also be Arduino compatible whilst also gaining the capability to run the new Wolfram programming language. Needless to say it’s a pretty powerful little package and the standard form factor should make it easy to integrate into a lot of products.
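To give a sense of what a board like this could do out of the box, here’s a minimal sketch of the kind of thing I’d expect to see running on it. It assumes only that the Edison ships with a standard Linux userland and a Python interpreter; the temperature reading is faked and the collection endpoint is a hypothetical stand-in for whatever service a real wearable would report to.

```python
# Minimal sketch: periodically report a (faked) sensor reading over WiFi.
# Assumes only a standard Linux userland with Python 3; the endpoint below
# is a hypothetical placeholder, not a real service.
import json
import random
import time
import urllib.error
import urllib.request

ENDPOINT = "http://example.com/wearable/readings"  # hypothetical collector

def read_temperature():
    # Stand-in for a real sensor read (e.g. over GPIO/I2C on the device).
    return 36.5 + random.uniform(-0.5, 0.5)

while True:
    payload = json.dumps({"device": "edison-demo",
                          "temp_c": round(read_temperature(), 2)}).encode()
    req = urllib.request.Request(ENDPOINT, data=payload,
                                 headers={"Content-Type": "application/json"})
    try:
        urllib.request.urlopen(req, timeout=5)
    except urllib.error.URLError:
        pass  # drop the reading if the network is unavailable
    time.sleep(60)
```

Nothing fancy, but a board that can do this on its own, without a companion phone or a custom toolchain, is exactly what makes the form factor interesting.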
By itself the Edison doesn’t suddenly make all wearable computing ideas feasible, indeed the progress made in this sector in the last year is a testament to that. Instead it’s more of an evolutionary step, one that should help kick start the next generation of wearable devices. We’ve been able to go far with devices that have a tenth of the computing power of the Edison so it will be interesting to see what kinds of applications are made possible by the additional grunt it provides. Indeed Intel believes strongly in the idea that Edison will be the core of future wearable devices and has set up the Make It Wearable challenge, with over $1 million in prizes, in order to spur product designers on.
It will be interesting to see how the Edison stacks up against the current low power giant, ARM, which has a bevy of comparable devices already available. Indeed it seems that Edison is meant to be a shot across ARM’s bow as it’s one of the few devices that Intel will allow third parties to license, much in the same way ARM does today. There’s no question that Intel has been losing out hard in this space so marketing the Edison towards the wearable computing sector is likely a calculated play to carve out a good chunk of that market before ARM cements itself in it (like it did with smartphones).
One thing is for certain though: the amount of computing power available in such small packages is on the rise, enabling us to integrate technology into more and more places. These are the first tenuous steps towards creating an Internet of Things, where seamless and unbounded communication is possible between almost any device. The results of Intel’s Make It Wearable competition will be a good indication of where this market is heading and what we, the consumers, can expect to see in the coming years.
Back in July David Cameron announced that he’d be ensuring all ISPs within the United Kingdom implemented a mandatory filtering scheme. The initiative drew a lot of negative attention, including a post from yours truly, as the UK’s citizens were rightly outraged that the government felt the need to fiddle with their Internet connections. The parallels between Cameron’s policy and that of the Clean Feed here in Australia were shocking in their similarity and I, like many others, thought it would likely never see the light of day. Unfortunately it appears that not only has Cameron managed to get the big 4 Internet providers on board, he’s also managed to broaden the scope far beyond its original intentions, much to the chagrin of everyone.
The base principle behind this initiative appears to be the same as the Clean Feed: to protect children from the vast swaths of objectionable content that reside on the Internet. Probably the biggest difference between the two stems from the implementation: where the Clean Feed was going to be enforced through legislation (although that later changed when it couldn’t pass parliament), Cameron’s filter is instead a voluntary code of practice that ISPs can adhere to. If the same thing was introduced in Australia it’s likely that none would support it, however in the UK nearly all of the major suppliers have agreed to implement it. The problem with this informal system though is that the scope of what should and should not be blocked isn’t guarded by any kind of oversight and, predictably, the scope has started to creep far beyond its initial goals.
Among the vast list of things making their way onto the list of “objectionable” content are legitimate sites such as sex education resources and even the UK equivalents of sites like Kids Helpline. Back when Conroy first proposed the Clean Feed this kind of scope creep was one of the biggest issues many of us had with it, as the process by which the list was compiled was secretive and the list itself, even though it was eventually made public, was also meant to be kept from the general public. Cameron’s initiative does the same and, just as everyone worried, the list of objectionable content has grown far beyond what the general public was told it would. It’s happened so quickly that many have said (and rightly so) that it was Cameron’s plan all along.
If you ever had any doubts about just how bad the Clean Feed would have been in Australia then the UK’s initiative should serve as a good example of what we could have expected. The rapid expansion from a simple idea of protecting children from online pornography has morphed into a behemoth where all content is judged against someone’s idea of what’s proper and what’s not. It’s only a matter of time before some politically sensitive content makes it onto the objectionable list, turning the once innocent filter into a tool of Orwellian oppression. I’d love to be proved wrong on this but I can’t say I’m hopeful given that the slippery slope many of us predicted has come true.
Fight this, citizens of the UK.
Governments often avoid long term policy goals for fear of never seeing them completed. This unfortunately means that large infrastructure projects fall by the wayside as it’s unlikely they’ll be finished in a single term, leaving a potential political win on the table for an incoming government. The National Broadband Network then was something of an oddity; forced into being due to the lack of interest the private sector showed in building it (despite heavy government funding), it was one of the few examples of a multi-term policy that would have tangible benefits for all Australians. Like any big project it had its issues but I, and many others, still thought it was worth the investment.
If you were to believe the Liberals’ rhetoric of the past couple of years however you’d likely be thinking otherwise. Whilst the initial volleys launched at the NBN were mostly focused on the idea that it was an expensive ploy by Labor to buy votes, it soon metastasised into a fully fledged attack that had little rhyme or reason. Its ultimate form was the Liberals’ FTTN NBN, a policy which many saw as a half hearted attempt to placate Liberal voters who saw the NBN as an expensive Labor policy whilst trying to retain the tech vote which they had spent so many years losing. After they got into government however many of us, myself included, thought it was all a load of hot air and that they’d simply continue with the current NBN plan, possibly with someone else building it.
Oh how wrong we all were.
I mentioned last week that Turnbull needed to start listening to the evidence piling up in favour of the FTTP NBN, figuring that an unbiased strategic review would find in its favour given the large body of evidence saying so. However the report was anything but, claiming that the current NBN plan was woefully behind schedule and would likely end up costing almost 50% more than currently expected. The new NBNCo board then recommended a plan of action that looked frightfully similar to the Liberals’ FTTN NBN, even touting the same party lines of faster, cheaper and sooner. Needless to say I have some issues with it, not least of which is the fact that it seems to be wildly out of touch with reality.
For starters I find it extremely hard to believe that NBNCo, a highly transparent company whose financials have been available for scrutiny for years, would be unaware of a cost blowout exceeding some $28 billion. The assumption behind the blowout seems to stem from an ill-formed idea that the cost per premise will increase over time, something which is the exact opposite of reality. There also seems to be a major disconnect between the Liberals’ figures on take up rates and plan speeds which makes it appear as though there’s a huge hole in the revenue NBNCo would hope to generate. Indeed if we look at the 2013-2016 corporate plan the figures in there are drastically different to the ones the review is using, signalling that either NBNCo was lying about them (which it wasn’t) or the strategic review is deliberately using misleading figures to suit an agenda.
I won’t mince words here as it’s clear that many aspects of the review have a political agenda behind them. The $28 billion blowout in the FTTP NBN seems to have been calculated to make the $11 billion increase in peak funding for the Liberals’ NBN seem a lot more palatable, even though its cost is now basically the same as the original costings for the FTTP NBN. Honestly we should have expected this when the majority of the new NBNCo board is staffed with former executives from telcos that have large investments in hybrid fibre-coaxial (HFC) networks, something the new NBN will be on the hook for (even though the Liberals seem to think they’ll get those for free).
In short the review is laughable, an exercise in fudging numbers to suit a political agenda that has absolutely zero grounding in reality. The end result is that we, the Internet users of Australia, will get horrendously screwed with outdated technology that will have to be replaced eventually anyway, and at a cost that will far exceed that of a pure FTTP solution. Of course it’s now clear that it was never Turnbull’s intention to do a fair and honest review; he was only ever interested in being given evidence to support his skewed view of technology.
Convincing the wider tech community that the FTTN NBN is a bad idea isn’t exactly a hard task as anyone who’s worked in technology understands the fundamental benefits of a primarily fibre network over a copper one. Indeed even non-technical users of Australia’s current broadband network are predominately in favour of the fully fibre solution, knowing that it will lead to a better, more reliable service than anything the copper network can deliver. There was a glimmer of hope back in September when Turnbull commissioned NBNCo to do a full report on the current rollout and how it would compare to his FTTN solution, however his reaction to a recent NBNCo report seems to show otherwise.
The document in question is a report that NBNCo prepared during the caretaker period that all government departments enter prior to an election. The content of the document has been rather devastating to the Coalition’s stance that FTTN can be delivered faster and cheaper with NBNCo stating in no uncertain terms that they would not be able to meet the deadlines promised before the election. Additionally many of the fundamental problems with the FTTN solution were also highlighted which should be a very clear signal to Turnbull that his solution is simply not tenable, at least in its current form.
Turnbull has done as much as he can to discredit this report however, taking the stance that it’s heavily outdated, having been written over 6 months ago. This is clearly not the case as there’s ample evidence it was written recently, even if it was during the recent caretaker period (where, you could potentially argue, NBNCo was still under the influence of Labor). In all honesty though the time at which it was written is largely irrelevant as its criticisms have been echoed by myself and other IT pundits for as long as the Coalition has spruiked their FTTN policy.
Worse still the official NBNCo report, which Turnbull has previously stated he’ll bind himself to, was provided to him almost 2 weeks ago and hasn’t seen the light of day since. It was even brought up during question time at a recent sitting of parliament and Turnbull was defiant in his stance to not release it. We’ll hopefully be getting some insight into what the report actually contains tomorrow as a redacted version will be made available to some journalists. For someone who wanted a lot more transparency from NBNCo he is being awfully hypocritical as, if he was right about FTTN being cheaper and faster to implement, the report would have supported that view. The good money then is on the report being far more damning of the Coalition’s policy than Turnbull had hoped it’d be.
If Turnbull wants to keep any shred of credibility with technically inclined voters he’s going to have to fess up sooner or later that the Coalition’s policy was a non-starter and that pursuing the FTTP solution is the right way to go. Heck he doesn’t even have to do the former if he doesn’t want to, but putting his stamp on the FTTP NBN would go a long way to undoing the damage to his reputation as the head of technology for Australia. I guess we’ll know more about why he’s acting the way he is tomorrow.
I had given up on writing about BitCoin because of the rather toxic community that seemed to appear whenever I wrote about it. They never left comments here, no, instead they’d cherry pick my articles, never attempt to read any of my further writings on the subject and then label me as a BitCoin cynic. It had gotten to the point where I simply couldn’t stomach most BitCoin articles because of the ensuing circlejerks that would follow, where any valid criticism would be met with derision usually only found in Call of Duty matches. But the last couple of months of stratospheric growth and volatility have had me pulling at my self imposed reins, just wanting to put these zealots in their place.
Since I can’t find anything better to post about it seems that today will be that day.
The last time I posted about BitCoins they were hovering around $25 (this was at the start of the year, mind you), a price that hadn’t been seen for a long time previously. It began a somewhat steady trend upwards after that before making another great jump into the $100~$200 range, something I long expected to be completely unsustainable. It managed to stay around that area for a long time, but the end of October saw it begin an upward trend that didn’t show many signs of stopping until recently, and the past couple of weeks have been an insane roller coaster ride of fluctuating prices that no currency should ever undergo.
Much of the initial growth was attributed to the fact that China was becoming interested in BitCoins and thus there was a whole new market of capital being injected into the economy. Whilst this might have fuelled the initial bump we saw back at the end of October, the resulting stratospheric rise, where the price doubled in under a month, could simply not be the result of new investors buying into the market. The reasoning behind this is that transaction volumes did not escalate at a similar pace, meaning those ridiculously unsustainable growth rates were driven by speculative investors looking to increase the value of their BitCoin portfolios, not a growing investor base.
The anointed champions of BitCoin won’t have a bar of that however, even when the vast majority of forums were flooded with people crying because they cashed out at $400, lamenting the fact they could have had 3 times more if they’d only waited another week. As I’ve said dozens of times in the past, the fact that the primary use of BitCoin right now is speculative investment is antithetical to its aspirations of becoming a true currency. Indeed the fact that it’s deflationary means it inherently encourages this kind of behaviour rather than acting as a medium for the transfer of wealth between parties. By contrast the inflationary aspect of fiat currencies, which BitCoiners seem to hate for some reason, encourages people to spend them rather than simply hang on to them.
The flow on effect of this rampant speculation is the wild fluctuations in value which make using it incredibly difficult for businesses. Indeed any business that was selling goods for BitCoin prior to the current crash has lost money on those sales simply because of the fluctuations in price. Others would argue that retailers typically come out ahead because the price of BitCoin trends upwards, but history has shown you simply can’t rely on that; unless you exchange your BitCoins for hard currency immediately after each purchase you’re likely to hit a period of instability where you end up on the losing end of the equation.
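To put some rough numbers on that, here’s a back-of-the-envelope sketch of a retailer’s exposure. The figures are purely illustrative, not actual market data, but they show how quickly a “profitable” sale turns into a loss if the exchange rate moves before the coins are cashed out.

```python
# Back-of-the-envelope sketch of a retailer's exchange-rate exposure when
# accepting BitCoin. All figures are illustrative, not real market data.

item_price_aud = 500.0        # what the retailer needs to recover in AUD
btc_price_at_sale = 1000.0    # assumed AUD/BTC rate when the sale is made
btc_price_at_cashout = 700.0  # assumed rate when the coins are converted

btc_received = item_price_aud / btc_price_at_sale
aud_recovered = btc_received * btc_price_at_cashout
loss = item_price_aud - aud_recovered

print("BTC received at sale: %.4f" % btc_received)          # 0.5000 BTC
print("AUD recovered at cash-out: %.2f" % aud_recovered)    # 350.00
print("Shortfall from holding through the dip: %.2f" % loss)  # 150.00
```

A 30% slide between sale and settlement, which is hardly unheard of given recent volatility, wipes out far more than any sane retail margin.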
Whilst I’m sure I’ve lost all the True BitCoin Believers at this point I feel I have to make the point that I think the idea of cryptocurrencies is great, as they’d be an excellent alternative method for transferring wealth across the world. BitCoin has some fundamental issues, many of which can’t be solved by a simple workaround here or there, and so whilst I won’t advocate its wholesale abandonment I would encourage the development of alternatives to address those issues. Unfortunately none have been particularly forthcoming, but as BitCoin continues to draw more attention to itself I can’t imagine they’re too far off and then hopefully we can have the decentralized method of transferring wealth all BitCoiners like to talk about.
You’d think that with my time spent as a retail worker I’d have some sense of loyalty to real world shop fronts, knowing that there’s value in a good salesperson’s opinion on what product best suits my needs. There’s something to that, and indeed should I find myself out of my depth or simply not wanting to do the research myself I’ll head on into a store, but my primary means of shopping is still via online merchants. Whilst it’s hard to argue with the convenience of the majority of the experience, the last mile delivery system is somewhat lacklustre, usually requiring me to either truck out to a depot, abscond from work early or hope that my darling wife will be able to break away from her studies so the goods can be delivered.
Before anyone suggests getting things delivered to my work I’ll have to say that my experience in doing so has been rather mixed. In the past I’ve worked at places where the delivery guys came right up to our reception desk to deliver things and this worked great. However as I graduated to bigger and better places that had delivery docks my lowly deliveries often got lost in the works, sometimes for days on end, with no way for me to track them. Thus I’ve since refrained from doing so, as at least when I get things delivered to my home I’ll either still have tracking from the courier or a note from Australia Post telling me where to pick them up. However if the latest innovation from Australia Post has anything to do with it I might not need to rely on either of those processes again thanks to the introduction of Parcel Lockers.
For the uninitiated, Parcel Lockers are a free service from Australia Post. You sign up for one at their website, select the location where you’d like your parcels delivered and you’ll receive a shipping address which you can have your packages sent to. Then when your package arrives you’ll receive an SMS with a code in it and you can go to the locker in question and retrieve your package. Initially they were only available in a few select locations, the middle of Canberra being one of them, but they’ve since spread to other mid to large sized post offices although their availability is still not ubiquitous.
After forgetting that I had signed up for one for the better part of 3 months I finally decided to give them a go to see how the process would pan out. I figured I’d keep it simple so I ordered a book from Book Depository that I’ve been eyeing off for ages (Critical Path if you’re wondering, and yes I’m trying to do exactly that) so that if I didn’t get it there’d be no great loss. About 2 weeks after placing the order I got my message saying a parcel was ready for me to pick up. Picking it up was painless, just punch in the code and the parcel locker opens for you, the screen even tells you where to look if it’s that hard for you to notice it opening. That’s it, nothing more to it.
Of course there are some limitations to this service, as you can see from the picture above. You can’t get anything you want delivered to them as they don’t have sizes to accommodate everything and I’d hazard a guess that they’d send you a message to come collect it from somewhere else should you attempt to do so. Additionally, since space in these lockers is obviously at something of a premium, they’ll get aggressive should you fail to pick your parcel up swiftly (I forgot to get mine on the day and was told to pick it up before the afternoon 2 business days later). The simple solution to this is to get more of them, something which Australia Post appears to be doing.
Ultimately what I’d love is my very own parcel locker style device at my house that deliveries could be made to. I’d be happy to pay for the privilege too as the amount of convenience it would deliver would exceed even that of the current parcel lockers. Failing that I’d likely be just as happy if my local post office had one, as whilst the current arrangement is somewhat convenient it’s only just above going to my local post office since I don’t live anywhere near one of these lockers (and indeed only recently started working within walking distance of one). Unfortunately there doesn’t seem to be a roadmap for when they’ll appear in other locations but I can’t imagine this is something Australia Post will want to limit just to the bigger distribution centres.
Canberra is a strange little microcosm. If you live here chances are you either work directly for the government as a member of the public service or you’re part of an organisation that services said government. This is especially true in the field of IT as anyone with a respectable amount of IT experience can make a very good living working for any of the large departments’ headquarters. I have made my IT career in this place and I’ve spent much of it lusting after the cutting edge of technology whilst dealing with the realities of what large government departments actually need to function. As long time readers will be aware I’ve been something of a cloud junkie for a while now but not once have I been able to use it at my places of work, and there’s a good reason for that.
Not that you’d know that if you heard the latest bit of rhetoric from the current government, which has criticised the current AGIMO APS ICT Strategy for providing only “notional” guidelines for using cloud based services. Whilst I’ll agree that the financial implications are rather cumbersome (although this is true of any procurement activity within the government, as anyone who’s worked in one can tell you) what annoyed me was the idea that the security requirements were too onerous. The simple fact of the matter is that many government departments have regulatory and legal obligations not to use overseas cloud providers due to legislation that restricts Australian government data from travelling outside our borders.
The technical term for this is data sovereignty and the vast majority of Australia’s large government departments are legally bound to keep all their services, and the data they rely on, on Australian soil. The legislation is so strict in this regard that even data that’s not technically sensitive, like say specifications of machines or network topologies, in some cases can’t be given to external vendors and must instead only be inspected on site. The idea then that governments could take advantage of cloud providers, most of which don’t have availability zones here in Australia, is completely ludicrous and no amount of IT strategy policy can change that.
Of course cloud providers aren’t unaware of these issues, indeed I’ve met with several people behind some of the larger public clouds on this very topic, and many of them are bringing availability zones to Australia. Indeed Amazon Web Services has already made itself available here and Microsoft’s Azure platform is expected to land on our shores sometime next year. The latter is probably the more important of the two as, if the next AGIMO policy turns out the way it’s intended, the Microsoft cloud will be the de facto solution for light user agencies thanks to the heavy use of Microsoft products in those places.
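As an illustration of what keeping data onshore looks like in practice, the sketch below pins a storage bucket to AWS’s Sydney region. It uses the boto3 SDK purely as an example of region pinning; the bucket name is a hypothetical placeholder and an Azure deployment would use its own equivalent of region selection.

```python
# Sketch: pinning cloud resources to an Australian region so data never
# leaves the country. Uses boto3 as an illustration; the bucket name is a
# hypothetical placeholder.
import boto3

SYDNEY = "ap-southeast-2"  # AWS's Australian region

s3 = boto3.client("s3", region_name=SYDNEY)

# Explicitly constrain the bucket to the Sydney region rather than relying
# on an account-level default.
s3.create_bucket(
    Bucket="example-department-records",  # hypothetical name
    CreateBucketConfiguration={"LocationConstraint": SYDNEY},
)

# Confirm where the bucket actually lives before putting any data in it.
location = s3.get_bucket_location(Bucket="example-department-records")
print(location["LocationConstraint"])  # expected: ap-southeast-2
```

Of course pinning a region only satisfies the letter of the requirement; the harder questions about who can access that data and from where still need to be answered before most departments would sign off on it.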
Whilst I might be a little peeved at the rhetoric behind the review of the APS ICT Strategy I do welcome it as, even though it was only written a couple of years ago, it’s still in need of an update due to the heavy shift towards cloud services and user centric IT we’ve seen recently. The advent of Australian availability zones will mean that the government agencies most able to take advantage of cloud services will finally be able to do so, especially with AGIMO policy behind them. Still it will be up to the cloud providers to ensure their systems can meet the requirements of these agencies and there’s every possibility that they still won’t be enough for some departments to take advantage of.
We’ll have to see how that pans out, however.
Ask any computer science graduate about the first programmable computer and the answer you’ll likely receive is the Difference Engine, a conceptual design by Charles Babbage. Whilst the design wasn’t entirely new (that honour goes to J. H. Müller, who wrote about the idea some 36 years earlier) Babbage was the first to obtain funding to create such a device, although he never managed to get it to work despite blowing the equivalent of $350,000 in government money trying to build it. Still, modern day attempts at creating the engine with the tolerances of the time period have shown that such a device would have worked had he completed it.
But Babbage’s device wasn’t created in a vacuum, it built on the wealth of mechanical engineering knowledge from the decades that preceded him. Whilst there was nothing quite as elaborate as his Analytical Engine there were some marvellous pieces of automata, ones that are almost worthy of the title of programmable computer:
The fact that this was built over 240 years ago says a lot about the ingenuity contained within it. Indeed the fact that you’re able to code your own message into The Writer, using the set of blocks at the back, is what elevates it above other machines of the time. Sure there were many other automata that were programmable in some fashion, usually by changing a drum, but this one allows configuration on a scale they simply could not achieve. Probably the most impressive thing about it is that it still works today, something which many machines of today won’t be able to claim in 240 years’ time.
Whilst a machine of this nature might not be able to lay claim to the title of first programmable computer you can definitely see the similarities between it and its more complex cousins that came decades later. If anything it’s a testament to the additive nature of technological development, each advance building upon the foundations of those that came before it.
The resignation of the National Broadband Network board was an expected move given the current government’s high level of criticism of the project. Of course while I, and many other technically inclined observers, disagreed with the reasons cited for Turnbull’s request for their resignations, I understood that should we want the NBN in the form we (the general public) wanted it then it was a necessary move, one that would allow the Liberal party to put their stamp on the project. However what followed seemed to be the worst possible outcome, one that could potentially see the NBN sent down the dark FTTN path that would doom Australia to remaining an Internet backwater for the next few decades.
They hired ex-Telstra CEO Ziggy Switkowski.
For anyone who lived through his tenure as the head of Australia’s largest telecommunications company his appointment to the head of the NBN board was a massive red flag. It would be enough to be outraged at his appointment over the implementation of data caps and a whole host of other misdeeds that have plagued Australia’s Internet industry since his time in office, but the real crux of the matter is that since his ousting at Telstra he hasn’t been involved in the telecommunications industry for a decade. Whatever experience he had is now long outdated and whilst I’m thankful that his tenure as head of the board is only temporary (until a new CEO is found) the fact that he has approved other former Telstra executives to the NBN board shows that even a small amount of time there could have dire implications.
News came yesterday however that Turnbull has appointed Simon Hackett, of Internode fame, to the NBN board. In all honesty I never expected this to happen as, whilst there were a few grass roots campaigns pushing for it, I didn’t think they’d have the required visibility to make it happen. However Hackett is a well known name in the Australian telecommunications industry and it’s likely that his reputation was enough for Turnbull to consider him for the position. Best of all he’s been a big supporter of the FTTH NBN since the get go and with this appointment will be able to heavily influence the board’s decisions about the future of Australia’s communication network.
Whilst I was always hopeful that a full review of the feasibility of the NBN would come back with resounding support for a FTTH solution, this appointment will almost certainly guarantee such an outcome. Of course Turnbull could still override it but with his staunch stance of going with the review’s decision it’s highly unlikely he’d do that, lest he risk some (even more) severe political backlash. The most likely change I can see coming though is that a good chunk of the rollout, mostly for sites where there are no current contracts, will fall to Telstra. Whilst I’m a little on the fence about this (they’d be double dipping in that they’d get paid to build the new network and for disconnecting their current customers) it’s hard to argue that Telstra isn’t a good fit for the job. I guess the fact that they won’t end up owning it in the end does make it a fair bit more palatable.
So hopefully with Hackett’s appointment to the NBNCo board we’ll have a much more technically inclined view presented at the higher levels, one that will be able to influence decisions to go down the right path. There are still a few more board members to be appointed and hopefully more of them are in the same vein as Hackett as I’d rather not see the board fully staffed with people from Telstra.
I’ve worked with a lot of different hardware in my life, from the old days of tinkering with my Intel 80286 through to esoteric Linux systems running on DEC tin, until I, like everyone else in the industry, settled on x86-64 as the de facto standard. Among the various platforms I was happy to avoid (including such lovely things as Sun SPARC) was Intel’s Itanium range, as its architecture was so foreign from anything else that whatever you were trying to do, outside of building software specifically for that platform, was doomed to failure. The only time I ever came close to seeing it deployed was on the whim of a purchasing manager who needed guaranteed 100% uptime, at least until they realised the size of the cheque they’d need to sign to get it.
If Intel’s original dream was to be believed then this post would be coming to you care of their Itanium processors. You see back when it was first developed everything was still stuck in the world of 32bit and the path forward wasn’t looking particularly bright. Itanium was meant to be the answer to this; with Intel’s brand name and global presence behind it we would hopefully see all applications migrate to the latest and greatest 64bit platform. However the complete lack of backwards compatibility with currently developed software and applications meant adopting it was a troublesome exercise, which was a death knell for any kind of consumer adoption. Seeing this AMD swooped in with its backwards compatible x86-64 architecture which proceeded to spread to all the places that Itanium couldn’t, forcing Intel to adopt the standard in its consumer line of hardware.
Itanium refused to die however, finding a home in the niche high end market due to its redundancy features and solid performance for optimised applications. Still, the number of vendors supporting the platform dwindled from their already low numbers, eventually falling to HP as the only real supplier of Itanium hardware in the form of their NonStop server line. It wasn’t a bad racket for them to keep up though, considering the total Itanium market was something on the order of $4 billion a year (and with only 55,000 servers shipped per year you can see how much of a premium they attract). Still all the IT workers of the world have long wondered when Itanium would finally bite the dust and it seems that day is about to come.
HP has just announced that it will be transitioning its NonStop server range from Itanium to x86, effectively putting an end to the only sales channel Intel had left for the platform. What will replace it is still up in the air but it’s safe to assume it will be another Intel chip, likely one from the Xeon line that shares many of the features Itanium had without the incompatible architecture. Current Itanium hardware is likely to stick around for an almost indefinite amount of time however, due to the places it has managed to find itself in, much to the dismay of system administrators everywhere.
In terms of accomplishing its original vision Itanium was an unabashed failure, never finding the consumer adoption it so desired and never becoming the herald of 64bit architecture. Commercially though it was somewhat of a success thanks to the features that made it attractive to the high end market, but even then it only ever captured a small fraction of total worldwide server sales, barely enough to make it a viable platform for anything but wholly custom solutions. The writing was on the wall when Microsoft said that Windows Server 2008 was the last version to support it and now, with HP bowing out, the death clock for Itanium has begun ticking in earnest, even if the final death knell won’t come for the better part of a decade.