Posts Tagged 'infrastructure'


Will Open Compute Ever Trickle Down?

When Facebook first announced the Open Compute Project it was a very exciting prospect for people like me. Ever since virtualization became the de facto standard for servers in the data center, hardware density has been the name of the game. Client after client I worked for was always seeking out ways to reduce their server fleet's footprint, both by consolidating through virtualization and by taking advantage of technology like blade servers. However whilst the past half decade has seen a phenomenal increase in the amount of computing power available, and thus an increase in density, there hasn't been another blade-style revolution. That was until Facebook went open kimono on their data center strategies.

The designs proposed by the Open Compute Project are pretty radical if you're used to traditional computer hardware, primarily because they're so minimalistic and because they expect a 12.5V DC input rather than the 240/120V AC that's typical of modern data centers. Other than that they look very similar to your typical blade server, and indeed the first revisions appeared to achieve pretty comparable densities. The savings at scale were tremendous however, as you gained a lot of efficiency by not running a power supply in every server, and the simple design meant cooling was greatly improved. Apart from Facebook though I wasn't aware of any other big providers utilizing ideas like this until Microsoft announced today that it was joining the project and contributing its own designs to the effort.

On the surface they look pretty similar to the current Open Compute standards, although the big differences seem to come from the chassis. Instead of doing away with a power supply completely (like the current Open Compute servers advocate) it has a dedicated power supply in the base of the chassis for all the servers. Whilst I can't find any details on it I'd expect this means it could operate in a traditional data center with a standard AC power feed rather than requiring the more specialized 12.5V DC. At the same time the density they can achieve with their cloud servers is absolutely phenomenal, being able to cram 96 of them into a standard rack. For comparison the densest blade system I've ever supplied would top out at 64 servers and most wouldn't go past 48.
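
To put that density gap into rough numbers, here's a quick back-of-envelope comparison using the figures above (the 42U rack height is my assumption for the per-rack-unit figures, not something from the published specs):

```python
# Back-of-envelope density comparison using the server counts quoted above.
# The 42U rack height is an assumed figure for illustration only.
RACK_UNITS = 42

systems = {
    "Microsoft cloud server chassis": 96,
    "Densest blade system I've supplied": 64,
    "Typical blade system": 48,
}

for name, servers in systems.items():
    print(f"{name}: {servers} servers per rack (~{servers / RACK_UNITS:.1f} per rack unit)")
```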

This then begs the question: when will we start to see server systems like this trickle down to the enterprise and consumer market? Whilst we rarely have the requirements for the scales at which these servers are typically used I can guarantee there's a market for servers of this nature, as enterprises continue on their never-ending quest for higher densities and better efficiency. Indeed this feels like something the larger server manufacturers should pursue since, if these large companies are investing in developing their own hardware platforms, it shows there's a niche that hasn't yet been filled.

Indeed if the system can also accommodate non-compute blades (like the Microsoft one shows with the JBOD expansion) such ideas would go toe to toe with system-in-a-box solutions like the Cisco UCS which, to my surprise, quickly pushed its way to the #2 spot for x86 blade servers last year. Of course there are already similar systems on the market from others, but in order to draw people away from that platform other manufacturers are going to have to offer something more and I think the answer to that lies within the Open Compute designs.

If I'm honest I think the real answer to the question posited in the title of this blog is no. Whilst it would be possible for anyone working at Facebook or Microsoft levels of scale to engage in something like this, unless a big manufacturer gets on board Open Compute based solutions just won't be feasible for the clients I service. It's a shame because I think there are some definite merits to the platform, something which is validated by Microsoft joining the project.


Microsoft’s Internet Connection is the Least of Your Worries.

After spending a week deep in the bowels of Microsoft's premier tech conference and writing about it breathlessly for Lifehacker Australia you'd be forgiven for thinking I'm something of a Microsoft shill. It's true that I think the direction they're going in with their infrastructure products is pretty spectacular and my excitement about those developments is genuine. However if you've been here for a while you'll know that I'm also among their harshest critics, especially when they do something that's drastically out of line with my expectations as one of their consumers. Still, I believe in giving credit where it's due, and a recent PA Report article has brought Microsoft's credentials in one area into question when they honestly shouldn't be.

Photo: a slide from the Windows Azure Internals session at TechEd North America

The article I’m referring to is this one:

I’m worried that there are going to be a few million consoles trying to dial into the home servers on Christmas morning, about the time when a mass of people begin to download new games through Microsoft’s servers. Remember, every game will be available digitally day and date of the retail version, so you’re going to see a spike in the number of people who buy their Xbox One games online.

I’m worried about what happens when that new Halo or Call of Duty is released and the system is stressed well above normal operating conditions. If their system falls, no matter how good our Internet connections, we won’t be able to play games.

Taken at face value this appears to be a fair comment. We can all remember times when the Xbox Live service came down in a screaming heap, usually around Christmas time or when a large release happened. Indeed a quick Google search shows there have been a couple of outages in recent memory, although digging deeper into them reveals that they were usually part of routine maintenance and only affected small groups of people at a time. With all the other criticism being levelled at Microsoft of late (most of which I believe is completely valid) it's not unreasonable to question their ability to keep a service of this scale running.

However as the title of this post alludes to I don’t think that’s going to be an issue.

The picture shown above is from the Windows Azure Internals session by Mark Russinovich which I attended last week at TechEd North America. It details the current infrastructure that underpins the Windows Azure platform, which powers all of Microsoft's sites including the Xbox Live service. If you have a look at the rest of the slides from the presentation you'll see how far that architecture has come since it was first introduced 5 years ago, when the over-subscription rates were much, much higher across the entire Azure stack. What that meant was that when something big happened the network simply couldn't handle it and caved under the pressure. The current generation of the Azure infrastructure however is far less oversubscribed and has several orders of magnitude more servers behind it. With that in mind it's far less likely that Microsoft will struggle to service large spikes like they have in the past, as the capacity they have on tap is just phenomenal.
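
To make the oversubscription point concrete, here's a toy illustration. The ratios are entirely hypothetical (Microsoft doesn't publish them); the point is just that the lower the oversubscription, the bigger the spike a fixed fleet can absorb:

```python
# Toy model of oversubscription: what fraction of total subscribed demand can
# be served simultaneously at a given contention ratio. Both ratios below are
# hypothetical, purely to illustrate why lower oversubscription handles spikes better.
def peak_capacity_fraction(oversubscription_ratio: float) -> float:
    """Fraction of subscribed demand that can be active at the same time."""
    return 1.0 / oversubscription_ratio

for label, ratio in [("early Azure (hypothetical)", 20.0),
                     ("current Azure (hypothetical)", 4.0)]:
    print(f"{label}: 1:{ratio:.0f} oversubscribed -> "
          f"{peak_capacity_fraction(ratio):.0%} of demand served at once")
```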

Of course this doesn't alleviate the issues with the always/often-on DRM or the myriad other things people are criticizing the Xbox One for, but it should show you that worrying about Microsoft's ability to run a reliable service shouldn't be one of them. Granted, I'm just approaching this from an infrastructure point of view and it's entirely possible for the Xbox Live system to have some systemic issue that will cause it to fail no matter how much hardware they throw at it. I'm not too concerned about that however as Microsoft isn't your run-of-the-mill startup that's just learning how to scale.

I guess we’ll just have to wait and see how right or wrong I am.


I Ain’t Got Time For You, Alpha Nerd.

I often find myself trusted with doing things I've never done before thanks to my history of delivering, but I always make people well aware of my inexperience in such areas before I take them on. I do this because I know I'm not the greatest engineer/system administrator/coder around, but I do know that, given enough time, I can deliver something that's exactly what they required. It's actually an unfortunate manifestation of impostor syndrome whereby I'm constantly self-assessing my own skills, wondering if anything I've done was really that good or simply the product of all the people I worked with. Of course I've worked with people who know they are the best at what they do, even if the reality doesn't quite match up to their own self-image.


Typically these kinds of people take one of two forms, the first of which I'll call The Guns. Guns are awesome people: they know everything there is to know about their job and they're incredibly helpful, a real treasure for the organisation. I'm happy to say that I've encountered more of these than the second type and they're in no small part responsible for a lot of the things that I know today. They are usually vastly under-appreciated for their talents however as, since they enjoy what they do to such a great extent, they don't attempt to upset the status quo and instead toil away in relative obscurity. These are the kinds of people I have infinite amounts of time for and they're usually the ones I turn to when I need help.

Then there’s the flip side: the Alpha Nerds.

These guys are typically responsible for some part of a larger system and to their credit they know it inside and out. I'd say on average about half of them got to that level of knowledge simply by being there for an inordinate amount of time and through that end up being highly valuable because of their vast corporate knowledge. The problem with these guys however, as opposed to The Guns, is that they know this and use it to their advantage at almost every opportunity they get. Simple change to their system? Be prepared to do a whole bunch of additional work for them before it'll happen. A problem that you're responsible for but is out of your control due to other arrangements? They'll drill you on it in order to reinforce their status with everyone else. I can't tell you how detrimental these people are to the organisation, even if their system knowledge and expertise appears invaluable.

Of course this delineation of Guns and Alpha Nerds isn't a hard and fast line, there's a wide spectrum between the two extremes, but there is an inflexion point where a Gun starts to turn Alpha and the benefits to the organisation start to tank. Indeed I had such a thing happen to me during my failed university project, where I didn't notice that a Gun was turning Alpha on me, burning them out and leaving the project in a state where no one else could work on it even if they wanted to. Whilst the blame still rests solely on my shoulders for failing to recognise that, it still highlights how detrimental such behaviour can be when technical expertise isn't coupled with a little bit of humility.

Indeed if your business is built on products based on the talents of said people then it's usually to your benefit to remove Alpha Nerds from your team, even if they are among your most talented people. This is especially true if you're trying to invest in developing people professionally, as typically Alphas will end up being the de facto contacts for the biggest challenges, stifling the skill growth of other members of the team. Whilst they might be worth 2.5 times your average performer you're likely limiting the chances of the team being more productive than it currently is, quite possibly to the tune of much more than what the Alpha is capable of delivering.

Like I said before though I'm glad these kinds of people tend to be less common than their Gun counterparts. I believe this is because during the nascent stages of your career you're likely to run up against an Alpha and see the detrimental impact they have. Knowing that, you're much more likely to work against becoming like them, and should you become an expert in your chosen area you'll make a point of being approachable. Some people fail to do that however and proceed to make our lives a lot more difficult than they should be, but I'm sure this isn't unique to IT and is innate to organisations both big and small.

 

Seems OnLive Couldn’t Handle Being a Niche Product.

It's no secret that I've never been much of a fan of the OnLive service. Whilst my initial scepticism came from my roots as someone who didn't have decent Internet for the vast majority of his life while everyone else in the world seemed to, since then I've seen fundamental problems with the service that I felt would severely hamper adoption. Primarily it was the capital-heavy nature of the beast, requiring a large number of high end gaming PCs to be always on and available even when there was little demand for them. That and the input lag issue would have made many games (FPS being the most prominent genre) nearly unplayable, at least in my mind. Still I never truly believed that OnLive would struggle that much as there definitely seemed to be a lot of people eager to use the service.

For once though I may have been right.

OnLive might have been a rather capital-intensive idea but it didn't take long for them to build out a company that was valued in the $1 billion range, no small feat by any stretch of the imagination. It was at that point that I started doubting my earlier suspicions, as that level of value doesn't come without some solid financials behind it, but it seems that after that dizzying high (and most likely in reaction to Sony's acquisition of their competitor Gaikai for much less than that) they only had one place to go and that was down:

We’re hearing from a reliable source that OnLive’s founder and CEO Steve Perlman finally decided to make an exit — and in the process, is screwing the employees who helped build the company and brand. The cloud gaming company reportedly had several suitors over the last few years (perhaps including Microsoft) but Perlman reportedly held tight control over the company, apparently not wanting to sell or share any of OnLive’s secret sauce.

Our source tells us that the buyer wants all of OnLive’s assets — the intellectual property, branding, and likely patents — but the plan is to keep the gaming company up and running. However, OnLive management cleaned house today, reportedly firing nearly the entire staff, and we hear it was done just to reduce the company’s liability, thus reducing employee equity to practically zero. Yeah, it’s a massive dick move.

We've seen this kind of behaviour before in companies like the ill-fated MySpace and whilst the company will say many things about why they're doing it, essentially it makes the acquisition a lot more attractive for the buyer due to the lower ongoing costs. Whoever this well-funded venture capitalist is they don't seem to be particularly interested in the company of OnLive itself, more the IP and the massive amount of infrastructure that's been built up over the course of the last 3 years. No matter how the service is doing financially those things have some intrinsic value behind them, and although the new mysterious backer has committed to keeping the service running I'm not sure how much faith can be put in those words.

Granted there are services that were so costly to build that the companies who built them folded, yet the subsequent owner who acquired everything at a fire-sale price went on to make a very profitable service (see Iridium Communications for a real-world example of this). However the figures we've been seeing on OnLive since this story broke don't paint a particularly rosy picture for the health of the service. When you have a fleet of 8,000 servers servicing at most 1,600 users that doesn't seem sustainable in any way I can think of, unless the users are paying through the nose for the service (which they're not, unfortunately). It's possible that the massive round of layoffs coupled with a reduction in their current infrastructure base might see OnLive become a profitable enterprise once again, but I'll have to say that I'm still sceptical.
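
To put some very rough numbers around why that ratio looks so dire, here's a quick back-of-envelope sketch. The server count and peak concurrent users come from the reports above; the per-server cost, per-user revenue and subscriber ratio are purely hypothetical placeholders, not anything OnLive has disclosed:

```python
# Back-of-envelope sustainability check on the reported OnLive figures.
# Server count and peak concurrent users are from the reports quoted above;
# the cost and revenue numbers are hypothetical placeholders for illustration.
servers = 8000
peak_concurrent_users = 1600

monthly_cost_per_server = 150.0        # hypothetical: power, space, amortised hardware
monthly_revenue_per_user = 10.0        # hypothetical: roughly a subscription-tier price
subscribers_per_concurrent_user = 10   # hypothetical ratio of paying users to peak players

utilisation = peak_concurrent_users / servers
monthly_cost = servers * monthly_cost_per_server
monthly_revenue = peak_concurrent_users * subscribers_per_concurrent_user * monthly_revenue_per_user

print(f"Peak fleet utilisation: {utilisation:.0%}")
print(f"Hypothetical monthly cost:    ${monthly_cost:,.0f}")
print(f"Hypothetical monthly revenue: ${monthly_revenue:,.0f}")
```

Even with generous assumptions in OnLive's favour the gap is stark, which is the point the raw 8,000-to-1,600 ratio makes on its own.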

Apart from the monthly access fee requirement being dropped none of the issues that I and countless other gamers have highlighted have been addressed, and their niche of people who want to play high end games without the cost (and don't own a console) just isn't big enough to support the idea. I could see something like this service being an also-ran for a large company, much like Sony is planning to do with Gaikai, but as a standalone enterprise the cost of establishing the infrastructure required to attract a sufficient user base is just too high. This is not even touching on the input lag or the ownership/DRM issues either, both of which have been shown to be deal breakers for many gamers contemplating the service.

It's a bit of a shame really as whilst I love being right about these things I'd much rather be proven wrong, especially when it comes to non-traditional ideas like OnLive. It's entirely possible that their new benefactor could turn things around, but they haven't done a lot to endear themselves to the public or their current employees, so their battle is going to be very much uphill from now on. I'm still willing to be proven wrong on this one, but as time goes on it seems less and less likely that it'll happen and that's a terrible thing for my already inflated ego.

iiNet Buys Internode, Australia’s Broadband Future Looks Brighter.

Ever since I've been able to get broadband Internet I've only had the one provider: Internode. Initially it was just because my housemate wanted to go with them, but having zero experience in the area I decided to go along with him. I think the choice was partially due to his home town being Adelaide, but Internode also had a reputation for being a great ISP for geeks and gamers like us. Fast forward 6 years and you can still find me on an Internode plan simply because the value-add services they provide are second to none. Whilst others may be cheaper overall none can hold a candle to all the extra value that Internode provides, which I most heartily indulge in.

In Internode's long history it's made a point of being one of the largest privately owned Internet service providers (ISPs) in Australia. This is no small feat as the amount of capital required to become an ISP, even in Australia, is considerable. Internode's reputation however afforded it the luxury of many geeks like myself chomping at the bit to get their services in our area, guaranteeing them a decent subscriber base wherever there was even a slight concentration of people passionate about IT and related fields. In all honesty I thought Internode would continue to be privately owned for a long time to come, with the only possible change being them going public when they wanted to pursue more aggressive growth strategies.

Today brings news however that they will be bought out by none other than iiNet:

In a conference call this afternoon discussing the $105 million takeover announcement, Hackett said that because of NBN Co’s connectivity virtual circuit charge, and the decision to have 121 points of interconnect (POI) for the network, only an ISP of around 250,000 customers would have the scale to survive in an NBN world. With 260,000 active services, Internode just makes the cut. He said the merger was a matter of survival.

“The size of Internode on its own is right on the bottom edge of what we’ve considered viable to be an NBN player. If you’re smaller than that, the economics don’t stack up. It would be a dangerous thing for us to enter the next era being only just quite big enough,” he said.

Honestly when I first heard the news I had some very mixed feelings about what it would entail. iiNet, whilst being a damn fine provider in their own right, isn't Internode and their value-add services still lag behind those offered by Internode. That said, if I was unable to get Internode in my chosen area they would be the second ISP I would consider going for, having numerous friends who have done so. I figured I'd reserve my judgement until I could do some more research on the issue and, as it turns out, I and all of Internode's customers really have nothing to worry about.

Internode as it stands right now will continue on as it does but will be wholly owned by iiNet. This means that they can continue to leverage their brand identity (including their slightly premium-priced, value-add business model) whilst gaining the benefit of the large infrastructure that iiNet has to offer. The deal then seems to be quite advantageous for both Internode and iiNet, especially with them both looking towards an NBN future.

That leads onto another interesting point that's come out of this announcement: Internode didn't believe it could economically provide NBN services at its current level of scale. That's a little scary when one of the largest independent ISPs (with about 3% market share if I'm reading this right) doesn't believe the NBN is a viable business model for them on their own. Whilst they'll now be able to provide such services thanks to the larger user base from iiNet, it does signal that nearly all smaller ISPs are going to struggle to provide NBN services into the future. I don't imagine we'll end up in a price-fixing oligopoly but it does seem to mark the beginning of the end for those who can't provide an NBN connection.
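
As a quick sanity check on that 3% figure, here's the rough arithmetic. The 260,000 active services and the ~250,000 viability threshold come from the takeover announcement quoted above; the implied total market size is simply derived from those numbers, not an official figure:

```python
# Rough sanity check on the market-share figure quoted above.
# Internode's 260,000 services and the ~250,000 viability threshold come from
# the announcement; the implied total market is derived, not an official figure.
internode_services = 260_000
assumed_market_share = 0.03          # "about 3%" as stated above
viability_threshold = 250_000

implied_total_market = internode_services / assumed_market_share
print(f"Implied Australian broadband market: ~{implied_total_market:,.0f} services")
print(f"Viability threshold as a share of that market: "
      f"{viability_threshold / implied_total_market:.1%}")
```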

Overall the acquisition looks like a decisive one for iiNet and the future is now looking quite bright for Internode and all its customers. Hopefully this will mean the same or better services delivered at a lower price thanks to iiNet's economies of scale, and will make Internode's NBN plans look a lot more competitive than they currently are. Should iiNet want to make any fundamental changes to Internode they're going to have to do so softly, as there are legions of keyboard warriors (including myself) who could unleash hell if they felt they'd been wronged. I doubt it will come to that, but there are definitely going to be a lot of eyes on the new iiNet/Internode from now on.

What Y’All Got Against Microsoft?

Maybe I'm just hanging around the wrong places on the Internet but recently there seems to be a higher than average level of vitriol being launched at Microsoft. From my totally arbitrary standpoint it seems that most people don't view Microsoft as the evil empire they used to and instead now focus on the two new giants of the tech sector, Apple and Google. This could be easily explained by the fact that Microsoft hasn't really done anything particularly evil recently whilst Apple and Google have both been dealing with their ongoing controversies of platform lock-down and privacy-related matters respectively. Still, no fewer than two articles have crossed my path of late that squarely blame Microsoft for various problems, and I feel they warrant a response.

The first comes courtesy of the slowly failing MySpace, which has been bleeding users for almost 2 years straight now. Whilst there are numerous reasons why they're failing (with Facebook being the most likely) one blog asked whether their choice of infrastructure was to blame:

1. Their bet on Microsoft technology doomed them for a variety of reasons.
2. Their bet on Los Angeles accentuated the problems with betting on Microsoft.

Let me explain.

The problem was, as Myspace started losing to Facebook, they knew they needed to make major changes. But they didn’t have the programming talent to really make huge changes and the infrastructure they bet on made it both tougher to change, because it isn’t set up to do the scale of 100 million users it needed to, and tougher to hire really great entrepreneurial programmers who could rebuild the site to do interesting stuff.

I won't argue point 2 as the short time I spent in Los Angeles showed me that it wasn't exactly the best place for acquiring technical talent (although I haven't been to San Francisco to make a proper comparison, talking with friends who have seems to confirm this). However betting on Microsoft technology is definitely not the reason why MySpace started on a long downward spiral several years ago, as several commenters point out in the article. Indeed MySpace's lack of innovation appears to stem from the fact that they outsourced much of their core development work to Telligent, a company that provides social network platforms. The issue with such an arrangement was that they were wholly dependent on Telligent to provide updates to the platform they were using, rather than owning it entirely in-house. As a few other commenters pointed out, the switch to the Microsoft stack actually allowed MySpace to scale much further with less infrastructure than they had previously. If there was a problem with scaling it definitely wasn't coming from the Microsoft technology stack.

When I first started developing what became Lobaco, scalability was always something nagging at the back of my head, taunting me that my choice of platform was doomed to failure. Indeed only a few start-ups have managed to make it big using the Microsoft technology stack, so it would seem that going down this path is a sure-fire way to kill any good idea in its infancy. Still I have a heavy investment in the Microsoft line of products so I kept on plugging away with it. Problems of scale appear to be unique to each technology stack, with all of them having their pros and cons. Realistically every company with large numbers of users has its own way of dealing with it, and the technology used seems to be secondary to good architecture and planning.

Still there's a strong anti-Microsoft sentiment amongst those in Silicon Valley. Just for kicks I've been thumbing through the job listings for various start-ups in the area, toying with the idea of moving there to get some real world start-up experience. Most commonly however none of them want to hear anything about a Microsoft-based developer, instead preferring something like PHP/Rails/Node.js. Indeed some have gone as far as to say that .NET development is a black mark against you, only serving to limit your job prospects:

Programming with .NET is like cooking in a McDonalds kitchen.  It is full of amazing tools that automate absolutely everything.  Just press the right button and follow the beeping lights, and you can churn out flawless 1.6 oz burgers faster than anybody else on the planet.

However, if you need to make a 1.7 oz burger, you simply can’t.  There’s no button for it.  The patties are pre-formed in the wrong size.  They start out frozen so they can’t be smushed up and reformed, and the thawing machine is so tightly integrated with the cooking machine that there’s no way to intercept it between the two.  A McDonalds kitchen makes exactly what’s on the McDonalds menu — and does so in an absolutely foolproof fashion.  But it can’t go off the menu, and any attempt to bend the machine to your will just breaks it such that it needs to be sent back to the factory for repairs.

I should probably point out that I don't disagree with some of the points of his post, most notably how Microsoft makes everything quite easy for you if you're following a particular pattern. The trouble comes when you try to work outside the box, and many programmers will simply not attempt anything that isn't already solved by Microsoft. Heck I encountered that very problem when I tried to wrangle their Domain Services API to send and receive JSON, a supported but wholly undocumented part of the framework. I got it working in the end, but I could easily see many .NET developers simply saying it couldn't be done, at least not in the way I was going for it.

Still that doesn't mean all .NET developers are simple button pushers, totally incapable of thinking outside the Microsoft box. Sure there will be more of that type of programmer simply because .NET is used in so many places (just not Internet start-ups by the looks of it), but to paint everyone who uses the technology with the same brush seems pretty far fetched. Heck, if he was right there would've been no way for me to get my head around Objective-C since it's not supported by Visual Studio. Still I managed to become competent in 2 weeks and can now hack my way around in Xcode just fine, despite my extensive .NET heritage.

It's always the person or company, not the technology, that limits their potential. Sure you may hit a wall with a particular language or infrastructure stack, but if your people are capable you'll find a way around it. I might be in the minority when it comes to trying to start a company based around Microsoft technology, but the fact is that attempting to relearn another technology stack is a huge opportunity cost. If I do it right it should be flexible enough that I can replace parts of the system with more appropriate technologies down the line, if the need arises. People pointing the finger at Microsoft for all their woes are looking for a scapegoat so they don't have to address the larger systemic issues, or are simply looking for some juicy blog fodder.

I guess they found the latter, since I certainly did ;)

The Decentralized Workplace.

As any engineer will tell you, our brains are always working out the best path to accomplish something, even for problems far outside our area of expertise. The world to us is a giant set of problems just waiting to be solved and our minds are almost always ticking away at something, from the most trivial quibble to issues larger than life. Some ideas stick around longer than others and one that's been plaguing me for the past year or so is the 9-to-5 work day that nearly every workplace adheres to. The problem has its roots back in the industrial revolution, but today's technology makes most of the original constraints irrelevant. Coupling this with the massive duplication of resources required to enable these old ideals, it seems almost inevitable that one day we'll have to transition away from them if we are to progress as we have done for the past few decades.

The idea at its core is one of decentralizing our workforce.

Right now the vast majority of workers commute daily to their place of work. Primarily this is because the organisation hosts the resources required for them to complete their work, but there's also the norm that you have to be at work to be working. In the traditional business sense this was true as there was no way a company could provide the required infrastructure to all its employees for them to be able to do their work outside company premises. However the advent of almost ubiquitous Internet connectivity and organisations' reliance on IT to complete most tasks means that nearly everyone whose job doesn't require physical labour could do their job at home for a fraction of the overhead of doing the same work on company premises. The barrier for most companies is twofold, with the first being one of investment in additional (and removal of current) infrastructure to support remote workers. The second is one of mentality, as traditional management techniques struggle to produce sound metrics to judge employees' performance.

For established organisations the transition to a highly remote workforce can be rather painful as they already have quite a bit invested in their current infrastructure and most of this will go to waste as the transition takes hold. Whilst the benefits of being able to downsize the office are quite clear they usually can't be realized immediately, often due to contracts and agreements. Companies that have successful remote workforces are usually in a period of radical reform and this is what drives them to rethink their current work practices. The pioneers of such moves have been the IT-focused companies, although more recent examples in the form of Best Buy and Circuit City in America show that even large organisations can realise the benefits shortly after implementation.

Designing metrics for your employees is probably the biggest sticking point I've seen for most workers looking to go remote. I'd attribute this to most managers having come through the ranks with their previous managers being the same. As such they value employee time on premises far more highly than they do actual work output, because most of their decisions are made by the seat of their pants rather than with research and critical thinking. That may sound harsh but it is unfortunately common, as most managers don't take the time to dive deep into the metrics they use, instead going by their gut feeling. Workers who aren't present can't be judged in such a fashion and usually end up being put down as slackers.

This idea is primarily why I support the National Broadband Network, as ubiquitous high speed Internet for the vast majority of the population means that current remote workers' capabilities would be even further enhanced. No longer is a workplace big enough to accommodate your entire team required when the majority of your workforce is there virtually. HP pioneered this kind of technology with their HALO telepresence system, which was designed around removing the stigma attached to telepresent workers, and the results speak for themselves.

At the heart of this whole idea is the altruistic principle of reducing waste and our environmental impact, improving worker happiness and possibly reusing existing infrastructure to solve other problems. Right now every office worker occupies two buildings, home and office, and neither of them is used full time. This means a large amount of resources go to waste whether we're at work or not, and decentralizing the workforce would eliminate a good portion of this. Couple this with reduced transport usage and the environmental impact would be quite significant. Additionally, underused infrastructure could be converted into low-cost/government housing, relieving the pressure on many low income earners.

Maybe it's just my desire to work in my own home on my own time that drives this, but the more I talk to people who can do their work wherever there's an Internet connection the more it makes sense as the future for the bulk of our workforce. The body of knowledge on the subject today suggests that there's far more to be gained from this endeavour than it will cost, but until there's a massive shift in the way managers view a decentralized workforce it will unfortunately remain a pipe dream. Still, with the barrier to entry for making your own self-sustaining company being so low these days, we may just end up with not only a decentralized workforce but a completely decentralized world.

National Broadband Network: How 1Gbps is Possible.

Regular readers of this blog will know that I'm no fan of our dear Senator Conroy, but credit where it's due: he at least understands technology better than our current PM or opposition leader, even if he doesn't listen to the tech community at large. Whilst I abhor the Internet Filter policy in its entirety I'm almost salivating at the possibility that one day soon I'll have access to a 100Mbps fiber connection at my house. Not only is it awesome because of the raw speed, it also opens up opportunities for someone like me who wants to host his own services but doesn't necessarily want to spend the cash on proper hosting just yet, whilst still delivering a decent service to his end users (this lightweight blog is about the limit of my current connection).

Last week saw the Liberal party finally release their plan for upgrading Australia's Internet infrastructure. To say it was unimpressive would be putting it gently: whilst they did outline a plan for upgrading our infrastructure it was a far cry from what the NBN is currently shaping up to be. In essence their plan was just a continuation of what would have been done eventually, with no fundamental change in the way Australia's Internet infrastructure is delivered. It would not free Australian consumers from the problems that have plagued them thanks to the botched privatisation of Telstra (read: not keeping their retail and wholesale branches at arm's length) and wouldn't increase speeds for anyone who didn't already have broadband at their home. It was the lowest cost option they could come up with, done to try and bolster their image of being fiscally responsible. We all know that is complete bollocks anyway.

Still, for some reason the Labor party felt the need to kick the Liberals while they were down and announced that their NBN would reach speeds of up to 1Gbps, ten times what they originally promised:

Communications Minister Stephen Conroy confirmed today that the National Broadband Network NBN would reach speeds of up 1Gbps, ten times faster than the originally announced speeds of up to 100Mbps.

Conroy said he had only found out about the 1GB speeds yesterday when NBN Co chief executive Mike Quigley called him last night. Quigley will make further announcements regarding the faster speeds at a lunch time conference in Sydney today.

The announcement was made at the official NBN launch this morning at Midway Point in Hobart, Tasmania, one of the first townships to receive the NBN, as part of Prime Minister Julia Gillard’s campaign trail. The official launch was a chance to differentiate Labor from the Coalition — which has vowed to bin the NBN if elected.

On the surface it sounds like a bit of over-promising in aid of boosting numbers for the coming election, but realistically there's no fundamental issue that would stop the NBN from achieving these speeds and even exceeding them in the future. With so much mud being slung (as is the norm for election time) I would have thought the Liberals would've jumped all over this, but the statement came and went without much fanfare at all. Conroy's statement does highlight the fact that the NBN is a fundamental shift in the way Australians get their Internet, and one that will remain with us for decades to come.

You see the current backbone of our Internet infrastructure in Australia is primarily copper wire, stuff that's been around since the 1880s. Right now the fastest connection you can push over our current copper-based lines is around 24Mbps, and that's highly dependent on factors such as distance to the exchange, back haul capacity and how over-subscribed the exchange is. Theoretically, if you used a technology like VDSL (a la Transact here in Canberra) you could squeeze 250Mbps out of the same copper; however that signal would drop dramatically if you were a mere 500 meters away from the closest repeater. Transact manages to get it done because they have a fiber-to-the-curb network ensuring most houses aren't that far away from the repeater, but the last mile is still copper.

Fiber to the home means the underlying technology we use for our communications in Australia changes to our generation's equivalent of copper: optical fiber. Whilst the current copper infrastructure has a theoretical peak roughly double what the NBN originally planned to deliver, optical fiber has current, working implementations that run all the way up to 10Gbps. Using a combination of single-mode fiber for back haul and multi-mode for the last stretch it is entirely possible for any house that has a fiber connection to have speeds of up to 1Gbps. The only limitation then is the bandwidth at the local exchange, and problems like line attenuation are completely removed. Additionally, higher speeds than those currently possible could be achieved by upgrading the endpoints at either end of the fiber connection, ensuring the longevity of the multi-billion dollar infrastructure upgrade.
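
As a rough illustration of why the exchange, rather than the line itself, becomes the bottleneck, here's a toy contention calculation. The uplink capacity and contention ratio below are hypothetical figures I've picked for illustration, not anything published by NBN Co; only the 1Gbps per-premises speed comes from the announcement:

```python
# Toy contention calculation: how many 1Gbps services a shared exchange uplink
# could carry at a given contention ratio. The uplink size and contention ratio
# are hypothetical; only the 1Gbps service speed comes from the announcement.
service_speed_gbps = 1.0
uplink_capacity_gbps = 10.0   # hypothetical backhaul link out of the exchange
contention_ratio = 50         # hypothetical 50:1 contention

max_services = int(uplink_capacity_gbps / service_speed_gbps * contention_ratio)
print(f"A {uplink_capacity_gbps:.0f}Gbps uplink at {contention_ratio}:1 contention "
      f"supports roughly {max_services} x {service_speed_gbps:.0f}Gbps services")
```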

The NBN as it stood in its original incarnation would have put Australia right up there with the leading countries in terms of Internet infrastructure. Whilst the 1Gbps claim doesn't fundamentally change what's going to happen with the NBN it does mean that it is being built with a vision for the future. Compare this to the Liberal party's plan of just carrying on as we have done for the past 2 decades and you can see why I believe the NBN needs to go ahead, because as it stands right now Australia just doesn't compare to the vast majority of other developed countries. The NBN is fundamental to making Australia attractive as a base for Internet companies worldwide, as well as for existing businesses looking to extend their reach into our area.

It's not often that you see a government project that will outlast its party's term, but the NBN is a shining example of long term planning. When it is implemented all Australians will reap the benefits of cheap, ubiquitous, high speed Internet that will spur innovation on a national scale the likes of which we haven't yet seen. With the current completion date hovering around 2018 we're still a way off from seeing the benefits of such a network unfold, but if we're to have infrastructure that will last us as long as the copper has done up until now the NBN must be completed, lest we be left behind by the rest of the Internet world.

The Little Budget that Could.

It's that time of the year again, and the full federal budget is now out and about for all of us Australians to take a gander at. My previous blog post about the speculation seems to have hit on some of the right points, namely the increase in the pension and the hit to superannuation contributions, but it seems the higher taxes for the rich have fallen by the wayside (although they might be on the table in the future) along with the increased defence spending. Here are some of the major initiatives the government has included in the current budget:

  • $3.4 billion for roads
  • $4.6 billion for metro rail
  • $389 million for ports and freight infrastructure
  • $4.5 billion for the Clean Energy Initiative, which includes $1.0 billion of existing funding
  • $2.6 billion in projects focused on universities and research from the Education Investment Fund
  • $3.2 billion in projects focused on hospitals and health infrastructure from the Health and Hospitals Fund
  • Partnering with the private sector to build the $43 billion National Broadband Network
  • A pension increase of $32.49 per week for singles and $10.14 per week combined for couples on the full rate
  • A crucial boost of $2.7 billion in funding for tertiary education, research and innovation
  • $1.5 billion for the Jobs and Training Compact, providing education and services to support young people, retrenched workers and local communities
  • A 50 per cent Small Business Tax Break for eligible assets
  • Extending the First Home Owners Boost for an extra 6 months
  • Honouring our promise of tax cuts

What I’m impressed with are the initiatives dedicated to infrastructure spending. This is something that will not only benefit Australia at large but will also build a solid foundation of sustainable jobs which will grow when the economy recovers. This also lends itself well to the boost provided to tertiary education as these people are going to want somewhere to work once they’ve graduated. The extension to the first home owner’s grant was a small surprise and it will help to keep the housing market afloat until the end of the year. Phasing it out instead of dropping it will make sure the market doesn’t suffer too much when the bonus finally comes to an end, as any more shocks to the market aren’t going to help our current situation.

Straight after the budget the criticisms started to flow thick and fast. ABC's 7:30 Report last night had interviews with both Wayne Swan and Joe Hockey, although Hockey's critique of the budget feels a little… weak:

KERRY O’BRIEN: Have we seen a global crisis like this?

JOE HOCKEY: Well, can I tell you, the RBA, the Reserve Bank said last week it will not be as deep as 1990. They said that last week. And yet this Government has spent more money than any government in modern Australian history – 29 per cent of GDP. It is the biggest spending government in modern history, the biggest debt in modern history. One million people unemployed. Nothing to show for all the money they’ve spent.

KERRY O’BRIEN: So what would you be doing? What should’ve happened in this Budget to reduce debt?

JOE HOCKEY: Well the starting point is don’t deliver the cash splashes.

KERRY O’BRIEN: No, that’s gone.

JOE HOCKEY: Well, no, no, no.

KERRY O’BRIEN: What would you be doing in this Budget now, what should’ve happened in this Budget now to reduce debt?

JOE HOCKEY: Well, grow the pie. You’ve got to grow the pie.

KERRY O’BRIEN: How?

JOE HOCKEY: Well, the first thing is you’ve got to focus on small business. That’s what we’ve always talked about. Malcolm Turnbull has already laid out a number of detailed policies to try and get small business to grow.

JOE HOCKEY: Well, let’s go back to the assumptions, right, that you’ve put into that question. The fact of the matter is that the Reserve Bank and the IMF say it’s going to be a slow recovery. But the RBA, the Reserve Bank said it’s not going to be as deep and severe as 1990. The starting point for the Rudd Government was they inherited a Budget surplus, they inherited four per cent unemployment, which is now going to eight per cent. They inherited zero Government debt, in fact there was money put in the bank. They’ve spent all the proceeds of the mineral boom and they’re now mortgaging the next boom.

KERRY O’BRIEN: OK, but very briefly, you’re happy to quote the Reserve Bank when it suits you, but …

JOE HOCKEY: Well, no, the Reserve Bank was right.

KERRY O’BRIEN: Well then do you also accept that the Reserve Bank governor is right when he says that the debt levels are modest?

JOE HOCKEY: Well, I don’t know if he’s seen these Budget numbers. But I tell you what, I wouldn’t consider them modest when it’s $9,000 for every man, every woman and every child in Australia, with an annual interest bill of $500 for every person. I don’t consider that modest.

I'm going to have to agree with my father (whom I was watching the report with last night) and say that Joe Hockey is just a troublemaker. He's lashing out at the budget in order to try and score some easy political points. Additionally he ridicules the government for selectively quoting the RBA when it suits them and then proceeds to do the same thing. Whilst I know this budget isn't perfect it's a great start to keep this nation afloat whilst creating a sound basis for our economy to boom again when the time is right. The middle section I quoted shows that Hockey has little to no idea of how to approach this situation, and had he been in charge of the budget I'm sure we'd be closer to the budget I predicted last week: something lacking direction and lining the coffers of the loudest lobbyists.

Swan didn't get off easy either. Kerry did point out that some of his initiatives, namely the raising of the pension age and the large deficit, were created without a lot of knowledge of the situation we're in. Whilst the predictions made are probably the best that can be done with the information we have, it does seem a bit reckless to start basing policy on them. Many of their policies have timelines set quite far into the future, which is a deliberate political ploy to make sure they get elected back into office. This will be the budget that sticks in people's minds come the next election, and I'm sure the Rudd government knows that.

Overall I’m pleased with the budget. The money is getting spent in the right places and whilst we might be running a deficit, we’re still in a good position to weather this recession and come out ready for the good times ahead.

I’m just going to have to tune out Hockey and the other detractors for a couple weeks. :)

The National Broadband Network.

Another day, another multi-billion dollar proposal to stimulate the economy and conveniently distract everyone from the shambles that was the Great Firewall of Australia. The newsbots are in a flurry about this one and, with it being right up my alley, I can't help but throw my few cents in ;). So let's take a good look at this proposal and see what it will mean for Australia, the public at large and, of course, Senator Conroy.

Australia is about average when it comes to broadband penetration, with the majority of our users on ADSL, some on cable and the rest on some other connection (usually satellite or 3G wireless). This is quite comparable to many other countries: the norm seems to be the majority on ADSL, with only Japan and Korea having a large proportion of customers on fibre/cable speeds. What this proposal aims to do is bring fibre connections to 90% of all homes in Australia. By my estimates, with approximately 8 million households in Australia that will mean fibre speeds to about 7.2 million houses, with 800,000 left in the digital dark age. Whilst this is a very aggressive target to meet you'd still be pretty annoyed if you were one of those 800,000 homes left out. Hopefully the extra fibre being run everywhere will also spur others to upgrade the DSLAMs in local exchanges for those poor people who miss out.

The current proposal is slated to run for about 8 years. Now anyone in IT will tell you that, over a time frame like that, technology in this field will inevitably be outdated by the time the project is completed. Using Moore's Law as a basis, average computing power will have increased by about 16 times over that period, with data rates and storage capacities following suit. If this kind of project is to be undertaken the network must be able to scale with newer technologies, otherwise it will be useless by the time it is implemented. Whilst they haven't described what kind of fibre technology they're going to be using, I would recommend single-mode fibre which should scale up to 10Gb/s, allowing the network to not be outdated the day it's switched on.
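
That 16x figure is just the usual rule of thumb of a doubling roughly every two years applied to the eight-year build; a quick sketch of the arithmetic:

```python
# Quick sketch of the Moore's Law figure used above: doubling roughly every
# two years over the proposed eight-year build gives ~16x today's computing power.
project_years = 8
doubling_period_years = 2   # common rule-of-thumb doubling period

growth_factor = 2 ** (project_years / doubling_period_years)
print(f"Computing power after {project_years} years: ~{growth_factor:.0f}x today's")
```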

I rejoiced when I heard that the whole thing would be government controlled, hoping to avoid the catastrophe that Telstra has become. However it became apparent that the initial investment from the government will be $4.7 billion, with the rest to be raised from private investors. Once the network is complete they will sell down their holdings in the company, thereby relinquishing all control over it. I don't think I have to make it any clearer that they are basically creating a monopoly on the network by allowing this one mega-corp to own all the infrastructure instead of the government. Unless there are strict provisions in place to ensure that other ISPs will be able to tap into this network and use it fairly, we'll just end up with yet another Telstra which won't have much incentive to be competitive, let alone co-operative with others.

Overall for Australia this proposal is mediocre at best. Whilst I applaud the idea of upgrading Australia's broadband and making us a market leader in terms of broadband penetration, the way Senator Conroy is going about it is, as usual, confused and misguided. When it was obvious that his attempt at a fibre-to-the-node network was not going to win him the right amount of political points he turned his attention to the Internet Filter. Now that the filter is dying on the vine he's taken the $4.7 billion that was allocated for the new broadband network and tried to make it look like ten times more by saying that investors will make up the rest. Maybe he is just trying to make everyone think that they're dreaming…

Luckily it appears that the IT community is remaining sceptical, as it should with anything that Conroy proposes. Triple J's Hack program ran an excellent show yesterday exploring the new proposal, even interviewing the man himself. Conroy is awkward at the best of times but when he was confronted on the issue of the Internet filter and the new broadband network, he seemed to hit a few brick walls:

Senator Conroy: We said if the trial shows that this cannot be done, then we won’t do it.

Interviewer: And what’s the definition of cannot be done? What would be the acceptable amount to slow the internet down?
Senator Conroy: Well now you're asking me to preempt the outcome of the trial.

Interviewer: No I'm not, you've got to have an understanding of what's a pass and what's a fail. You can't wait 'til the trial finishes and then look back and decide how you're going to measure the outcome.

Senator Conroy: Well actually that’s how you conduct a trial. You wait to see what the result is and then you make a decision based on the result. If the trial shows that it cannot be done without slowing the internet down then we will not do it.

I'm not sure I can comprehend what he thinks a trial actually is. If you follow the scientific method you'd know that first you formulate a hypothesis, establish the test, define the thresholds for success and failure, and then perform the test. You don't make up your pass/fail criteria from the data; that's just bad science. I once defended Conroy as just a figurehead for a bad idea put forth by Labor to win votes; now I'm sure that isn't the case.

An amazing idea has been twisted and contorted into something that will, at best, create another mega-monopoly on Australia's telecommunications network. It seems no one will listen to George Santayana:

Those who cannot remember the past are condemned to repeat it.