Technology


DDR4 Appears on The Market; I Realise I’ve Been Under a Rock.

Whilst I don’t spend as much time as I used to keeping current with all things PC hardware related, I still maintain a pretty good working knowledge of where the field is going. That’s partly due to my career being in the field (although I’m technically a services guy) but mostly it’s because I love new tech. You’d think, then, that DDR4, the next generation of PC memory, making its commercial debut wouldn’t be much of a surprise to me, but I had absolutely no idea it was in the pipeline. Indeed, had I not been building out a new gaming rig for a friend of mine, I wouldn’t have known it was coming, nor that I could buy it today if I was so inclined.

Professional Memory Holder

Double Data Rate Generation 4 (DDR4) memory is the direct successor to the current standard, DDR3, which has been in widespread use since 2007. Both standards (indeed pretty much all memory standards) were developed by the Joint Electron Device Engineering Council (JEDEC), which has been working on DDR4 since about 2005. The reasoning behind the long lead times on new standards like this is complicated but it comes down to a function of getting everyone to agree on the standard, manufacturers developing products around said standard and then, finally, those products making their way into the hands of consumers. Thus whilst new memory modules come and go with the regular tech cycle, the standards driving them typically remain in place for the better part of a decade or two, which is probably why this writer neglected to keep current on it.

In terms of actual improvements DDR4 seems like an evolutionary step forward rather than a revolutionary one. That being said, the changes introduced with the new specification are nothing to sneeze at, one of the big ones being a reduction in the voltage (and thus power) that the specification requires. Typical DDR4 modules will now use 1.2V compared to DDR3’s 1.5V, and the low voltage variant, typically seen in low power systems like smartphones and the like, goes all the way down to 1.05V. To end consumers this won’t mean too much but for large scale deployments the savings from running this new memory add up very quickly.
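
To get a feel for how those savings stack up, here’s a rough, first-order sketch. It assumes dynamic power scales with the square of the supply voltage (a common approximation) and the per-DIMM wattage and fleet size are made-up illustrative figures, not measurements:

```python
# Rough first-order sketch of the saving implied by the voltage drop.
# Assumes dynamic power scales with the square of supply voltage (P ~ V^2);
# the per-DIMM wattage and module count below are illustrative guesses,
# not measured figures.

DDR3_VOLTAGE = 1.5  # volts
DDR4_VOLTAGE = 1.2  # volts

scaling = (DDR4_VOLTAGE / DDR3_VOLTAGE) ** 2
print(f"Relative dynamic power: {scaling:.0%} of DDR3")  # ~64%

# What that might look like across a large fleet (hypothetical numbers).
watts_per_ddr3_dimm = 4.0    # assumed baseline per module
dimms_in_fleet = 100_000     # e.g. a sizeable data centre estate
saving_kw = dimms_in_fleet * watts_per_ddr3_dimm * (1 - scaling) / 1000
print(f"Estimated saving: ~{saving_kw:.0f} kW across the fleet")
```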

As you’d expect there’s also been a bump up in the operating speed of DDR4 modules, ranging from 2133MHz all the way up to 4266MHz. Essentially the lowest tier of DDR4 memory will match the top performers of DDR3 and the amount of headroom for future development is quite significant. This will have a direct impact on the performance of systems that are powered by DDR4 memory and, whilst most consumers won’t notice the difference, it’s definitely going to be a defining feature of enthusiast PCs for the next couple of years. I know that I updated my dream PC specs to include it even though the first generation of products is only just hitting the market.
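
Those figures translate fairly directly into raw throughput. The quoted “MHz” numbers are really effective transfer rates (millions of transfers per second), and each transfer moves 8 bytes over a standard 64-bit channel, so a quick sketch of the theoretical peak bandwidth (overheads ignored) looks like this:

```python
# Theoretical peak bandwidth of a single 64-bit DDR4 channel.
# The quoted "MHz" figures are effective transfer rates (MT/s); each
# transfer moves 8 bytes on a standard 64-bit channel. Real-world
# throughput is lower once command and refresh overheads are included.

def peak_bandwidth_gb_s(transfer_rate_mt_s: int, bus_width_bytes: int = 8) -> float:
    return transfer_rate_mt_s * bus_width_bytes / 1000  # GB/s

for rate in (2133, 2400, 3200, 4266):
    print(f"DDR4-{rate}: ~{peak_bandwidth_gb_s(rate):.1f} GB/s per channel")
# DDR4-2133: ~17.1 GB/s ... DDR4-4266: ~34.1 GB/s
```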

DDR4 chips are also meant to be a lot denser than their DDR3 predecessors, especially considering that the specification also accommodates 3D stacking of dies, similar in spirit to what Samsung has done with V-NAND in the flash world. Many are saying that this will lead to DDR4 being cheaper for a comparable amount of memory than DDR3, however right now you’ll be paying about a 40% premium on pretty much everything if you want to build a system around the new style of memory. This is to be expected though and, whilst I can eventually see DDR4 eclipsing DDR3 on a price per gigabyte basis, that won’t be for several years to come. DDR3 has 7 years’ worth of economies of scale built up and they won’t become irrelevant for a very long time.

So whilst I might be a little shocked that I was so out of the loop that I didn’t know a new memory standard had made its way into reality, I’m glad it has. The improvements might be incremental rather than a bold leap forward but progress in this sphere is so slow that anything is worth celebrating. The fact that you can build systems with it today is just another bonus, one that I’m sure is making dents in geeks’ budgets the world over.


You Won’t See Blu-ray Archiving Anytime Soon.

Ask your IT administrator what medium they back up all your data to and the answer is likely some form of magnetic tape storage. For many people that’d be somewhat surprising as the last time they saw a tape was probably a couple of decades ago and it wasn’t used to store much more than a blurry movie or maybe a couple of songs. However in the world of IT archiving and backup there’s really no other medium that can beat tapes for capacity, durability or cost. Many have tried to unseat tapes from their storage crown but they’re simply too good at what they do and Facebook’s latest experiment, using Blu-ray disc caddies as an archiving solution, isn’t likely to take over from them anytime soon.

Facebook Blu-ray Archiving

The idea Facebook has come up with is, to their credit, pretty novel. Essentially they’ve created small Blu-ray caddies, each of which contains 12 discs. These are all housed in a robotic enclosure which is about the size of a standard server rack. Each of these racks is capable of storing up to 10,000 discs which apparently gives rise to a total of 1PB worth of storage in a single rack. Primarily it seems to be a response to their current HDD based backup solutions which, whilst providing better turnaround for access, are typically far more costly than other archiving solutions. What interests me though is why Facebook would be pursuing something like this when there are other archiving systems already available, ones with a much better ROI for the investment.

The storage figures quoted peg the individual disc sizes at 100GB, something which is covered off under the BD-R XL specification. These discs aren’t exactly cheap and, whilst I’m sure you could get a decent discount when buying 10,000 of them, the street price for them is currently on the order of $60. Even if they’re able to get a 50% discount on these discs that means you’re still on the hook for about $300K just for the media. If you wanted to get a similar amount of storage on tapes (say using the 1.5TB HP LTO-5, which can be had for $40) you’re only paying about $27K, roughly a tenth of the cost. You could even halve that again if you were able to use compression on the tapes, although honestly you don’t really need to at that price point.
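
For those who want to check my maths, here’s the back-of-the-envelope version using the figures above. The 50% bulk discount on BD-R XL media is the same assumption made in the text, and the tape compression figure is the usual 2:1 ratio quoted for LTO:

```python
# Back-of-the-envelope media cost comparison for 1PB of archive storage,
# using the street prices quoted above. The 50% bulk discount on BD-R XL
# discs is an assumption, as is the 2:1 compression ratio for LTO-5.
import math

TARGET_TB = 1000  # 1PB per rack

# Blu-ray: 100GB BD-R XL discs at ~$60 street, assumed 50% bulk discount
discs = math.ceil(TARGET_TB * 1000 / 100)      # 10,000 discs
print(f"Blu-ray media: {discs:,} discs, ~${discs * 60 * 0.5:,.0f}")

# Tape: 1.5TB (native) LTO-5 cartridges at ~$40 each
tapes = math.ceil(TARGET_TB / 1.5)             # ~667 cartridges
print(f"LTO-5 media:   {tapes:,} tapes, ~${tapes * 40:,.0f}")

# 2:1 compression roughly halves the cartridge count again
print(f"LTO-5 (2:1 compression): ~${math.ceil(tapes / 2) * 40:,.0f}")
```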

Indeed pretty much every single advantage that Facebook is purporting this Blu-ray storage system to have is the same benefit you get with a tape drive. Tapes are low power, as their storage requires no active current draw, are readily portable (and indeed there are entire companies already dedicated to doing this for you) and have many of the same durability qualities that optical discs do. When you combine this with the fact that they’re an already proven technology with dozens of competitive offers on the table it really does make you wonder why Facebook is investigating this idea at all.

I’d hazard a guess it’s just another cool engineering project, something that they’ll trial for a little while before mothballing it completely once they look at the costs of actually bringing something like that into production. I mean I like the idea, it’s always good to see companies challenging the status quo, however sometimes the best solutions are the ones that have stood the test of time. Tapes, whether you love them or hate them, by far outclass this system in almost all ways possible and that won’t change until you can get Blu-ray discs at the same dollars per gigabyte that you can get tapes for. Even then Facebook is going to have to try hard to find some advantage that Blu-rays have that tapes don’t, as right now I don’t think anyone can come up with one.

Can you?



Why We Need the Full FTTP NBN.

The unfortunate truth about telecommunications within Australia is that everyone is under the iron rule of a single company: Telstra. Whilst the situation has improved somewhat in the last decade, mostly under threat of legal action from the Australian government, Australia still remains something of an Internet backwater. This can almost wholly be traced back to the lack of investment on Telstra’s part in new infrastructure, with their most advanced technology being their aging HFC networks which were only deployed in limited areas. This is why the NBN was such a great idea as it would radically modernise our telecommunications network whilst also ensuring that we were no longer under the control of a company that had long since given up on innovating.

Australia's Shitty Internet


To us Australians my opening statements aren’t anything surprising, this is the reality that we’ve been living with for some time now. However when outsiders look in, like say the free CDN/DDoS protection service Cloudflare (who I’ve recently started using again), and find that bandwidth from Telstra is about 20 times more expensive than their cheapest providers, it does give you some perspective on the situation. Whilst you would expect some variability between locations (given the number of dark fibre connections and other infrastructure) a 20x increase does appear wildly out of proportion. The original NBN would be the solution to this as it would upend Telstra’s grip on the backbone connections that drive these prices, however the Liberals’ new MTM solution will do none of this.

Right now much of the debate over the NBN has been framed around the speeds that will be delivered to customers, however that’s really only half of the story. In order to support the massive speed increases that customers would be seeing with the FTTP NBN the back end infrastructure would need to be upgraded as well, and this would include the interconnects that drive the peering prices that Cloudflare sees. Such infrastructure would also form the backbone of the wide area networks that businesses and organisations use to connect their offices together, not to mention all the other services that rely on backhaul bandwidth. The MTM NBN simply doesn’t have the same requirements, nor the future expandability, to necessitate the investment in this kind of back end infrastructure and, worse still, the last mile connections will still be under the control of Telstra.

That last point is one I feel doesn’t get enough attention in the mainstream media. The Liberals have released several videos that harp on about making the right amount of investment in the NBN, citing a cut off point beyond which extra bandwidth doesn’t enable people to do anything more. The problem with that thinking though is that, with the MTM NBN, you cannot guarantee that everyone will have access to those kinds of speeds. Indeed the MTM NBN can only guarantee 50Mbps to people who are 200m or less away from their node which, unfortunately, the vast majority of Australians aren’t. Comparatively FTTP can deliver the same speeds regardless of distance and also has the ability to provide higher speeds well into the future.

In all honesty though the NBN has been transformed from a long term, highly valuable infrastructure project into a political football, one that the Liberal party is intent on kicking around as long as it suits their agenda. Australia had such potential to become a leader in Internet services with an expansive fibre network that would have rivalled all others worldwide. Instead we have a hodgepodge solution that does nothing to address the issues at hand, and the high broadband costs, for both consumers and businesses alike, will continue as long as Telstra controls the vast majority of the critical infrastructure. Maybe one day we’ll get the NBN we need but that day seems to get further and further away as time goes on.


The NBN Cost Benefit Analysis is a Steaming Pile of Horseshit.

There seems to be this small section of my brain that’s completely disconnected from reality. At every turn with the Liberals and the NBN it’s been the part of my head that’s said “Don’t worry, I’m sure Turnbull and co will be honest this time around” and every single time it has turned out to be wrong. At every turn these “independent” reports have been stacked with personnel who all have vested interests in seeing Turnbull’s views come to light, no matter how hard they have to bend the facts in order to do so. Thus all the reports that have come out slamming Labor’s solution are not to be trusted and the latest report, the vaunted cost benefit analysis that the Liberals have always harped on about, is another drop on the gigantic turd pile that is the Liberals’ NBN.

HERNRER ERGERS

The problems with this cost-benefit analysis started long before the actual report was released. Late last year Turnbull appointed Henry Ergas as the head of the panel of experts that would be doing the cost-benefit analysis. The problem with this appointment is that he’d already published many pieces on the NBN which were not only critical of it but were also riddled with factual inaccuracies. His opinion of the NBN was already well known prior to starting this engagement and thus he was not capable of providing a truly independent analysis, regardless of how he might want to present it. However in the interests of fairness (even though Turnbull won’t be doing so) let’s judge the report on its merits so we can see just how big this pile of horseshit is.

The report hinges primarily on a metric called “Willingness to Pay” (WTP), which is what Australians would be willing to pay for higher broadband speeds. The metric is primarily based on data gathered by the Institute for Choice, which surveyed around 3,000 Australians about their current broadband usage and then showed them some alternative plans that would be available under the NBN. The problem is that the way these were presented was not representative of all the plans available, nor did it factor in things like speed not being guaranteed on FTTN versus the guaranteed speeds of FTTP. So essentially all this was judging was people’s willingness to change to another kind of plan, and honestly it was not reflective of whether or not they’d want to pay more for higher broadband speeds.

Indeed this is further reflected in the blended rate of probabilities used to estimate the benefits, with a 50% weighting applied to the Institute for Choice data and a 25% weighting to each of the other data sources (take-up demand and technical bandwidth demand) which, funnily enough, find in favour of the FTTP NBN solution. Indeed Table I makes it pretty clear that whenever there were multiple points of data the panel of experts decided to swing the percentages in ways that were favourable to them rather than providing an honest view of the data that they had. If the appointment of a long-time anti-NBN campaigner wasn’t enough to convince you this report was a total farce then this should do the trick.

However what really got me about this report was summed up perfectly in this quote:

The panel would not disclose the costs of upgrading [to] FTTP compared with other options, which were redacted from the report, citing “commercial confidentiality associated with NBN Co’s proprietary data”.

What the actual fuck.

So a completely government owned company is citing commercial in confidence as the reason for not disclosing data in a report that was commissioned by the government? Seriously, what the fuck are you guys playing at here? It’s obvious that if you included the cost of upgrading a FTTN network to FTTP, which has been estimated to cost at least $21 billion extra, then the cost-benefit would swing wildly in the direction of FTTP. Honestly I shouldn’t be surprised at this point as the report has already taken every step possible to avoid admitting that a FTTP solution is the better option. Hiding the upgrade cost, which according to other reports commissioned by the Liberals will be required less than 5 years after the completion of their FTTN NBN, is just another fact they want to keep buried.

Seriously, fuck everything about Turnbull and his bullshit. They’ve commissioned report after report, done by people who have vested interests or are in Turnbull’s favour, that have done nothing to reflect the reality of what the NBN was and what it should be. This is just the latest heaping on the pile, showcasing that the Liberals have no intention of being honest nor implementing a solution that’s to the benefit of all Australians. Instead they’re still focused on winning last year’s election and we’re all going to suffer because of it.


Super Tablets? You’re Not Simple, Are You?

Smartphones and laptops have always been a pain in the side of any enterprise admin. They almost always find their way into the IT environment via a decidedly non-IT driven process, usually when an executive gets a new toy that he’d like his corporate email on. However the tools to support these devices have improved drastically, allowing IT to provide the basic services (it’s almost always only email) and then be done with it. For the most part people see the delineation pretty clearly: smartphones and tablets are for mobile working and your desktop or laptop is for when you need to do actual work. I’ve honestly never seen a need for a device that crosses the boundaries between these two worlds, although after reading this piece of drivel it seems that some C-level execs think there’s demand for such a device.

I don’t think he could be more wrong if he tried.

Panasonic Super Tablet

The article starts off with some good points about why tablet sales are down (the market has been saturated, much like netbooks were) and why PC sales are up (XP’s end of life, although that’s only part of it) and then posits the idea of creating “super tablets” in order to reignite the market. Such a device would sit somewhere in between an iPad and a laptop, sporting a bigger screen, functional keyboard and upgraded internals but keeping the same standardized operating system. According to the author such a device would bridge the productivity gap that currently divides tablets from other PCs, giving users the best of both worlds. The rest of the article makes mention of a whole bunch of things that I’ll get into debunking later but the main thrust of it is that some kind of souped up tablet is the perfect device for corporate IT.

For starters the notion that PCs are hard to manage in comparison to tablets or smartphones is nothing short of total horseshit. The author makes a point that ServiceNow, which provides some incident management software, is worth $8 billion as some kind of proof that PCs break often and are hard to manage. What that fails to grasp is that ServiceNow is actually an IT Service Management company that also has Software/Platform as a Service offerings and is thus more akin to a company like Salesforce than simply an incident management company. This then leads on to the idea that a mobile fleet is somehow cheaper to run than its PC counterpart, which is not the case in many circumstances. He also makes the assertion that desktop virtualization is expensive when in most cases it makes heavy use of investments that IT has already made in both server and desktop infrastructure.

In fact the whole article smacks of someone who seems cheerfully ignorant of the fact that the product he’s peddling is pretty much an ultrabook with a couple of minor differences. One of the prime reasons people like tablets is their portability and the second you increase the screen size and whack a “proper” keyboard on it you’ve essentially given them a laptop. His argument is then that you need the specifications of a laptop with Android or iOS on it, but I fail to see how extra power is going to make those platforms any more useful than they are today. Indeed if all you’re doing is word processing and Internet browsing then the current iteration of Android laptops does the job just fine.

Sometimes when there’s an apparent gap in the market there’s a reason for it and in the case of “super tablets” it’s because when you take what’s good about the two platforms it bridges you end up with a device that has none of the benefits of either. This idea probably arises from the incorrect notion that PCs are incredibly unreliable and hard to manage when, in actuality, that’s so far from reality it’s almost comical. Instead the delineations between tablets and laptops are based on well defined usability guidelines that both consumers and enterprise IT staff have come to appreciate. It’s like looking at a nail and a screw and thinking that combining them into a super nail will somehow give you the benefits of both when realistically they’re for different purposes and the sooner you realise that the better you’ll be at hammering and screwing.



Intel Keeps Moore’s Law Alive With 14nm Fabrication.

The popular interpretation of Moore’s Law is that computing power, namely that of the CPU, doubles every two years or so. This is then extended to pretty much all aspects of computing such as storage, network transfer speeds and so on. Whilst this interpretation has held up reasonably well in the 40+ years since the law was coined it’s not completely accurate, as Moore was actually referring to the number of components that could be integrated into a single package for a minimum cost. Thus the real driver behind Moore’s Law isn’t performance, per se, it’s the cost at which we can provide said integrated package. Keeping on track with this law hasn’t been easy but innovations like Intel’s new 14nm process are what have been keeping us on track.
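
Just to put the popular “doubling every two years” reading into perspective, pure compound growth gets absurd very quickly. The sketch below is nothing more than that compounding; the time spans are arbitrary and it isn’t based on any historical transistor counts:

```python
# What "doubles every two years" implies as pure compound growth.
# The time spans below are arbitrary illustrations, not historical data.

def moores_law_factor(years: float, doubling_period_years: float = 2.0) -> float:
    return 2 ** (years / doubling_period_years)

for years in (10, 20, 40):
    print(f"After {years} years: ~{moores_law_factor(years):,.0f}x the components")
# After 10 years: ~32x, after 20 years: ~1,024x, after 40 years: ~1,048,576x
```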

14nm Costs

CPUs are created through a process called photolithography whereby a substrate, typically a silicon wafer, has the transistors etched onto it through a process not unlike developing a photo. The defining characteristic of this process is the minimum size of a feature that it can etch onto the wafer, which is usually expressed in terms of nanometres. It was long thought that 22nm would be the limit for semiconductor manufacturing as this process was approaching the physical limitations of the substrates used. However Intel, and many other semiconductor manufacturers, have been developing processes that push past this and today Intel has released in-depth information regarding their new 14nm process.

The improvements in the process are pretty much what you’d come to expect from a node shrink of this nature. A reduction in node size typically means that a CPU can be made with more transistors, perform better and use less power than a similar CPU built on a larger node. This is most certainly the case with Intel’s new 14nm fabrication process and, interestingly enough, they appear to be ahead of the curve so to speak, with the improvements in this process being slightly ahead of the historical trend. However the most important factor, at least with respect to Moore’s Law, is that they’ve managed to keep reducing the cost per transistor.
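
As a very rough guide to what a shrink of that size buys you, transistor area scales (to a first approximation) with the square of the feature size. Real processes never scale that cleanly, so treat the figure below as an upper-bound illustration rather than Intel’s actual density numbers:

```python
# Naive first-order estimate of the density gain from a 22nm -> 14nm shrink,
# assuming transistor area scales with the square of the feature size.
# Real processes don't scale this cleanly, so this is an upper bound only.

old_node_nm, new_node_nm = 22, 14
density_gain = (old_node_nm / new_node_nm) ** 2
print(f"~{density_gain:.1f}x the transistors in the same area")  # ~2.5x
```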

One of the biggest cost drivers for CPUs is what’s called the yield of the wafer. Each of these wafers costs a certain amount of money and, depending on how big and complex your CPU is, you can only fit a certain number of them on there. However not all of those CPUs will turn out to be viable and the percentage of usable CPUs is what’s known as the wafer yield. Moving to a new node size typically means that your yield takes a dive, which drives up the cost of each CPU significantly. The recently released documents from Intel reveal, however, that the yield for the 14nm process is rapidly approaching that of the 22nm process, which is considered to be Intel’s best yielding process to date. This, plus the increased transistor density that’s possible with the new manufacturing process, is what has led to the price per transistor dropping, giving Moore’s Law a little more breathing room for the next couple of years.
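
To see why yield has such an outsized effect on price, here’s a toy model. It uses the simple Poisson yield approximation (yield = e^(-die area × defect density)) and a standard gross-dies-per-wafer estimate; the wafer cost, die size and defect densities are all made-up illustrative numbers, not Intel’s:

```python
# Toy model of how yield drives cost per chip. Uses the simple Poisson
# yield approximation and a common gross-dies-per-wafer estimate. The
# wafer cost, die area and defect densities are illustrative assumptions,
# not Intel data.
import math

def gross_dies_per_wafer(wafer_diameter_mm: float, die_area_mm2: float) -> int:
    radius = wafer_diameter_mm / 2
    return int(math.pi * radius ** 2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

def cost_per_good_die(wafer_cost: float, die_area_mm2: float,
                      defects_per_mm2: float, wafer_diameter_mm: float = 300) -> float:
    dies = gross_dies_per_wafer(wafer_diameter_mm, die_area_mm2)
    yield_fraction = math.exp(-die_area_mm2 * defects_per_mm2)  # Poisson model
    return wafer_cost / (dies * yield_fraction)

# Same hypothetical $5,000 wafer and 150mm^2 die, two defect densities:
for label, d0 in (("early ramp", 0.003), ("mature process", 0.001)):
    print(f"{label}: ~${cost_per_good_die(5000, 150, d0):.2f} per good die")
```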

This 14nm process is what will be powering Intel’s new Broadwell set of chips, the first of which is due out later this year. Migrating to this new manufacturing process hasn’t been without its difficulties which is what has led to Intel releasing only a subset of the Broadwell chips later this year, with the rest to come in 2015. Until we get our hands on some of the actual chips there’s no telling just how much of an improvement these will be over their Haswell predecessors but the die shrink alone should see some significant improvements. With the yields fast approaching those of its predecessors they’ll hopefully be quite reasonably priced too, for a new technology at least.

It just goes to show that Moore’s Law is proving to be far more robust than anyone could have predicted. Exponential growth functions like that are notoriously unsustainable, however it seems that every time we come up against a wall that threatens to kill the law off another innovative way to deal with it comes along. Intel has long been at the forefront of keeping Moore’s Law alive and it seems like they’ll continue to be its patron saint for a long time to come.


Winding Down Google+ is the Right Move, But Might Be Too Late.

When Google+ was first announced I counted myself among its fans. Primarily this was due to the interface which, unlike every other social media platform at the time, was clean and there was the possibility I could integrate all my social media in the one spot. However as time went on it became apparent that this wasn’t happening any time soon and the dearth of people actively using it meant that it just fell by the wayside. As other products got rolled into it I wasn’t particularly fussed, I wasn’t a big user of most of them in the first place, however I was keenly aware of the consternation from the wider user base. It seems that Google might have caught onto this and is looking to wind down the Google+ service.

Google Plus

Back in April the head of Google+, Vic Gundotra, announced that he was leaving the company. Whilst Google maintained that this would not impact on their strategy, many sources reported that Google was abandoning its much loathed approach of integrating Google+ into everything and that the decrease in focus likely meant a decrease in resources. Considering that no one else can come up with a good reason why Gundotra, a 7 year veteran of Google, would leave the company it does seem highly plausible that something is happening to Google+ and that it wasn’t good for his future there. The question in my mind then is whether or not winding down the service will restore some of the goodwill lost during Google’s aggressive integration spree.

Rumours have it that Google+ Photos will be the first service to be let free from the iron grip of its parent social network. Considering that the Photos section of Google+ started out as the web storage part of their Picasa product it makes sense that this would be the first service to be spun out. How it will compete with other, already established offerings though is somewhat up in the air although they do have the benefit of already being tightly integrated with the Android ecosystem. If they’re unwinding that application then it makes you wonder if they’ll continue that trend to other services, like YouTube.

For the uninitiated the integration of YouTube and Google+ was met with huge amounts of resistance with numerous large channels openly protesting it. Whilst some aspects of the integration have been relaxed (like allowing you to use a pseudonym that isn’t your real name) the vast majority of features that many YouTubers relied on are simply gone, replaced with seemingly inferior Google+ alternatives. If Google+ is walking off into the sunset then they’d do well to bring back the older interface although I’m sure the stalwart opponents won’t be thanking Google if they do.

Honestly whilst I liked Google+ originally, and even made efforts to actively use the platform, it simply hasn’t had the required amount of buy in to justify Google throwing all of its eggs into that basket. Whilst I like some of the integration between the various Google+ services I completely understand why others don’t, especially if you’re a content creator on one of their platforms. Winding down the service might see a few cheers here or there but honestly the damage was already done and it’s up to Google to figure out how to win the users back in a post Google+ world.


Sometimes The Internet Does Forget.

Last year I fucked up.

There’s really no other way to put it, I made the rookie mistake of not backing everything up before I started executing commands that could have some really bad consequences. I’d like to say it was hubris, thinking that my many years in the industry had made me immune to things like this, but in reality it was just my lack of knowledge of how certain commands worked. Thankfully it wasn’t a dreaded full wipe and I was able to restore the essence of this blog (i.e. the writing) without too much trouble, however over time it became apparent just how incomplete that restore was. Whilst I was able to restore quite a lot of the pictures I’ve used over the years I was still missing plenty of them, including some from my favourite posts.

The Internet Never Forgets

Thankfully, after writing some rather complicated PowerShell scripts, I was able to bulk restore a lot of images. Mostly this was because of the way I do the screenshots for my reviews, meaning there was a copy of pretty much everything on my PC; I just had to find them. I’ve been reviewing games for quite some time though and that’s meant I’ve changed PCs a couple of times, meaning some of the images are lost in the sea of old hard drives I have lying around the place. Whilst I was able to scrounge up a good chunk of them by finding an old version of the server I used to host locally there were still some images that eluded me, forcing me to think of other places that might have a copy of them.
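
For the curious, the gist of those scripts was simple filename matching, something along the lines of the sketch below (a rough Python equivalent rather than the actual PowerShell; the list file and drive paths are just placeholders):

```python
# Rough Python equivalent of the bulk-restore idea (the real scripts were
# PowerShell). Given a list of missing image filenames, walk a set of old
# drives looking for files with the same name and copy any matches into a
# recovery folder. All paths and filenames here are placeholders.
import shutil
from pathlib import Path

MISSING_LIST = Path("missing_images.txt")   # one filename per line
SEARCH_ROOTS = [Path("D:/old-pc-backup"), Path("E:/screenshots")]
RECOVERED = Path("recovered")
RECOVERED.mkdir(exist_ok=True)

missing = {line.strip().lower()
           for line in MISSING_LIST.read_text().splitlines() if line.strip()}

for root in SEARCH_ROOTS:
    for candidate in root.rglob("*"):
        if candidate.is_file() and candidate.name.lower() in missing:
            shutil.copy2(candidate, RECOVERED / candidate.name)
            missing.discard(candidate.name.lower())

print(f"{len(missing)} images still unaccounted for")
```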

My site has been on the Wayback Machine for some time now so I figured that there would (hopefully) be a copy of most of my images on there. For the most part there is, even the full sized ones, however there were still multiple images that weren’t there either. My last bastion of hope was Google’s cache of my website, however they only store (or at least, make available) the latest version that they have indexed. Sometimes this meant that I could find an image here or there, as they seem to be archived separately and aren’t deleted if you remove them, however it was still a hit or miss affair. In the end I managed to get the list of missing images down from about 2000 to 150 and, thanks to a fortuitous hard drive backup I found, most of those will hopefully be eliminated in short order.

What kept me going throughout most of this was the mantra that many privacy advocates and parents alike have parroted many times: the Internet never forgets. For the most part I’d be inclined to agree with this as the vast majority of the information that I had put out there, even though I had erased the source, was still available for anyone to view. However the memory of the Internet, much like that of the humans that run it, isn’t a perfect one, routinely forgetting things, jumbling them up or just plain not remembering them at all. The traces of what you’re searching for are likely there somewhere, but there’s no guarantee that the Internet will remember everything for you.




Turnbull’s MTM NBN Will be Later, Slower and More Expensive.

There are two main reasons why I’ve avoided writing about the NBN for the last couple of months. For the most part it’s been because there’s really been nothing of note to report and sifting through hours of senate talks to find a nugget of new information to write about isn’t really something I’m particularly enthused about doing. Secondly, as someone who’s deeply interested in technology (and makes his living out of services that could make heavy use of the NBN), the current state of the project is, frankly, infuriating and I don’t think people enjoy reading about how angry I am. Still it seems that the Liberals’ MTM NBN plan has turned from a hypothetical farce into a factual one and I’m not one to pass up an opportunity to lay down criticism where criticism is due.

Turnbull's Disinterested Face

The slogan the Liberals ran with during their election campaign was “Fast. Affordable. Sooner.”, promising that they’d be able to deliver at least 25Mbps to every Australian by the end of 2016, ramping up to 50Mbps by the end of 2019. This ended up being called the Multi-Technology Mix (MTM) NBN, which would now include the existing HFC networks rather than overbuilding them and would switch to FTTN technology rather than FTTP. The issues with this plan were vast and numerous (ones I’ve covered in great detail in the past) and suffice to say the technology community in Australia didn’t buy into the ideas one bit. Indeed as time has progressed the core promises of the plan have dropped off one by one, with NBNCo now proceeding with the MTM solution despite a cost-benefit analysis not being completed and with the speed guarantee now gone completely. If that wasn’t enough it’s come to my attention that, even though they’ve gone ahead with the solution, NBNCo hasn’t been able to connect a single customer to FTTN.

It seems the Liberals’ promises simply don’t stand up to reality, fancy that.

The issues they seem to be encountering with deploying their FTTN trial are the ones many of the more vocal critics had been harping on about for a long time, primarily the power and maintenance requirements of FTTN cabinets. Their Epping trial has faced several months of delays because they weren’t able to source adequate power, a problem which doesn’t yet have a timeline for a solution. The FTTP NBN, which uses Gigabit Passive Optical Network (GPON) technology, does not suffer from this kind of issue at all and this was showing in the ramp up in deployment numbers that NBNCo was seeing before it stopped its FTTP rollouts. If just the trial of the MTM solution is having this many issues then it follows that the full rollout will fare no better, and that puts an axe to the Liberals’ election promises.

We’re rapidly approaching the end of this year which means that the timeline the Liberals laid out is starting to look less and less feasible. Even if the trial site gets everyone on board before the end of this year that still gives only 2 years for the rest of the infrastructure to be rolled out. The FTTP NBN wasn’t even approaching those numbers so there’s no way in hell that the MTM solution would be able to accomplish that, even with their little cheat of using the HFC networks.

So there goes the idea of us getting the NBN sooner but do any of their other promises hold true?

Well the speed guarantee went away some time ago, so even the Liberals admit that their solution won’t be fast; the only thing they might be able to argue is that they can do it cheaper. Unfortunately for Turnbull his plan assumed that Telstra would just hand over the copper free of charge, something which Telstra had no interest in doing. Indeed as part of the renegotiation of the contract with Telstra, NBNCo will be paying some $150 million for access to 200,000 premises’ worth of copper which, if extrapolated to all of Australia, would be around $5.8 billion. This does not include the cabinets or remediating any copper that can’t handle FTTN speeds, both of which will quickly eat into any savings on the deal. That’s not going into the ongoing costs these cabinets will incur during their lifetimes, which are an order of magnitude more than what a GPON network would incur.
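
The extrapolation there is straightforward to reproduce. The per-premises rate falls straight out of the quoted deal; the national premises count is whatever you assume (around 7.7 million gets you to the ~$5.8 billion figure above):

```python
# Reproducing the copper-access extrapolation above. The per-premises rate
# comes from the quoted $150M / 200,000 premises deal; the national premises
# count is an assumption (about 7.7 million reproduces the ~$5.8B figure).

deal_cost_aud = 150_000_000
deal_premises = 200_000
per_premises = deal_cost_aud / deal_premises
print(f"~${per_premises:.0f} per premises")                        # ~$750

assumed_national_premises = 7_700_000
total_billions = per_premises * assumed_national_premises / 1e9
print(f"Extrapolated nationally: ~${total_billions:.1f} billion")  # ~$5.8B
```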

I know I’m not really treading any new ground by writing all this but the MTM NBN is beyond a joke now; a failed election promise that’s done nothing to help the Liberals’ waning credibility and will only do damage to Australia’s technology sector. Even if they do get voted out come next election it’ll be years before the damage can be undone, which is a royal shame as the NBN was one of the best bits of policy to come out of the tumultuous time that was Labor’s last 2 terms in office. Maybe one day I’ll be able to look back on all my rants on this topic and laugh about it but until that day comes I’ll just be yet another angry IT sector worker, forever cursing the government that took away my fibre filled dream.

Print Yourself a House.

Ever since I first saw a 3D printer I wondered how long it’d be before they’d start scaling up in size. Now I’m not talking about the incremental size improvements that we see every so often (like with the new Makerbot Z18), no, I was wondering when we’d get industrial scale 3D printers that could construct large structures. The step up from your run-of-the-mill desktop 3D printer to something of that magnitude isn’t a simple matter of scaling up the various components, as many of the assumptions made at desktop size simply don’t apply when you get into large scale construction. It seems that day has finally come as Suzhou Yingchuang Science and Trade Development Co has developed a 3D printer capable of creating full size houses:

(Embedded YouTube video.)

Details of the makeup of the material used, as well as its structural properties, aren’t currently forthcoming, however the company behind them claims that it’s about 5 times as hard as traditional building materials. They’re apparently using a few of these 3D printed buildings as offices for some of their employees so you’d figure they’re somewhat habitable, although I’m sure they’re in a much more finished state than the ones shown above. Still, for a first generation product they seem pretty good and if the company’s claims hold up then they’d become an attractive way to provide low cost housing to a lot of people.

What I’d really be interested to see is how the cost and materials used compare to those of traditional construction. It’s a well known fact that building new housing is an incredibly inefficient process with a lot of material wasted during construction. Methods like this provide a great opportunity to reduce the amount of waste generated as there’s no excess material left over once construction has completed. Further refinement of the process could also ensure that post-construction work, like cabling and wiring, is done in a much more efficient manner.

I’m interested to see how inventive they can get with this as there’s potentially a world of new housing designs out there to be exploited using this new method. That will likely be a long time coming however, as not everyone will have access to one of these things to fiddle around with, but I’m sure just the possibility of a printer of this magnitude has a few people thinking about it already.