Technology

Turnbull Failing to Understand Anything

Well Fuck It, I Didn’t Want That NBN Anyway.

Frankly I’m tired of the rollercoaster that the Coalition’s NBN has been. First came the dread that when they came to power we’d end up with an inferior product, one that would be completely outdated by the time it was implemented. Then a small beacon of hope flickered, with Turnbull stating fervently that he’d wait for a technology review as well as a cost/benefit analysis before moving forward with any changes. Of course the review came out deeply flawed, showing the exercise was nothing but an excuse for the Coalition to go ahead with their inferior plan. Any hope that the Coalition would keep their promises was dashed from then on, although many still thought Turnbull would hold off on any decisions until the magical cost/benefit analysis was completed (the lack of which he continually lambasted Labor for).

Of course that turned out to be a fucking lie.


Worse still, news comes today that the much vaunted minimum speed guarantee, the one where everyone would get access to at least 25Mbps “sooner, faster and cheaper” than under Labor’s NBN, is also fucking gone. This isn’t a case of them promising “up to” 25Mbps and all of us not remembering it properly; it’s right there on their website (I’ve also taken a screenshot for posterity):

While 25 mbps will be the peak speed on the satellite and fixed wireless services (under Labor’s plan or under the Coalition’s), it will be the floor speed under the Coalition’s plan and most consumers in the fixed line footprint will be able to access 50 mbps or faster.

So now we’re getting an NBN with a mix of technologies, some of which will require substantial remediation, that will still run predominantly on the old copper network, and the magical guarantee of 25Mbps (which we were all pretty sure was a lie anyway) has also disappeared in a puff of broken promises.

Fan-fucking-tastic.

I wish I could say I’m not angry, just disappointed (as we’ve all felt the sting of that line from a parental figure at one stage in our lives) but honestly I’m just fucking angry. They ran a whole election campaign on honesty and integrity, and right after they got in everything they promised turned out to be a complete and utter farce. It’s so bad now that even those who could’ve been called supportive of the Coalition’s NBN are turning on them, and many technical news outlets are simply giving up, unable to trust anything the government says. I was pretty much of the same mind until I read that the speed guarantee was going away, which just added yet another insult to the numerous injuries that the NBN has endured.

Worst part about all this? We’re stuck with this hypocritical, untrustworthy government for at least another 2 years and it’s clear that no amount of political toxicity will change their minds on the NBN matter. So the best case scenario is that the NBN takes so long to transition to the new scheme that, by the time the next election comes around, the Coalition gets turfed before any substantial work on their shitty version has been done. It’s a fairy tale ending, I know (there hasn’t been a one term government in at least 80 years, as far as I can see) but it’s the only hope we have of getting something that isn’t a complete trainwreck like what the Coalition is proposing.

I could go on but I know all the ranting in the world on this blog isn’t going to change anything. All I can hope for is that Australia has its fill of an Abbott government by the time the elections roll around and they give him the boot he so rightly deserves. Of course that won’t stop me writing about the various fuckery that Turnbull and co get up to with regards to the NBN, but I know I’m preaching to the choir here.

OculusVR Headset Developer Preview 2

What’s the Deal with Facebook Acquiring Oculus VR?

Companies buying other companies is usually nothing to get excited about. Typically it’s a big incumbent player buying up a small company that’s managed to out-innovate them in a particular market segment, so instead of losing market share the incumbent chooses to acquire them. Other times it’s done in order to funnel the customer base onto the core product that the incumbent is known for, much like Google did with many of its acquisitions, Android included. Still, every so often a company will seemingly go out of its way to acquire another that honestly doesn’t seem to fit and we’re all left wondering what the hell they’re thinking. Facebook did exactly that today, acquiring the virtual reality pioneer OculusVR.

Facebook and OculusVR could not be more different, one being the biggest social network in the world with 1.23 billion active users per month and the other being a small company with only 50 employees focused on developing virtual reality technology. Whilst the long winded PR speech from Zuckerberg seems to indicate that they’re somehow invested in making the Oculus Rift the new way of experiencing the world, it’s clear that Facebook is going to be running it as its own little company, much like Instagram and WhatsApp before it. With the recent rumours of Facebook looking to purchase drone manufacturer Titan Aerospace, another company that doesn’t seem like a good fit for the Facebook brand, it begs the question: what’s Facebook’s plan here?

Most of the previous high profile acquisitions aligned directly with Facebook’s weaknesses, namely how badly they were doing in the mobile space. Instagram fit the bill perfectly in this regard as they managed to grow a massive mobile-only social network that rivalled Facebook’s own mobile client for usage. Whilst many questioned whether paying $1 billion for a company that hadn’t generated a single dollar was worth it, it seems like Facebook got some value out of it as their mobile experience has improved drastically since then. WhatsApp seemed to be in a similar vein, although the high cost of acquisition (even though this one had some revenue to back it up) makes it much more questionable than the Instagram purchase. Still, for both of them it was a matter of filling a gap that Facebook had; OculusVR doesn’t do that.

From my perspective it seems like Facebook is looking to diversify its portfolio and the only reason I can think of to justify that is their core business, the Facebook social network, is starting to suffer. I can’t really find any hard evidence to justify this but it does seem like the business community feels that Facebook is starting to lose its younger audience (teens specifically) to messenger apps. Acquiring WhatsApp goes some way to alleviate this but acquiring the most popular app every couple years isn’t a sustainable business model. Instead it looks like they might be looking to recreate the early Google environment, one that spawned multiple other lines of business that weren’t directly related to their core business.

This was definitely a successful model for Google, however most of the products and acquisitions they made at a similar stage to Facebook were centred around directing people back to their core products (search and advertising). Most of the moonshot ideas, whilst showing great initial results, have yet to become actual lines of business for them, with the two most notable, Glass and the self-driving car, still in the developmental or early adopter phase. Facebook’s acquisition of OculusVR doesn’t really fit into this paradigm; however, with OculusVR likely to be first to market with a proper virtual reality headset, it might just be a large bet that this market segment will take off.

Honestly it’s hard to see what Facebook’s endgame is here, both for OculusVR and themselves as a company. I think Facebook will stay true to their word about keeping OculusVR independent but I have no clue how they’ll draw on the IP and talent there to better themselves. Suffice to say not everyone is of the same opinion and this is something that Facebook and OculusVR are going to have to manage carefully lest the years of goodwill they’ve built up be dashed in a single moment. I won’t go as far as to say that I’m excited to see what these two will do together but I’ll definitely be watching with a keen interest.

 

Sony Project Morpheus Headset

Sony’s New VR Headset and the Peripheral Conundrum.

Ever since the Nintendo Wii was released back in 2006 there seems to have been a resurgence in non-standard peripherals for consoles, although most are simply motion based controllers in a fancy case. The issue with non-standard hardware was, and still is, that game developers can’t rely on a consumer having it and thus many choose to simply not use them. It’s for this reason (among others) that Donkey Kong 64 had to include the Expansion Pak: the game was inoperable without it and its distribution in the marketplace could not be relied on. However it seems that manufacturing costs have become cheap enough to make custom peripherals like this viable and thus they have returned in greater numbers than ever before.

The big issue I see with things like this is that once a good idea comes along it’s guaranteed that there will be a lot of copy cat ideas that come out not too long after. In the absence of any interface standards governing their interactions with the consoles this inevitably turns into an arms race of who can win the most support from developers, most often ending in a duopoly of two competing standards that will likely never completely agree with one another. Whilst I’m all for competition in the consumer space I’m also for an open set of standards so that I’m not forced to choose between two functionally equivalent products based on who or what they support.

Which is why Sony’s announcement today of Project Morpheus, their virtual reality headset, is slightly troubling to me.

Since it’s still in the prototype phase details are pretty scant on what its specifications will be, but it’s apparently rocking a 1080p display (I’m guessing there’s 2 of them in there) and can do full 360 degree tracking. Predictably the motion tracking relies on the PlayStation Eye accessory, indicating that it’s probably got most of the same technology in it that the DualShock 4/PlayStation Move controllers do. There don’t appear to be any headphones built into it but if it’s got all the same core bits and pieces as a regular PlayStation controller then I’m sure there’ll be a headphone port on it. Essentially it looks like the Oculus Rift did way back when it first debuted on Kickstarter, albeit far more reliant on Sony technology than that product will ever be.

Therein lies the crux of the issue with peripherals of this nature. Sure they add functionality and experiences that would otherwise be impossible to accomplish on the platform on its own, but when they’re built like Sony’s, reliant on a whole bunch of things that are only available on that platform, I almost immediately lose interest. As someone who plays across multiple platforms in the space of a year the last thing I want to do is flood my living room with all sorts of one-shot peripherals that have no use outside a couple of narrow scenarios. Instead I’d prefer one that works across a multitude of platforms, something which is technically possible (I won’t tell you how much research I did into finding a cross platform compatible arcade stick for the fighting games I play) but rarely occurs in the wild.

What I’m really getting at here is that whilst I’m super excited for these kinds of virtual reality devices to become commonplace I also want a set of open standards so that when you buy one you’ll be able to use it pretty much everywhere. Oculus Rift has a big head start on everyone in this regard so I really hope that they’ve seen this problem on the horizon and are working towards a solution for it. With something like that in place companies could then focus on making better headsets rather than trying to coax everyone into their ecosystem. It’s probably a pipe dream, I know, but it would be to the benefit of everyone if it happened.

Uncovered Telstra Pit

Beating a Dead Copper Network.

Not so long time readers will know that a month ago (exactly, strangely enough) I posted about the issues that a FTTN NBN wouldn’t fix, namely the horrendous state of the copper network that Telstra currently maintains. When I posted it I figured that my almost unusably slow Internet was the byproduct of the inclement weather and would soon rectify itself, something which had happened in the past. Unfortunately that wasn’t the case at all and after many days of sunshine and no improvement in sight I decided to do the thing I had been dreading: calling up Telstra to get the line investigated.

You can then imagine my elation when I saw that they now have a handy online form for you to fill out instead of calling them. Like a dutiful consumer I filled it out and sent it on its way, not caring about the multiple warnings about getting charged $120 if no fault was found. The site guaranteed me a response within a week and so I waited for them to respond. Almost like clockwork a response appeared from Telstra a week later claiming that the problem had been resolved and inviting me to take a survey about my experience. If my problem hadn’t been fixed, it said, I could say so on the survey and they’d continue investigating the issue.

Of course the fault hadn’t been fixed; no one had contacted me since I lodged it, so it was obvious that they hadn’t done any troubleshooting at all and that this was just the system automatically closing out a ticket that had seen no action. I replied to the survey in kind, outlining the issues I was experiencing and the troubleshooting steps I had taken to fix them. I received a call back a day later from the agent who was going to handle my case, who was very understanding of the situation I was in. However the earliest he could send out a technician was a month away, although he promised he’d get that moved up.

Apart from a couple of call backs where he told me he couldn’t do anything for me (even though he promised to keep me updated), I never heard from him again. Luckily the technician did arrive on the scheduled date, although at 8AM rather than the agreed time of after 5PM. Upon inspecting my outlet he asked if I was able to get a connection at all as the line was essentially unusable according to his diagnostic tools. After a quick trip to the pit he came back with an assessment that shouldn’t shock anyone but should make you lose all faith in the state of Telstra’s copper network.

Essentially the pit had been uncovered for quite some time, much like the one in the picture above, with the terminals exposed to the elements. Another technician had been by recently though, as they had put a temporary cover over the terminals to protect them, however this must have been done after my terminal had already degraded. A simple rewiring job fixed the issue but the pit still remains uncovered although, hopefully, the terminals are now protected from the elements so that it won’t happen again in the future.

The issue here is that I know this isn’t exactly uncommon as I’ve passed multiple pits in my travels around Canberra that are in a similar state. Getting speeds higher than what I get right now would mean a lot of remediation to the copper network, and nowhere in the government’s NBN plan does it stipulate that happening. This makes their promise of getting higher speeds to everyone cheaper and faster ring hollow, as the infrastructure they’re relying on to provide it simply isn’t capable of delivering the required outcomes. I could go on but I feel like I’ve said my piece about this a dozen times over already. I just wanted to highlight the amount of rigmarole I had to go through to get a single connection fixed which, when multiplied across an entire nation, shows how infeasible a FTTN NBN really is.

Open Compute Project Logo

Will Open Compute Ever Trickle Down?

When Facebook first announced the Open Compute Project it was a very exciting prospect for people like me. Ever since virtualization became the de facto standard for servers in the data center, hardware density has been the name of the game. Client after client I worked for was always seeking out ways to reduce their server fleet’s footprint, both by consolidating through virtualization and by taking advantage of technology like blade servers. However whilst the past half decade has seen a phenomenal increase in the amount of computing power available, and thus an increase in density, there hasn’t been another blade-style revolution. That was until Facebook went open kimono on their data center strategies.

The designs proposed by the Open Compute Project are pretty radical if you’re used to traditional computer hardware, primarily because they’re so minimalistic and because they expect a 12.5V DC input rather than the usual 240/120V AC that’s typical of all modern data centers. Other than that they look very similar to your typical blade server and indeed the first revisions appeared to get densities that were pretty comparable. The savings at scale were pretty tremendous however, as you gain a lot of efficiency by not running a power supply in every server, and their simple design meant their cooling characteristics were greatly improved. Apart from Facebook though I wasn’t aware of any other big providers utilizing ideas like this until Microsoft announced today that it was joining the project and contributing its own designs to the effort.

On the surface they look pretty similar to the current Open Compute standards although the big differences seem to come from the chassis. Instead of doing away with a power supply completely (like the current Open Compute servers advocate) it instead has a dedicated power supply in the base of the chassis for all the servers. Whilst I can’t find any details on it I’d expect this means it could operate in a traditional data center with an AC power feed rather than requiring the more specialized 12.5V DC. At the same time the density that they can achieve with their cloud servers is absolutely phenomenal, being able to cram 96 of them into a standard rack. For comparison the densest blade system I’ve ever supplied would top out at 64 servers and most wouldn’t go past 48.

This then begs the question: when will we start to see server systems like this trickle down to the enterprise and consumer market? Whilst we rarely have requirements for the scales at which these servers are typically used, I can guarantee there’s a market for servers of this nature as enterprises continue on their never-ending quest for higher densities and better efficiency. Indeed this feels like something the larger server manufacturers would do well to pursue, since these large companies investing in developing their own hardware platforms shows that there’s a niche that hasn’t yet been filled.

Indeed if the system can also accommodate non-compute blades (like the Microsoft one shows with the JBOD expansion) such ideas would go toe to toe with system-in-a-box solutions like the CISCO UCS which, to my surprise, quickly pushed its way to the #2 spot for x86 blade servers last year. Of course there are already similar systems on the market from others but in order to draw people away from that platform other manufacturers are going to have to offer something more and I think the answer to that lies within the Open Compute designs.

If I’m honest I think the real answer to the question posited in the title of this blog is no. Whilst it would be possible for anyone working at Facebook or Microsoft levels of scale to engage in something like this, unless a big manufacturer gets on board, Open Compute based solutions just won’t be feasible for the clients I service. It’s a shame because I think there are some definite merits to the platform, something which is validated by Microsoft joining the project.

ASUS Monitor with NVIDIA G-Sync

NVIDIA’s G-SYNC is the Business.

If there’s one thing that I can’t stand in any game it’s visual tearing and stuttering. This is the main reason why I play all my games with v-sync on: whilst I, like any gamer, enjoy the higher frame rates that come with turning it off, it’s not long before I’m turning it back on again after the visual tearing wreaks havoc on my experience. Unfortunately this has the downside of requiring me to over-spec my machine to ensure 60 FPS at all times (something which I do anyway, but it doesn’t last forever) or lowering the visual quality of the game, something which no one wants. It’s been an issue for so long that I had given up on a fix for it, although there was some hope in the form of 120Hz monitors. As it turns out there is hope and its name is G-SYNC.

The technology comes by way of NVIDIA and it’s a revolutionary way of having the GPU and your monitor work in tandem to remove tearing and stuttering. Traditionally, when you’re running with v-sync on like I am, your graphics card has to wait for the monitor’s refresh interval every time it wants to write a frame to it. In a game with a highly variable frame rate (which is anything that’s graphically intensive) this leads to stuttering, where repeated frames give the appearance of the game freezing up. Flipping v-sync off leads to the other problem, where the GPU can write frames to the monitor whenever it wants. This means that a new frame can start being written halfway through a scan cycle which, if there’s even a skerrick of motion, leads to the frames being out of alignment, causing visual tears. G-SYNC allows the GPU to dictate when the monitor should refresh, eliminating both these issues as every frame is synced perfectly.
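To make the difference concrete, here’s a minimal sketch (all frame times invented) of what ends up on screen in each case: on a fixed 60Hz refresh with v-sync, a frame that misses the ~16.7ms deadline means the previous frame gets shown again (stutter), whereas a G-SYNC style display simply refreshes whenever the frame is ready.

```python
# Toy model of the behaviour described above; every number here is made up.
REFRESH_MS = 1000 / 60

render_times_ms = [14, 15, 20, 22, 15, 30, 16]  # hypothetical per-frame GPU render times


def fixed_vsync(render_times):
    """Return the frame index shown at each fixed 60Hz refresh tick."""
    finish_times, t = [], 0.0
    for ms in render_times:
        t += ms
        finish_times.append(t)

    shown, tick, i = [], REFRESH_MS, 0
    while i < len(finish_times):
        if finish_times[i] <= tick:      # frame made this refresh
            shown.append(i)
            i += 1
        else:                            # missed it: the previous frame repeats
            shown.append(shown[-1] if shown else None)
        tick += REFRESH_MS
    return shown


def adaptive_refresh(render_times):
    """With GPU-driven refresh, every frame is shown exactly once as soon as it's done."""
    return list(range(len(render_times)))


print("fixed 60Hz + v-sync:", fixed_vsync(render_times_ms))       # repeated indices = stutter
print("adaptive (G-SYNC):  ", adaptive_refresh(render_times_ms))  # no repeats
```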

For me this is basically monitor nirvana as it gives me the advantages of running v-sync without any of the drawbacks. Better still all the monitors that support G-SYNC also run up to 144Hz, something which was going to be a requirement for my next monitor purchase. The only drawback that I see currently is that all these high refresh rate monitors are TN panels which aren’t as great when compared to the shiny new IPS panels that have been flooding the market recently. Honestly though I’m more than willing to trade off the massive resolution and better colour reproduction for solving my main visual gripe that’s plagued me for the better part of 20 years.

Unfortunately your options for getting a G-SYNC capable monitor right now are fairly limited. Whilst there are a good number of monitors that were recently announced as supporting G-SYNC none of them have become commercially available yet, with all of them scheduled for release in Q2 2014. You can, if you’re so inclined, purchase an ASUS VG248QE and then hit up NVIDIA directly for a G-SYNC upgrade kit (currently out of stock) and upgrade your monitor yourself, but it will require you to crack it open in order to do so. There are places that will do this for you, though they too are out of stock. Still, for something like this I’m more than willing to wait and, hopefully, it will mean that other components of my new computer build will come down a touch, enough to justify the extra expenditure on these new-fangled monitors.

Windows 8.1 With Bing wzor.net

Windows 8.1 With Bing: The First (Legal) Free Version of Windows?

As a poor student the last thing I wanted to pay for was software. Whilst the choice to pirate a base operating system is always questionable (it’s the foundation on which all your computing activities rely), it was either pay the high license cost or find an alternative. I’ve since found numerous legitimate alternatives of course (thank you BizSpark) but not everyone is able to take advantage of them. Thus for many the choice to upgrade their copy of Windows typically comes with the purchase of a new computer, something which doesn’t happen as often as it used to. I believe that this is one factor that’s affected the Windows 8/8.1 adoption rates and it seems Microsoft might be willing to try something radical to change it.

Rumours have been making the rounds that Microsoft is potentially going to offer a low cost (or completely free) version of their operating system dubbed Windows 8.1 with Bing. Details as to what is and isn’t included are still somewhat scant but it seems like it will be a full version without any major strings attached. There are even musings about some of Microsoft’s core applications, like Office, being bundled in with the new version of Windows 8.1. This wouldn’t be unusual (they already do it with Office Core for the Surface), however it’s those consumer applications that Microsoft draws a lot of its revenue from in this particular market segment, so their inclusion would mean the revenue would have to be made up somewhere else.

Many are touting this release as being targeted mostly at Windows 7 users who are staving off making the switch to Windows 8. In terms of barriers to entry theirs are by far the lowest, although they’re also the ones who have the least to gain from the upgrade. Depending on the timing of the release though this could also be a boon to those XP laggards who run out of support in just over a month. The transition from XP to Windows 8 is much more stark however, both in terms of technology and user experience, but there are numerous things Microsoft could do in order to smooth it over.

Whilst I like the idea there’s still the looming question of how Microsoft would monetize something like this as releasing something for free and making up the revenue elsewhere isn’t really their standard business model (at least not with Windows itself). The “With Bing” moniker seems to suggest that they’ll be relying heavily on browser based revenue, possibly by restricting users to only being able to use Internet Explorer. They’ve got into hot water for doing similar things in the past although they’d likely be able to argue that they no longer hold a monopoly on Internet connected devices like they once did. Regardless it will be interesting to see what the strategy is as the mere rumour of something like this is new territory for Microsoft.

It’s clear that Microsoft doesn’t want Windows 7 to become the next XP and is doing everything they can to make the switch attractive to users. They’re facing an uphill battle as a good 30% of Windows users are still on XP, ones who are unlikely to change even in the face of imminent end of life. A free upgrade might be enough to coax some users across, however Microsoft needs to start selling the transition from any of their previous versions as a seamless affair, something that anyone can do on a lazy Sunday afternoon. Even then there will still be holdouts but at least it’d go a long way to pushing the other versions’ market share down into the single digits.

 

Artemis pCell pWave

The Artemis pCell: Making Interference Work For You.

It will likely come as a shock to many to find out that Australia leads the world in terms of 4G speeds, edging out many other countries by a very healthy margin. As someone who’s a regular user of 4G for both business and pleasure I can attest to the fact that the speeds are phenomenal, with many of the CBD areas around Australia giving me 10~20Mbps on a regular basis. However the speeds have noticeably degraded over time; back in the early days it wasn’t unheard of to get double those speeds, even if you were on the fringes of reception. The primary factor in this is an increased user base: as the network becomes more loaded, the bandwidth available to everyone starts to head south.

There are two factors at work here, both of which influence the amount of bandwidth that a device will be able to use. The primary one is the size of the backhaul pipe on the tower as that is the hard limit on how much traffic can pass through a particular end point. The second, and arguably just as important, factor is the number of devices vs the number of antennas on the base station as this will determine how much of the backhaul speed can be delivered to a specific device. This is what I believe has been mostly responsible for the reduction in 4G speeds I’ve experienced but according to the engineers at Artemis, a new communications start up founded by Steve Perlman (the guy behind the now defunct OnLive), that might not be the case forever.
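Put crudely in numbers (all of them invented), the speed any one handset sees is the smaller of its fair share of the tower’s backhaul and whatever the radio link itself can deliver:

```python
# Rough sketch of the two limits above, with made-up figures: the tower's backhaul
# is shared between active devices, while the air interface caps what any single
# device can pull no matter how empty the cell is.

def per_device_mbps(backhaul_mbps, active_devices, radio_cap_mbps):
    fair_share = backhaul_mbps / active_devices
    return min(fair_share, radio_cap_mbps)

# Early days: few users, so the radio link is the bottleneck.
print(per_device_mbps(backhaul_mbps=1000, active_devices=10, radio_cap_mbps=40))   # 40
# Today: the same tower with ten times the users, and your share of backhaul bites.
print(per_device_mbps(backhaul_mbps=1000, active_devices=100, radio_cap_mbps=40))  # 10
```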

Artemis’ new system hopes to solve the latter part of the equation not by eliminating signal interference (that’s by definition impossible) but by utilizing it to create pCells (personal cells) that are unique to each and every device on their network. According to Perlman this would allow an unlimited number of devices to coexist in the same area and yet still receive the same amount of signal and bandwidth as if each of them were on the network all by themselves. Whilst he hasn’t divulged exactly how this is done yet he has revealed enough for us to get a good idea about how it functions and I have to say it’s quite impressive.

So the base stations you see in the above picture are only a small part of the equation; indeed from what I’ve read they’re not much different to a traditional base station under the hood. The magic comes in the form of the calculations that are done prior to the signal being sent out: instead of blindly broadcasting (like current cell towers do) they use your location, and that of everyone else connected to the local pCell network, to determine how the signals are sent out. This then manifests as a signal that’s coherent only at the location of your handset, giving you the full amount of signal bandwidth regardless of how many other devices are nearby.
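Artemis hasn’t published the details, but the underlying principle (constructive interference at a chosen point) is easy to demonstrate: if each transmitter pre-compensates for its propagation delay to your handset, the carriers all arrive in phase there and tend to partially cancel everywhere else. A quick numpy sketch with made-up positions and a made-up carrier frequency:

```python
# Toy illustration of location-targeted coherence. This is generic constructive
# interference, not Artemis' actual (undisclosed) algorithm; positions, frequency
# and geometry are all invented.
import numpy as np

c = 3e8                      # speed of light, m/s
f = 1.8e9                    # hypothetical 1.8 GHz carrier
k = 2 * np.pi * f / c        # wavenumber

txs = np.array([[0, 0], [10, 0], [0, 10], [10, 10]], dtype=float)  # 4 transmitters
target = np.array([4.0, 7.0])       # where your handset is
elsewhere = np.array([6.5, 2.0])    # some other nearby point

def combined_amplitude(point, phases):
    """Magnitude of the sum of unit-amplitude carriers from each transmitter at a point."""
    dists = np.linalg.norm(txs - point, axis=1)
    return np.abs(np.sum(np.exp(1j * (k * dists + phases))))

# Choose each transmitter's phase to cancel its propagation delay to the target.
phases = -k * np.linalg.norm(txs - target, axis=1)

print("at the handset:", combined_amplitude(target, phases))     # ~4.0: all carriers in phase
print("elsewhere:     ", combined_amplitude(elsewhere, phases))  # typically well below 4
```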

I did enough communications and signal processing at university to know something like this is possible (indeed it’s a similar kind of technology that powers “sound lasers”) and could well work in practice. The challenges facing this technology are many but from a technical standpoint there are two major ones I can see. Firstly it doesn’t solve the backhaul bandwidth issue, meaning that there’s still an upper limit on how much data can be passed through a tower, regardless of how good the signal is. For a place like Australia this would be easily solved by implementing a full fibre network which, unfortunately, seems to be off the cards currently. The second problem is more nuanced and has to do with the calculations required and the potential impacts they might have on the network.

Creating these kinds of signals, ones that are only coherent at a specific location, requires a fair bit of back end calculation to occur before the signal can be sent out. The more devices you have in any particular area the more challenging this becomes and the longer the calculations take before the signal can be generated. This has the potential to introduce lag into the network, something that might be somewhat tolerable from a data perspective but is intolerable when it comes to voice transmission. To their credit Artemis acknowledges this challenge and has stated that their system can currently handle up to 100 devices, so it will be very interesting to see if it can scale out like they believe it can.
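To get a feel for why that per-update cost grows with device count, here’s a toy benchmark using zero-forcing precoding, a textbook technique for steering signals cleanly onto specific receivers. This is purely an assumption for illustration (Artemis hasn’t said what their system actually computes); the point is that the dominant step, a channel-matrix pseudo-inverse, climbs quickly as devices are added.

```python
# Illustrative only: time a zero-forcing precoder update for increasing device
# counts. The channel matrices are random stand-ins for real channel estimates.
import time
import numpy as np

def precoder_update_seconds(n_devices, n_antennas):
    H = np.random.randn(n_devices, n_antennas) + 1j * np.random.randn(n_devices, n_antennas)
    start = time.perf_counter()
    np.linalg.pinv(H)  # zero-forcing precoder W = pinv(H)
    return time.perf_counter() - start

for n in (10, 50, 100, 200):
    # Assume somewhat more antennas than devices, which this kind of scheme needs anyway.
    ms = precoder_update_seconds(n, int(n * 1.5)) * 1000
    print(f"{n:4d} devices: {ms:6.2f} ms per channel update")
```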

Of course this all hinges on the incumbent cellular providers getting on board with this technology, something which a few have already said they’re aware of but haven’t gone much further than that. If it works as advertised then it’s definitely a disruptive technology, one that I believe should be adopted everywhere, but large companies tend to shy away from things like this, which could strongly hamper adoption. Still this tech could have wide reaching applications outside the mobile arena as things like municipal wireless could also use it to their advantage. Whether it will see application there, or anywhere for that matter, will be something to watch out for.

 

Blue Angels’ C-130 Conducting a Jet Assisted Take Off.

Most aircraft capable of Short Take-Offs and Landings (STOL) are small and nimble planes, either designed for use in adverse conditions or, more famously, fighter jets that find their homes on aircraft carriers. The reasons for this are pretty simple: the larger you make the aircraft the more power you require to shorten its take off, and past a certain point regular old jet engines simply aren’t going to cut it any more. However there have been a few examples of large aircraft using JATO rockets to drastically shorten their take off profile, the most notable of which is the Blue Angels’ C-130, dubbed Fat Albert:

[Video: the Blue Angels’ Fat Albert performing a JATO take off]

If you’ve ever seen one of these beasts take off in person (or even, say, an Airbus A380 which is a monster by comparison) then you’ll know that they seem to take forever to get off the ground. Strapping 8 JATOs that produce 1000lbs of thrust to the back of one makes a C-130 look like a fighter jet when it’s taking off, gaining altitude at a rate that just seems out of this world. Of course this then begs the question of why you’d want to do something like this, as it’s not often that a C-130 or any of its brethren find themselves in a situation where taking off that quickly would be necessary.

In truth it isn’t necessary, as the missions that these large craft fly are typically built around their requirement for a long runway. There have been some notable exceptions though, with the most recent being the Iranian Hostage Crisis that occurred over 30 years ago. After the failure of the first rescue attempt the Pentagon set about creating another mission in order to rescue the hostages. The previous mission’s failure was largely blamed on the use of a large number of heavy lift helicopters, many of which didn’t arrive in operational condition. The thinking was to replace those helicopters with a single C-130 that was modified to land in a nearby sports stadium for evacuation of the extraction teams and the hostages.

The mission was called Operation Credible Sport and was tasked with modifying 2 C-130 craft to be capable of landing in a tight space. They accomplished this by the use of no less than 30 JATO rockets: 8 facing backward (for take off), 8 facing forward (for braking on landing), 8 pointed downwards (to slow the descent), 4 on the wings and 2 on the tail. The initial flight test showed that the newly modified C-130 was capable of performing the take-off in the required space, however on landing the 8 downward facing rockets failed to fire and, in combination with one of the pilots accidentally triggering the braking rockets early, the craft met its tragic demise, thankfully without injury to any of the crew.

Even Fat Albert doesn’t do JATO runs any more as a shortage of the required rocketry spelled an end to it in 2009. It’s a bit of a shame as it’s a pretty incredible display but considering it had no practical use whatsoever I can see why they discontinued it. Still the videos of it are impressive enough, at least for me anyway.

Shitty Telstra Copper Pit

The Problems a FTTN NBN Won’t Fix.

Growing up in a rural area meant that my Internet experience was always going to be below that of my city living counterparts. This wasn’t much of an issue for a while as dial-up was pretty much all you could hope for anywhere in Australia, however the advent of broadband changed this significantly. From then on the disparity in Internet accessibility was pretty clear and the gap only grew as time went on. This didn’t seem to change much after I moved into the city either, always seeming to luck out with places that connected at speeds far below the advertised maximum that our current gen ADSL lines were capable of. Worse still they almost always seemed to be at the mercy of the weather, with adverse conditions dropping speeds or disconnecting us from the Internet completely.

My current place of residence never got great speeds, topping out at 6Mbps and only managing to sustain that connection for a couple of hours before falling over. I can expect to get a pretty stable 4Mbps connection most of the time, however the last few days have seen Canberra get a nice amount of rain and the speeds I was able to get barely tickled 1Mbps no matter how many times I reconnected, reset my modem or shouted incoherently at the sky. It was obvious then that my situation was caused by the inclement weather filling my local Telstra pit with water, which sent the signal to noise ratio into the ground. Usually this is something I’d just take on the chin, but this situation was meant to have been improved by now and would have been if it weren’t for the current government.
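For what it’s worth, the rough physics lines up with that experience: achievable line rate scales with log2(1 + SNR) over the usable bandwidth (Shannon-Hartley), so a waterlogged pit dragging the SNR down takes a big bite out of an already marginal link. The figures below are invented purely for illustration:

```python
# Illustrative only: Shannon-Hartley capacity for a roughly ADSL-sized band at
# two made-up SNR levels, showing how a soggy pit turns a ~4Mbps line into ~1-2Mbps.
from math import log2

def capacity_mbps(bandwidth_mhz, snr_db):
    snr_linear = 10 ** (snr_db / 10)
    return bandwidth_mhz * log2(1 + snr_linear)  # MHz * bits/Hz = Mbps

print(round(capacity_mbps(1.1, 12), 1))  # a dry week: ~4.5 Mbps
print(round(capacity_mbps(1.1, 3), 1))   # after days of rain: ~1.7 Mbps
```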

Prior to the election my area was scheduled to start construction in October last year, however it became one of the areas that disappeared off NBNco’s deployment map shortly after the Abbott government came into power. This meant I would instead come under their revised plan to bring in FTTN through VDSL, which has the unfortunate consequence of leaving me on the known-bad infrastructure in my street. So my speeds might improve but it’d be unlikely that I’d get “at least” 20Mbps and I could guarantee that every time it rained I’d be in for another bout of tragic Internet speeds, if I could connect at all.

The big issue with the Liberals’ NBN plan is that my situation is by no means unique, indeed it’s quite typical thanks to the aging infrastructure that is commonplace throughout much of Australia. Indeed the only place I know of that gets speeds as advertised for its cable run is my parents’ place, and they still live in a rural area. The reason for this is that the copper is new out there and is quite capable of carrying the higher speeds. My infrastructure on the other hand, in a place where you’d expect it to be regularly maintained, doesn’t hold a candle to theirs and will continue to suffer from issues after we get “upgraded”.

A full FTTP NBN on the other hand would eliminate these issues, providing ubiquitous access that’s, above all, dependable and reliable. The copper last mile run that the majority of Australia will end up using as part of the Liberals’ NBN just can’t provide that, not without significant remediation which neither Telstra nor the government has any interest in doing. Hopefully the Liberal government wakes up and realises this before we get too far down the FTTN hole, as it’s been shown that the majority of Australians want the FTTP NBN and they’re more than willing to pay for it.