Technology

Sometimes The Internet Does Forget.

Last year I fucked up.

There’s really no other way to put it: I made the rookie mistake of not backing up everything before I started executing commands that could have some really bad consequences. I’d like to say it was hubris, thinking that my many years in the industry had made me immune to things like this, but in reality it was just my lack of knowledge of how certain commands worked. Thankfully it wasn’t a dreaded full wipe and I was able to restore the essence of this blog (i.e. the writing) without too much trouble; however, over time it became apparent just how incomplete that restore was. Whilst I was able to recover quite a lot of the pictures I’ve used over the years, plenty were still missing, including some from my favourite posts.

The Internet Never Forgets

Thankfully, after writing some rather complicated PowerShell scripts, I was able to bulk restore a lot of images. Mostly this was because of the way I do the screenshots for my reviews, meaning there was a copy of pretty much everything on my PC; I just had to find them. I’ve been reviewing games for quite some time though, and that’s meant I’ve changed PCs a couple of times, so some of the images are lost in the sea of old hard drives I have lying around the place. Whilst I was able to scrounge up a good chunk of them by finding an old version of the server I used to host locally, there were still some images that eluded me, forcing me to think of other places that might have copies of them.
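For the curious, those scripts weren’t doing anything particularly clever: they just walked every old drive I could mount and pulled back any file whose name matched something on the missing list. A rough sketch of the approach (the paths and the missing-images list below are made up purely for illustration):

```powershell
# Hypothetical paths for illustration: a text file of missing image filenames,
# a few old drives/backups to trawl through and a folder to collect matches in.
$missing   = Get-Content 'C:\restore\missing-images.txt'
$oldDrives = 'D:\', 'E:\old-pc-backup\'
$target    = 'C:\restore\recovered'

New-Item -ItemType Directory -Path $target -Force | Out-Null

foreach ($drive in $oldDrives) {
    Get-ChildItem -Path $drive -Recurse -File -ErrorAction SilentlyContinue |
        Where-Object { $missing -contains $_.Name } |
        ForEach-Object {
            # Keep the first copy found, skip anything already recovered
            $dest = Join-Path $target $_.Name
            if (-not (Test-Path $dest)) { Copy-Item $_.FullName $dest }
        }
}
```

It’s brute force, but when you don’t know which drive a screenshot ended up on, brute force is about all you’ve got.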

My site has been on the Wayback Machine for some time now so I figured that there would (hopefully) be a copy of most of my images on there. For the most part there was, even the full sized versions, however there were still multiple images that weren’t there either. My last bastion of hope was Google’s cache of my website; however, they only store (or at least, make available) the latest version that they have indexed. Sometimes this meant that I could find an image here or there, as the images seem to be archived separately and aren’t deleted if you remove the page, but it was still a hit or miss affair. In the end I managed to get the list of missing images down from about 2000 to 150 and, thanks to a fortuitous hard drive backup I found, most of those will hopefully be eliminated in short order.
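If you ever find yourself doing a similar recovery, the Wayback Machine exposes a CDX API that lists every capture it holds for a given URL pattern, which beats clicking through snapshots one at a time. The sketch below shows the general idea; the domain and the WordPress-style uploads path are placeholders, and the parameters (along with the id_ trick for grabbing the raw file) are worth checking against the current API documentation:

```powershell
# Query the Wayback Machine's CDX API for every capture under an uploads path.
# example.com and the wp-content/uploads prefix are placeholders for illustration.
$domain = 'example.com'
$cdxUrl = "http://web.archive.org/cdx/search/cdx?url=$domain/wp-content/uploads/*" +
          '&output=json&filter=statuscode:200&collapse=urlkey'

$rows = Invoke-RestMethod -Uri $cdxUrl
# The first row is the header: urlkey, timestamp, original, mimetype, statuscode, digest, length
$captures = $rows | Select-Object -Skip 1

foreach ($row in $captures) {
    $timestamp = $row[1]
    $original  = $row[2]
    # The id_ modifier asks for the archived file as captured, without the Wayback toolbar
    $archived  = "https://web.archive.org/web/${timestamp}id_/$original"
    Invoke-WebRequest -Uri $archived -OutFile ([System.IO.Path]::GetFileName($original))
}
```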

What kept me going throughout most of this was the mantra that many privacy advocates and parents alike have parroted many times: the Internet never forgets. For the most part I’d be inclined to agree with this as the vast majority of the information that I had put out there, even though I had erased the source, was still available for anyone to view. However the memory of the Internet, much like that of the humans that run it, isn’t a perfect one, routinely forgetting things, jumbling them up or just plain not remembering them at all. The traces of what you’re searching for are likely there somewhere, but there’s no guarantee that the Internet will remember everything for you.

Turnbull’s MTM NBN Will be Later, Slower and More Expensive.

There are 2 main reasons why I’ve avoided writing about the NBN for the last couple of months. For the most part it’s been because there’s really been nothing of note to report, and sifting through hours of Senate proceedings to find a nugget of new information to write about isn’t something I’m particularly enthused about doing. Secondly, as someone who’s deeply interested in technology (and makes his living out of services that could make heavy use of the NBN), the current state of the project is, frankly, infuriating, and I don’t think people enjoy reading about how angry I am. Still, it seems the Liberals’ MTM NBN plan has turned from a hypothetical farce into a factual one and I’m not one to pass up an opportunity to lay down criticism where criticism is due.

Turnbull's Disinterested Face

The slogan the Liberals ran with during their election campaign was “Fast. Affordable. Sooner.”, promising that they’d be able to deliver at least 25Mbps to every Australian by the end of 2016, ramping up to 50Mbps by the end of 2019. This ended up being called the Multi-Technology Mix (MTM) NBN, which would now incorporate the existing HFC networks rather than overbuilding them and would switch to FTTN technology rather than FTTP. The issues with this plan were many and varied (ones I’ve covered in great detail in the past) and suffice to say the technology community in Australia didn’t buy into the idea one bit. Indeed as time has progressed the core promises of the plan have dropped off one by one, with NBNCo now proceeding with the MTM solution despite a cost-benefit analysis not having been completed and the speed guarantee now gone completely. If that wasn’t enough, it’s come to my attention that even though they’ve gone ahead with the solution NBNCo hasn’t yet been able to connect a single customer to FTTN.

It seems the Liberals’ promises simply don’t stand up to reality. Fancy that.

The issues they seem to be encountering with deploying their FTTN trial are the very ones that many of the more vocal critics had been harping on about for a long time, primarily the power and maintenance requirements of FTTN cabinets. Their Epping trial has faced several months of delays because they weren’t able to source adequate power, a problem which still doesn’t have a timeline for a solution. The FTTP NBN, which uses Gigabit Passive Optical Network (GPON) technology, doesn’t suffer from this kind of issue at all, something that was showing in the ramping deployment numbers NBNCo was seeing before it stopped its FTTP rollouts. If just the trial of the MTM solution is having this many issues then it follows that the full rollout will fare no better, and that puts an axe to the Liberals’ election promises.

We’re rapidly approaching the end of this year, which means the timeline the Liberals laid out is starting to look less and less feasible. Even if the trial site gets everyone on board before the end of this year that still leaves only 2 years for the rest of the infrastructure to be rolled out. The FTTP rollout was never approaching that kind of pace, so there’s no way in hell the MTM solution will be able to accomplish it, even with their little cheat of counting the HFC networks.
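To put some very rough numbers on that (the total premises figure here is just a ballpark assumption on my part, not an official one), covering something like 10 million premises in the 2 years remaining works out to a frankly absurd weekly run rate:

```latex
\frac{10{,}000{,}000 \text{ premises}}{2 \text{ years} \times 52 \text{ weeks}} \approx 96{,}000 \text{ premises per week}
```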

So there goes the idea of us getting the NBN sooner, but do any of their other promises hold true?

Well, the speed guarantee went away some time ago, so even the Liberals admit that their solution won’t be fast; the only thing they might be able to argue is that they can do it cheaper. Unfortunately for Turnbull, his plan assumed that Telstra would just hand over its copper free of charge, something Telstra had no interest in doing. Indeed, as part of the renegotiated contract with Telstra, NBNCo will be paying some $150 million for access to 200,000 premises’ worth of copper which, if extrapolated to all of Australia, works out to around $5.8 billion. That does not include the cabinets themselves or remediating any copper that can’t handle FTTN speeds, both of which will quickly eat into any savings on the deal. Nor does it cover the ongoing costs these cabinets will incur over their lifetimes, which are an order of magnitude higher than those of a GPON network.
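The arithmetic behind that extrapolation is straightforward enough, assuming a copper footprint somewhere in the order of 7.7 million premises (which is roughly what the quoted total implies):

```latex
\frac{\$150 \text{ million}}{200{,}000 \text{ premises}} = \$750 \text{ per premise}, \qquad
\$750 \times 7.7 \text{ million premises} \approx \$5.8 \text{ billion}
```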

I know I’m not really treading any new ground by writing all this, but the MTM NBN is beyond a joke now: a failed election promise that’s done nothing to help the Liberals’ waning credibility and will only do damage to Australia’s technology sector. Even if they do get voted out come next election it’ll be years before the damage can be undone, which is a royal shame as the NBN was one of the best bits of policy to come out of the tumultuous time that was Labor’s last 2 terms in office. Maybe one day I’ll be able to look back on all my rants on this topic and laugh, but until that day comes I’ll just be yet another angry IT sector worker, forever cursing the government that took away my fibre filled dream.

Print Yourself a House.

Ever since I first saw a 3D printer I’ve wondered how long it’d be before they started scaling up in size. I’m not talking about the incremental size improvements we see every so often (like with the new Makerbot Z18); no, I was wondering when we’d get industrial scale 3D printers that could construct large structures. The step between your run of the mill desktop 3D printer and something of that magnitude isn’t a simple matter of scaling up the various components, as many of the assumptions that hold at desktop size simply don’t apply when you get into large scale construction. It seems that day has finally come, as Suzhou Yingchuang Science and Trade Development Co has developed a 3D printer capable of creating full size houses:

(Embedded video: the 3D house printing process in action)

Details of the makeup of the material used, as well as its structural properties, aren’t currently forthcoming; however, the company behind it claims that it’s about 5 times as hard as traditional building materials. They’re apparently using a few of these 3D printed buildings as offices for some of their employees, so you’d figure they’re somewhat habitable, although I’m sure they’re in a much more finished state than the ones shown above. Still, for a first generation product they seem pretty good and, if the company’s claims hold up, they could become an attractive way to provide low cost housing to a lot of people.

What I’d really be interested to see is how the cost and materials used compare to those of traditional construction. It’s a well known fact that building new housing is an incredibly inefficient process, with a lot of material wasted during construction. Methods like this provide a great opportunity to reduce the amount of waste generated as there’s no excess material left over once construction has completed. Further refinement of the process could also ensure that post-construction work, like cabling and wiring, is done in a much more efficient manner.

I’m interested to see how inventive they can get with this, as there’s potentially a world of new housing designs out there to be exploited using this method. That will likely be a long time coming, however, as not everyone will have access to one of these things to fiddle around with, but I’m sure just the possibility of a printer of this magnitude has a few people thinking about it already.

Windows Threshold: Burying Windows 8 for the Sake of 9.

It’s hard to deny that Windows 8 hasn’t been a great product for Microsoft. In the 2 years it’s been on the market it’s managed to secure some 12% of total market share, which might sound great on the surface; however, its predecessor managed to nab some 40% in a similar time frame. The reasons behind this are wide and varied, but there’s no mistaking that a large part of it was the Metro interface, which just didn’t sit well with primarily desktop users. Microsoft, to their credit, has responded to this criticism by giving consumers what they want but, like Vista, the product that Windows 8 is today is overshadowed by its rocky start. It seems clear now that Microsoft is done with Windows 8 as a platform and is looking towards its successor, codenamed Windows Threshold.

Windows Threshold

Not a whole lot is known about what Threshold will entail, but what is known points to a future where Microsoft is distancing itself from Windows 8 in the hopes of getting a fresh start. It’s still not known whether Threshold will become Windows 9 (or whatever name they might give it), however the current release date is slated for sometime next year, in keeping with Microsoft’s new dynamic release schedule. This would also put it at 3 years after the initial release of Windows 8, which ties into the larger Microsoft product cycle. Indeed most speculators are pegging Threshold to be much like the Blue release of last year, with all Microsoft products receiving an update upon its release. What interests me about this release isn’t so much what it contains, more what it’s going to take away from Windows 8.

Whilst Microsoft has made inroads into making Windows 8 feel more like its predecessors, the experience is still deeply tied to the Metro interface. Pressing the Windows key doesn’t bring up the start menu and Metro apps still have that rather obnoxious behaviour of taking over your entire screen. Threshold, however, is rumoured to do away with this, bringing back the start menu with a Metro twist that will allow you to access those kinds of applications without having to open up the full interface. Indeed for desktop systems, those that are bound to a mouse and keyboard, Metro will be completely disabled by default. Tablets and other hybrid devices will still retain the UI, with the latter switching between modes depending on how they’re being used (desktop when docked, Metro when in tablet form).

From memory such features were actually going to make up part of the next Windows 8 update, not the next version of Windows itself. Microsoft did add some similar features to Windows 8 in the last update (desktop users now default to the desktop on login, not Metro) but the return of the start menu and the other improvements are seemingly not destined for Windows 8 anymore. Considering just how poor Windows 8’s adoption rate has been this isn’t entirely surprising, and Microsoft might be looking for a clean break away from Windows 8 in order to drive better adoption of Threshold.

It’s a strategy that has worked well for them in the past so it shouldn’t be surprising to see Microsoft doing this. Those of us who actually used Vista (after it was patched to remedy all the issues) knew that Windows 7 was Vista under the hood; it was just visually different enough to break past people’s preconceptions about it. Windows Threshold will likely be the same: different enough from its direct ancestor that people won’t recognise it, but sharing the same core that powered it. Hopefully this will be enough to ensure that Windows 7 doesn’t end up being the next XP, as I don’t feel that’s a mistake Microsoft can afford to keep repeating.

Samsung’s V-NAND Has Arrived, and It’s Awesome.

When people ask me which one component of their PC they should upgrade my answer is always the same: get yourself an SSD. It’s not so much the raw performance characteristics that make the upgrade worth it, more that all those things many people hate about computers seem to melt away when you have an SSD behind them. All your applications load near instantly, your operating system feels more responsive and those random long lock ups, where your hard drive seems to churn over for ages, simply disappear. However the one drawback has always been their capacity and cost, both an order of magnitude away from good old spinning rust. Last year Samsung announced their plans to change that with V-NAND and today they deliver on that promise.

Samsung 850 Pro V-NAND SSD

The Samsung 850 Pro is the first consumer drive to be released with V-NAND technology and is available in sizes up to 1TB. The initial promise of 128Gbit per chip has unfortunately fallen a little short of its mark, with this current production version only delivering around 86Gbit per chip. This is probably due to economic reasons, as the new chips under the hood of this SSD are smaller than the first prototypes, which helps to increase the yield per wafer. Interestingly enough these chips are being produced on an older lithography process, 30nm instead of the 20nm that’s currently standard for most NAND chips. That might sound like a step back, and indeed it would be for most hardware, however the performance of the drive is pretty phenomenal, meaning that V-NAND is only going to get better with time.

Looking at the performance reviews the Samsung 850 Pro seems to be a top contender, if not the best, in pretty much all of the categories. In the world of SSDs having consistently high performance like this across a lot of categories is very unusual as typically a drive manufacturer will tune performance to a certain profile. Some favour random reads, others sustained write performance, but the Samsung 850 Pro seems to do pretty much all of them without breaking a sweat. However what really impressed me about the drive wasn’t so much the raw numbers, it was how the drive performed over time, even without the use of TRIM.

Samsung 850 Pro 512GB HD Tach results

SSDs naturally degrade in performance over time, not due to the components wearing out but due to the nature of how they read and write data. Essentially it comes down to blocks needing to be checked to see whether they’re free before they can be written to, a rather costly process. A new drive is all blank space, which means these checks don’t need to be done, but over time blocks end up in unknown states due to all the writing and rewriting. The TRIM command tells the SSD that certain blocks have been freed up, allowing the drive to flag them as unused and recover some of that performance. The graph above shows what happens when the new Samsung 850 Pro reaches that performance degradation point, even without the use of TRIM. If you compare that to other SSDs this kind of consistent performance almost looks like witchcraft, but it’s just the V-NAND technology showing one of its many benefits.
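As a quick aside, if you’re on Windows and want to check whether TRIM is actually enabled it’s a one liner; a result of 0 means delete notifications (and therefore TRIM) are on:

```powershell
# DisableDeleteNotify = 0 means TRIM is enabled, 1 means it has been turned off
fsutil behavior query DisableDeleteNotify

# Re-enable it if it has been switched off (requires an elevated prompt)
fsutil behavior set DisableDeleteNotify 0
```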

Indeed Samsung is so confident in these new drives that it’s giving all of them a 10 year warranty, something you can’t find even on good old spinning rust drives anymore. I’ll be honest: when I first read about V-NAND I had a feeling that the first drives would likely be failure ridden write offs, like most new technologies are. However this new drive from Samsung appears to be the evolutionary step that all SSDs need to take, as this first iteration device is just walking all over the competition. I was already sold on a Samsung SSD for my next PC build but I think the 850 Pro just made the top of my list.

Now if only those G-SYNC monitors could come out already, then I’d be set to build my next gen gaming PC.

Google’s Cardboard: VR For The Masses.

I can remember my first encounter with virtual reality way back in the 90s. It was a curiosity more than anything else, something that was available at this one arcade/pizza place in the middle of town. You’d go in and there it would be: two giant platforms containing people with their heads strapped into oversized head gear. On the screens behind them you could see what they were seeing, a crude polygonal world inhabited by the other player and a pterodactyl. I didn’t really think much of it at the time, mostly since I couldn’t play it anywhere but there (and that was an hour’s drive away), but as I grew older I always wondered what had become of that technology. Today VR is on the cusp of becoming mainstream and it looks like Google wants to thrust it into the limelight.

Google Cardboard

Meet Google Cardboard, the ultra low cost virtual reality headset that Google gave out to every attendee at I/O this year. It’s an incredibly simple idea, using your smartphone’s screen and a pair of lenses to send a different image to each eye. Indeed if you were so inclined a similar system could be used to turn any screen into a VR headset, although the lenses would need to be crafted for the right dimensions. With that in mind the range of handsets that Google Cardboard supports is a little limited, mostly to Google Nexus handsets and some of their closely related cousins, but I’m sure future incarnations that support a wider range of devices won’t be too far off. Indeed if the idea has piqued your interest you can get an unofficial version for the low cost of $25, a bargain if you’re looking to dabble with VR.

Compared to the original OculusVR specs most smartphones are more than capable of driving Google Cardboard with an acceptable level of performance. My current phone, the Sony Xperia Z, has a full 1080p resolution and enough grunt to run some pretty decent 3D applications. That, combined with the bevy of sensors in most modern smartphones, makes Google Cardboard a pretty brilliant little platform for testing out what you can do with VR. Of course it also means the experience you get will vary wildly depending on what handset you have, but for those looking for a cheap platform to validate ideas on it’s hard to argue against it.

Of course this raises the question of what Google’s larger plan is for introducing this concept to the world. Ever since the breakaway success that was the OculusVR it’s been obvious that there’s consumer demand for VR, and it only seems to be increasing as time goes on. However most applications are contained solely within the games industry, with only a few interesting experiments (like Living with Lag) breaking outside that mould. There’s a ton of augmented reality applications on Android which could potentially benefit from widespread adoption of something like Cardboard, however beyond that I’m not so sure.

I think it’s probably a gamble on Google’s part, as history has proven that throwing a concept out to the masses is a great way to unearth innovative ideas. Google might not have any solid plans for developing VR of this nature themselves, but the community that arises around the idea could prove a fruitful place for applications that no one has thought of before. I had already committed myself to a retail Oculus for when it comes out, however, so whilst Cardboard might be a curiosity my heart is unfortunately promised to another.

Facebook is Being Creepy Again, But They Didn’t Have to be.

In the now decade long history of Facebook we’ve had numerous scandals around privacy and what Facebook should and should not be doing with the data it has on us. For the most part I’ve tended to side with Facebook as, whilst I share everyone’s concerns, use of the platform is voluntary in nature and, should you strongly object to what they’re doing, you’re free to not use it. The fact is that any service provided to you free of charge needs to make revenue somewhere and for Facebook that comes from your data. However this doesn’t seem to stop people from being outraged at something Facebook does with almost clockwork regularity, the most recent incident being the company tinkering with people’s feeds to see if emotions could spread like the plague.

Facebook Headquarters

The results are interesting as they show that emotions can spread through social networks without the need for direct interaction; it can happen just by reading status updates. The experimenters sought to verify this by manipulating the news feeds of some 689,000 Facebook users to skew the emotional content in one direction and then seeing how those users’ emotional states fared further down the line. The results confirmed their initial hypothesis, showing that emotions expressed on Facebook can spread to others. Whilst it’s not going to cause a pandemic of ecstasy or a sudden whirlwind of depression cases worldwide, the evidence is there to suggest that your friends’ sentiment on Facebook does influence your own emotional state.

Whilst it’s always nice to get data that you can draw causal links from (like with this experiment) I do wonder why they bothered to do this when they could have done a much more in depth analysis on the far larger set of data they already have. They could have just as easily taken a much larger data set, classified it in the same way and then done the required analysis. An observational approach like that would also sidestep the rather contentious issue of informed consent, as there’s no indication that Facebook approached these individuals before including them in the experiment.

Indeed that’s probably the only issue I have with Facebook doing this: whilst the data they have is theirs to do with as they see fit (within the bounds of privacy regulations), attempting to alter people’s emotional states is a step too far. The people behind the study have come out and said that the real impact wasn’t that great and that it was all done in aid of making their product better, something which I’m sure is of little comfort to those who object to the experiment in the first place. Whilst the argument can be made that Facebook already manipulates users’ feeds (since you don’t see everything that your friends post anymore), doing so for site usability or user engagement is one thing; performing experiments on people without consent is another.

If Facebook wants to continue these kinds of experiments then it should really start taking steps to make sure its user base is aware of what might be happening to them. Whilst I’m sure people would still take issue with Facebook doing widespread analysis of users’ emotional states, it would be a far cry from what they did with this experiment and would likely not run afoul of established experimental standards. The researchers have said they’ll take the reaction to these results under advisement, which hopefully means they might be more respectful of their users’ data in the future. However, since we’re going on 10 years of Facebook doing things like this, I wouldn’t hold my breath for immediate change.

Recycling Electromagnetic Energy? iFind, Surely You Jest.

If you’re reading this article, which is only available through the Internet, then you’re basking in a tsunami of electromagnetic radiation. Don’t worry though, the vast majority of these waves are so low power that they don’t make it through the first layer of your skin before dissipating harmlessly. Still, they do carry power, enough so that this article can worm its way from the server all the way to the device you’re reading it on. Considering just how pervasive wireless signals are in our modern lives it follows that there’s a potential source of energy there, one that’s essentially free and nigh on omnipresent. Whilst this is true, to some extent, actually harvesting a useful amount of it is at best impractical, but that hasn’t stopped people from trying.

Home wifi signal strength as measured on my phone

If you’re a longtime fan of Mythbusters like myself you’ll likely remember the episode they did on Free Energy back in 2004. In that episode they tested a myriad of devices to generate electricity, one of them being a radio wave extractor that managed to power half of a wristwatch. In an unaired segment they even rigged up a large coil of wire and placed it next to a high voltage power line and were able to generate a whopping 8mV. The result of all this testing was to show that, whilst there is some power available for harvesting, it’s not a usable quantity by any stretch of the imagination.

So you can imagine my surprise when a product like iFind makes claims like “battery free” and “never needs recharging” based around the concept of harvesting energy from the air.

The fundamental functionality of the iFind isn’t anything new; it’s just yet another Bluetooth tag system so you don’t lose whatever you attach the tag to. Its claim to fame, and one that’s earned it a rather ridiculous half a million dollars, is that it doesn’t have a battery (which it does, unless you want to get into a semantic argument about what “battery” actually means) and that it charges off the electromagnetic waves around you. They’ve even gone as far as to provide some technical documentation that shows the power generated from various signals. Suffice to say I think their idea is unworkable at best and, at worst, outright fraud.

The graphs they show in this comment would seem to indicate that it’s capable of charging even under very weak signal conditions, all the way down to -6dBm. That sounds great in principle until you take into account what a typical charging scenario for a device like this would be, like the “ideal” one they talk about in some of their literature: a strong wifi signal. The graph shown above is the signal strength of my home wifi connection (an ASUS RT-N66U for reference), with the peak readings being from when I had my phone right next to the antennas. That gives a peak power of some -22dBm, which sounds fine, right? Well, since those power ratings are logarithmic in nature the amount of power actually available is about 200 times weaker, which puts the actual charge time at about 1000 days. If you had a focused RF source you could probably provide it with enough power to charge quickly, but I doubt anyone has one in their house.
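To make the logarithmic point concrete, dBm converts to absolute power as below, so that -22dBm peak reading corresponds to only a few microwatts arriving at the antenna before any conversion losses are even taken into account:

```latex
P_{\text{mW}} = 10^{\,\text{dBm}/10} \quad\Rightarrow\quad 10^{-22/10} \approx 0.0063 \text{ mW} \approx 6.3\ \mu\text{W}
```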

There’s also the issue of what kind of power source they have, as the size precludes it from being anything hefty and they’re just referring to it as a “power bank”. Non-rechargeable batteries that fit within that form factor are usually on the order of a couple of hundred milliamp-hours, with rechargeable variants having a much smaller capacity. Similar devices like Tile, which includes a non-rechargeable, non-replaceable battery, last about a year before dying, which suggests a minimum power drain of at least a couple of mAh per day. Considering the iFind is smaller and rechargeable I wouldn’t expect it to last more than a couple of weeks before giving up the ghost. Of course, since there are no specifications for either of them it’s hard to judge, but the laws of physics don’t differ between products.

However I will stop short of calling iFind a scam; rather, I think it’s a completely misguided exercise that will never deliver on its promises. They’ve probably designed something that does work under lab conditions, but the performance just won’t hold up in the real world. There are a lot of questions that have been asked of them that remain unanswered, answers to which would go a long way towards assuring people that what they’re making isn’t vaporware. Until they’re forthcoming with more information, however, I’d steer clear of giving them your money as it’s highly unlikely that the final product will perform as advertised.

What it Takes to Make Wheels that can Travel at 1600km/h.

One of my favourite shows that I found out about far too late into my adult life was How It’s Made. The premise of the show is simple: they take you into the manufacturing process behind many common products, showing you how they go from their raw materials into the products we all know. Whilst I’d probably recommend skipping the episodes which show you how some of your favourite food is made (I think that’s called the Sausage Principle) the insight into how some things are made can be incredibly fascinating. However whilst everyday products can be interesting they pale in comparison to something like the following video which shows how solid aluminium wheels are created for an upcoming jet car:

(Embedded video: forging the solid aluminium wheels for the Bloodhound SSC)

I think what gets me most about this video is the amazing level of precision they’re able to achieve using massive tools, two things that don’t usually go together. The press seems to be able to move in very small increments and can do so at speeds that just seem out of this world. The gripper also has a pretty high level of fidelity about it, being able to pick up an extremely malleable piece of heated aluminium without structurally deforming it. That’s only half the equation though, as the operators of these machines are obviously highly skilled, able to guide them with incredible accuracy.

In fact the whole YouTube channel dedicated to the Bloodhound SSC car is filled with engineering marvels like this, from the construction of the monocoque and its attached components all the way to the interior and the software they’ll be using for it. If the above video had you tingling with excitement (well, it had me, but I’m strange) then I highly recommend checking them out.

Looks Like Ubisoft Owes Us Some Answers.

In my recent review of Ubisoft Montreal’s latest game, Watch_Dogs, I gave the developers the benefit of the doubt when it came to the graphics issues that many people had raised. Demos are often scripted and sculpted in such a way as to show a game in the best light possible and so the delivered product most often doesn’t line up with people’s expectations. So since Watch_Dogs wasn’t an unplayable monstrosity I chalked it up to the hype leading us all astray and Ubisoft pulling the typical demo shenanigans. As it turns out though there’s a way to make Watch_Dogs look as good as it did in the demos and all that’s required is adding 2 files to a directory.

This mod came to everyone’s attention yesterday with dozens of screenshots plastering all the major games news outlets. A modder called TheWorse on Guru3D became obsessed with diving into the Watch_Dogs code and eventually managed to unpack many of the game’s core files. After that he managed to enable many of the effects that had been present in the original E3 demo of Watch_Dogs, along with tweaking a number of other settings to great effect. The result speaks for itself (as my before and after screenshots above can attest), with the game looking quite a lot better than it did on my first play through. The thing with this mod is that, unlike other graphical enhancements such as ENB (which gives us all those pretty Skyrim screenshots), it isn’t adding anything to the rendering pipeline; it’s just enabling functionality that’s already there. Indeed this is most strongly indicated by the mod’s size: a paltry 45KB.

So first things first: I was wrong. Whilst the demo at E3 was likely running on a machine far better than most PC gamers have access to, this mod shows that Watch_Dogs is capable of looking a lot better than it currently does. My current PC is approaching 3 years old now, almost ancient in gaming PC years, and it was able to run the mod with ultra graphics settings, something I wasn’t able to do previously. It could probably use a little tweaking to get the framerate a bit higher, but honestly that’s just my preference for higher frame rates more than anything. So with this in mind the question turns to why Watch_Dogs shipped on PC in the state it did and who was ultimately responsible for removing the features that had so many in love with the E3 demo.

The conspiracy theorist in me wants to join the chorus of people saying that Watch_Dogs was intentionally crippled on PC in order to make it look more comparable to its console brethren. Whilst I can’t deny that it’s a possibility, I simply have no evidence apart from the features being in the game files themselves. This is where Ubisoft’s response to the controversy would shed some light on the issue: whilst they’re not likely to say “Yep, we did it because Watch_Dogs looks horrendous on consoles when compared to PC”, they might at least give us some insight into why these particular features were disabled. Unfortunately they’re still keeping their lips sealed on this one, so all we have to go on is rampant speculation, something I’m not entirely comfortable engaging in.

Regardless of the reasons though, it does feel a bit disingenuous to be shown one product and then be sold another. Most of the traditional reasons for disabling features, like performance or stability issues, just don’t seem to apply to this mod, which lends credence to the idea that they were disabled on purpose after they were fully developed. Until Ubisoft starts talking about this we don’t have much more to go on and, since the features can be enabled so easily, I don’t think many gamers are going to care too much what they have to say anyway. Still, I’d very much like to know the story behind it as it looks a lot more like a political or financial decision than a purely technical one.