Light Based Memory Paves the Way for Optical Computing.

Computing as we know it today is all thanks to one plucky little component: the transistor. This simple piece of technology, essentially an on/off switch that can be electronically controlled, is what has enabled the computing revolution of the last half century. However, it has many well-known limitations, most of which stem from the fact that it’s an electrical device and is thus constrained by the speed of electricity. That speed is about 1/100th the speed of light, so there’s been a lot of research into building a computer that uses light instead of electricity. One of the main challenges an optical computer faces is storage, as light is a rather tricky thing to pin down and converting it to electricity (so it can be stored in traditional memory structures) would negate many of the benefits. This might be set to change, as researchers have developed a non-volatile storage platform based on phase-change materials.


The research comes out of the Karlsruhe Institute of Technology, with collaborators from the universities of Münster, Oxford and Exeter. The memory cell they’ve developed can be written at speeds of up to 1GHz, impressive considering most current memory devices are limited to around a fifth of that. The cell itself is made of the phase-change material Ge2Sb2Te5, or GST for short, a material that can shift between crystalline and amorphous states. When this material is exposed to a high-intensity light beam its state will shift. That state can then be read later using less intense light, allowing a data cell to be changed and erased.

One novel property the researchers have discovered is that their cell is capable of storing data in more than just a binary format. You see, the switch between amorphous and crystalline states isn’t a distinct on/off like it is with a transistor, which essentially means that a single optical cell could store more data than a single electrical cell. Of course, to use such cells with current binary architectures these cells would need a proper controller to do the translation, but that’s not exactly a new idea in computing. For a completely optical computer that might not be required, however such an idea is still a way off from seeing a real world implementation.
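To make the translation idea concrete, here’s a minimal sketch in Python. It’s purely illustrative (the paper doesn’t describe any controller logic, and the 4-level cell is my assumption): a hypothetical controller packing binary data into cells that can each hold four distinguishable states, so each cell carries two bits.

```python
# Illustrative only: pack a bit string into hypothetical 4-level
# memory cells (2 bits per cell) and unpack it again, the way a
# controller might translate between binary data and multi-level cells.

LEVELS_PER_CELL = 4                                # assumed distinguishable states
BITS_PER_CELL = LEVELS_PER_CELL.bit_length() - 1   # log2(4) = 2 bits per cell

def pack(bits):
    """Group a bit string into cell levels, BITS_PER_CELL bits per cell."""
    return [int(bits[i:i + BITS_PER_CELL], 2)
            for i in range(0, len(bits), BITS_PER_CELL)]

def unpack(levels):
    """Recover the original bit string from a list of cell levels."""
    return ''.join(format(level, f'0{BITS_PER_CELL}b') for level in levels)

cells = pack('10110100')
print(cells)           # [2, 3, 1, 0] -> one byte fits in four cells
print(unpack(cells))   # '10110100'
```

The point being: the translation layer is trivial in principle, which is why multi-level cells slotting under a binary architecture isn’t a new idea.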

The only thing that concerns me about this is the fact that it’s based on phase-change materials. There have been numerous devices based on them, most often in the realm of storage, which have purported to revolutionize the world of computing. However, to date not one of them has managed to escape the lab; the technology has always been a couple of years away. It’s not that they don’t work, they almost always do, it’s more that they either can’t scale or producing them at volume proves prohibitively expensive. This light cell faces the additional challenge that a computing platform built for it doesn’t exist yet, and I don’t think it can compete with traditional memory devices without one.

It is, however, a great step forward for the realm of light-based computing. With quantum computing likely decades or centuries away from becoming a reality, and traditional computing facing more challenges than it ever has, we must begin investigating alternatives. Light-based computing is one of the most promising fields in my mind and it’s great to see progress when it’s been so hard to come by in the past.


3D Printed Prosthesis Regenerates Nerves.

Nerve damage has almost always been permanent. For younger patients there’s hope for full recovery after an incident, but as we get older the ability to repair nerve damage decreases significantly. Indeed, by the time we reach our 60s the best we can hope for is what’s called “protective sensation”: the ability to determine things like hot from cold. The current range of treatments is mostly limited to grafts, often using nerves from the patient’s own body to repair the damage, and even those have limited success in practice. That could all be set to change with the development of a process that produces nerve regeneration conduits using 3D scanning and printing.


The process was developed by a collaboration of scientists from the University of Minnesota, Virginia Tech, the University of Maryland, Princeton University and Johns Hopkins University. The research builds on a current cutting-edge treatment that uses special structures, called nerve guidance conduits, to trigger regeneration. Traditionally such conduits could only be produced in simple shapes, meaning they were only able to repair nerve damage in straight lines. This new process, however, can work on arbitrary nerve structures and has proven to restore both motor and sensory function in severed nerves, both in-vitro (in a petri dish) and in-vivo (in a living organism).

How they accomplished this is really quite impressive. First they used a 3D scanner to reproduce the structure of the nerve they were trying to regenerate, in this case the sciatic nerve (pictured above). They then used the resulting model to 3D print a nerve guidance conduit of the exact size and shape required. This was implanted into a rat with a 10mm gap in its sciatic nerve (far too long to be sewn back together). The conduit successfully triggered regeneration of the nerve and, after 10 weeks, the rat showed a vastly improved ability to walk. Since this treatment had only been verified on linear nerves before, the process shows great promise for regenerating much more complicated nerve structures, like those found in us humans.

The great thing about this is that it can be used for any arbitrary nerve structure. Hospitals equipped with such a system would be able to scan the injury, print the appropriate nerve guide and implant it into the patient, all on site. This could have wide-reaching ramifications for the treatment of nerve injuries, allowing far more of them to be treated and without donor nerves needing to be harvested.

Of course this treatment has not yet been tested in humans, but the FDA has approved similar versions of this treatment in years past which have proven successful. With that in mind I’m sure this treatment will prove successful in a human model, and from there it’s only a matter of time before it finds its way to patients worldwide. Considering how slow progress has been in this area it’s quite heartening to see dramatic results like this, and I’m sure further research into this area will prove just as fruitful.


Mad Max: Before His Silence.

Movie tie-in games are some of the most derided games ever to grace our screens, and with good reason. Often they’re given a woefully insufficient amount of time to come up with a playable product, and quality is the first thing on the chopping block, leaving them bug-ridden messes of half-finished dreck. The last few years have seen a few titles rise above the filth, however, and whilst none of them have been game-of-the-year material they have been pleasant surprises. Mad Max is one such gem, taking the essence of the movie and distilling it into a very playable experience.

Mad Max Review Screenshot Wallpaper Title Screen

The world lies barren, scorched by an intense nuclear war. Those who made it through the collapse now struggle to survive, scavenging what they can from the remnants of society that lie scattered about. This world belongs to the ruthless and violent, with gangs and war bands patrolling the sandy dunes looking for people and places to pillage. You are Max, a survivor who has lost everything since the collapse and wants nothing more to do with this world. He has resigned himself to crossing the Plains of Silence in his Black on Black, an Interceptor capable of making the long journey. His plans are foiled, however, by Scabrous Scrotus, son of the warlord Immortan Joe, who steals everything from him. You are not so easily beaten, and you turn your eyes to recovering your Black on Black and making Scrotus pay for what he did.

The wasteland setting for Mad Max is quite beautiful, with the plains stretching out to the horizon in every direction. It’s the definition of an open-world game, with nearly every bit of scenery you can see being accessible and part of the game. There’s definitely been a lot of effort put into crafting certain aspects of the scenery, like the dust you kick up when going off-road and the slight changes in the howl each engine emits. It’s also got enough visual variety that you don’t feel like you’re driving through the same place all the time, as each area has its own distinct theme. Thankfully this all comes fully optimized, something which games with lots of open space like this often get wrong.

Mad Max Review Screenshot Wallpaper is it Safe

At first blush Mad Max is your typical open worlder, with all the standard trimmings of campaign missions, side missions and little direction as to which you should do when. If I were to compare it to recent open-world titles it’d sit somewhere between Far Cry 4 and Batman: Arkham Knight. You’ve got your typical progression in the form of skills and equipment, both for your vehicle (the Magnum Opus) and for Max himself, some of which are locked behind story missions whilst others unlock through open-world objectives. There are camps for you to capture, places to explore and hordes of enemies bounding around for you to take out or avoid. Combat comes in two flavours: the stylized beat ’em up hand-to-hand combat while on foot as Max, and some in-car combat which is a little more rudimentary. Suffice to say I was surprised at just how much was crammed into this game given its origins as a movie tie-in.

If you’re a fan of the Arkham series’ combat then Mad Max is right up your alley, with the controls and style being instantly familiar. There’s not as much variety in moves and finishers, however it’s still quite a challenge to rack up long combo streaks without getting interrupted. There are a few rough edges to the combat though, which really start to show in the later stages of the game. Essentially you can get yourself into a situation where there’s no way to block or counter an incoming move, ruining your chain (and potentially losing you an upgrade point). Usually this happens when you’re doing a finisher that triggers a mini-cutscene, and being interrupted mid-cutscene feels unfair. There’s also a heavy reliance on consumables, which aren’t readily available or farmable in the world, for a lot of the big finisher moves, so they often go unused. Overall it’s a good emulation of Rocksteady’s combat style, just in need of a little more tuning.

Mad Max Review Screenshot Wallpaper Lord Nitrous

The car combat is pretty simplistic by comparison, usually involving you ramming the other car into submission. As you progress through the story missions there are weapon upgrades that allow you to dispatch your enemies more effectively, like a harpoon that can rip wheels off, but the heavy investment required means you’ll have to forgo quite a few other upgrades to get them. Additionally, for the most part you don’t really need all the bells and whistles; just having the fastest car (both in terms of top speed and acceleration) is all that’s needed for most encounters. The final boss battle is the only exception, as you’ll struggle to finish it in a timely manner without at least a Level 4 harpoon and another similarly upgraded weapon. It’d all be a lot better if the driving controls were a little more refined, as the slightly janky steering, even on cars with the top handling, makes things more difficult than they should be.

As you’d expect from an open-world game there are numerous activities for you to do, most of which provide some form of benefit. Clearing out camps, for instance, will net you a periodic amount of scrap, the currency that underpins the economy of Mad Max. Doing “projects” in strongholds will unlock certain benefits, like giving you a full water canteen or opening up new types of missions to complete. Winning races will unlock a permanent location where you can fuel up your car whenever you want. For people who like to meander through games, picking and choosing whichever mission takes their fancy, this kind of thing is probably what they’re after. For me though these little side distractions just didn’t feel rewarding enough to bother with for long. In the end I’d only go on scrap-hunting missions if I needed the scrap to unlock the next campaign mission, which I only had to do a couple of times.

Mad Max Review Screenshot Wallpaper Stuck in all the Wrong Places

It’s not a perfect experience by any stretch of the imagination, as the above screenshot will attest. You see, there’s no jumping in Mad Max, but there are multiple terrain heights, and in some instances you’ll find yourself trapped somewhere you can’t get out of. Some of them aren’t even as obvious as the one pictured above; in one particular mission I managed to roll over some pipes that I couldn’t roll back out of. It’s clear what’s missing here: the movement system isn’t coded to deal with situations where the difference in terrain height is above a certain threshold. Whilst not every game needs the parkour stylings of Assassin’s Creed, a more robust movement system would go a long way toward alleviating the unfortunately frequent problems that arise from the current simplistic implementation.

The story, if it were standing on its own, is fairly rudimentary, although since it serves as a kind of prequel to the world of the movie it’s a little more interesting. Strictly speaking it’s a separate story in terms of canon, and indeed Max’s character is quite different to the one portrayed on film. However it does give you a little more insight into why Max ends up the way he is in the movie. Still, it’s not much more than your typical action script, albeit bereft of some of the more common components in favour of more talk of cars as a religion and all the craziness the movie demonstrated.

Mad Max Review Screenshot Wallpaper The Plains of Silence

Mad Max is an example of what tie-in games can achieve when more than a token effort is put into them. The barren wasteland is beautifully realised, with landscapes reaching from horizon to horizon. The core game mechanics are mostly well executed, often getting close to the more mature brethren from which they draw inspiration. For fans of the open-world genre there’s more than enough activity to keep you going for hours on end. For people like me, who aren’t so interested in the distractions, the game is still readily playable if you stick almost exclusively to campaign missions; you’ll just have to use skill rather than scrap to win fights. Suffice to say I was surprised by just how playable Mad Max was, especially given its tie-in origins. If you’re one of the many raving fans of Fury Road then Mad Max is probably worth a look in.

Rating: 8.0/10

Mad Max is available on PC, Xbox One and PlayStation 4 right now for $59.99, $99.95 and $99.95 respectively. Game was played on the PC with approximately 13 hours of total playtime and 29% of the achievements unlocked.


The Subtle Forms of Not Invented Here Syndrome.

Whilst I haven’t had a real programming job in the better part of a decade, I have continued coding in my spare time for various reasons. Initially it was just a means to an end, making my life as an IT admin easier, however it quickly grew past that as I aspired to be one of those startup founders TechCrunch idolizes. Once I got over that, my programming exploits took on a more personal bent, the programs mostly built for my own interest or to fix a problem. I recently dove back into one of my code bases (after having a brainwave about how it could have been done better) and, seeing nothing salvageable, decided to have another crack at it. This time around, with far less time on my hands to dedicate to it, I started looking for quicker ways of doing things. It was then I realised that for all the years I’ve been coding I had been suffering from Not Invented Here syndrome, and that was likely why I found making progress so hard.


The notion behind Not Invented Here is that using external solutions to your problems, in programming’s case things like frameworks or libraries, isn’t ideal. The reasoning behind this is wide and varied but it often comes down to trust: not wanting to depend on others, or the idea that solutions you create yourself are better. For the most part it’s a total fallacy, as no one programmer can claim to be an expert in all fields, and using third-party solutions is common practice worldwide. However, as I found, it can manifest in all sorts of subtle ways and it’s only after taking a step back that you can see its effects.

I have often steered clear of frameworks and libraries, mostly because I felt the learning curve would be far too steep. Indeed, when I’ve looked at other people’s code that makes use of them I often couldn’t make sense of it, due to the additional libraries in play, and resigned myself to recoding things from scratch. Sometimes I’d do a rudimentary search to see if there was an easy answer to my problem, but more often I’d just wrangle the native libraries into doing what I wanted. The end result was code that, whilst functional, was far more verbose than it needed to be. Then, when I went back to look at it, I found myself wondering why I did things that way and decided there had to be a better approach.

Fast forward to today and the new version of the application I’ve been working on makes use of numerous frameworks that have made my progress so much faster. Whilst there’s a little bloat from some of them (I’m looking at you, Google APIs) it’s in the form of DLLs, not code I have to maintain. Indeed much of the code I used to write for handling edge cases and other grubby tasks is now handled much better by code written by people far more experienced with the problem space than I will ever be. Thus I’ve spent far less time troubleshooting my own work and a lot more time making progress.

I have to attribute part of this to the NuGet package management system in Visual Studio, which downloads, installs and resolves all dependencies for any framework you want to use. In the past such tasks fell to the developer and often meant chasing down various binaries, making sure you had the right versions and repeating the whole process when a new version was released. NuGet broke down that barrier, enabling me to experiment with various frameworks to meet my end goals. I know similar tools have existed for a long time, however I’ve only really begun to appreciate them recently.

There’s still a learning curve when adopting a framework, and indeed your choice of framework will mean some design decisions are made for you. However, the short time I’ve spent working with the likes of HtmlAgilityPack and Json.NET has shown me that the time invested in learning them is far smaller than trying to do what they do myself. I’m sure this seems obvious to the seasoned programmers out there, but for me, working with the mentality that I just needed to get things done, it never occurred to me that my approach was completely wrong.
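My code bases are all .NET, but the principle is language-agnostic. As an analogous sketch (in Python, using only its standard library in place of something like HtmlAgilityPack): extracting every link from an HTML page takes a handful of lines when you let a real parser do the grubby work, where my hand-rolled string-wrangling equivalents used to run to pages.

```python
# Illustrative sketch: letting a library (Python's stdlib HTML parser)
# handle the parsing rather than hand-rolling string manipulation.
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect the href attribute of every anchor tag encountered."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        # attrs arrives as a list of (name, value) tuples
        if tag == 'a':
            for name, value in attrs:
                if name == 'href':
                    self.links.append(value)

page = '<p>See <a href="https://example.com">here</a> and <a href="/docs">docs</a>.</p>'
extractor = LinkExtractor()
extractor.feed(page)
print(extractor.links)  # ['https://example.com', '/docs']
```

The parser already copes with malformed markup, entities and all the edge cases I used to troubleshoot myself, which is exactly the trade the frameworks above offered me.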

I guess where I’m going with all this is that, should you find yourself attempting to solve a problem, the first thing to do is see if it’s been solved by someone else first. Chances are they’ve gone through the pain you’re about to put yourself through and can save you from it. Sure, you might not fully understand how they did it, and they might not do it the way you wanted to, but those things mean little if they save you time and emotional capital. I know for myself that if I had to redo everything I did previously I would be nowhere near where I am today, and would likely get no further.


Jeff Bezos’ Blue Origin Selects Cape Canaveral as Launch Site.

You’d be forgiven for not knowing that Amazon founder Jeff Bezos had founded a private space company. Blue Origin, as it’s known, isn’t one for the spotlight: whilst it was founded in 2000 (two years before SpaceX) it wasn’t revealed publicly until some years later. The company has had a handful of successful test launches, focusing primarily on suborbital space with Vertical Takeoff/Vertical Landing (VTVL) capable rockets. Indeed their latest test vehicle, the New Shepard, was successfully launched at the beginning of this year. Outside of that you’d be hard pressed to find out much more about Blue Origin, however today they announced that they will be launching from Cape Canaveral, using the SLC-36 complex which formerly hosted the Atlas launch system.


It might not sound like the biggest deal, however the press conference held for the announcement provided some insight into the typically secretive company. For starters, Blue Origin’s efforts have thus far been focused on space tourism, much like Virgin Galactic’s. Indeed all their previous craft, including the latest New Shepard design, were suborbital craft designed to take people to the edge of space and back. This new launch site, however, is designed with much larger rockets in mind, ones that will be able to carry both humans and robotic craft into Earth orbit, putting them in direct competition with SpaceX and other private launch companies.

The new rocket, called Very Big Brother (pictured above), is slated to be Blue Origin’s first entry into the market. Whilst raw specifications aren’t yet forthcoming, we do know that it will be based on Blue Origin’s BE-4 engine, which is being co-developed with United Launch Alliance. This engine is slated to be the replacement for the RD-180, currently used in the Atlas V launch vehicle. The engine is about half as powerful as the RD-180, meaning that if the craft is similarly designed to the Atlas V its payload will be somewhere in the 4.5 to 9 tonne range to LEO. Of course this could be wildly different from what they’re planning, and we likely won’t know much more until the first craft launches.
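For the curious, the payload estimate above is pure back-of-envelope maths: take the Atlas V’s LEO payload range (roughly 9 to 18 tonnes depending on configuration, my assumed figures, not anything from Blue Origin) and naively scale it by the engine power ratio.

```python
# Back-of-envelope check of the payload estimate above.
# Assumptions (mine, not Blue Origin's): the Atlas V lifts roughly
# 9-18 t to LEO depending on configuration, and payload scales
# roughly linearly with engine power.
atlas_v_leo_tonnes = (9.0, 18.0)   # assumed Atlas V LEO payload range
power_ratio = 0.5                  # BE-4 vs RD-180, per the article

estimate = tuple(p * power_ratio for p in atlas_v_leo_tonnes)
print(estimate)  # (4.5, 9.0) tonnes to LEO
```

Needless to say, real payload figures depend on far more than engine power, so treat this as an upper-bound guess at best.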

Interestingly, the craft is going to retain the VTVL capability of its predecessors. This is notable because no sizeable craft currently has that capability; SpaceX has been trying very hard to achieve it with the first stage of their Falcon 9, however they are yet to pull off a successful landing. Blue Origin likely won’t beat SpaceX to the punch on this, but it’s still interesting to see other companies adopting similar strategies to make their rockets reusable.

Also of note is the propellant the BE-4 engine will use. Unlike most rockets, which run on either liquid hydrogen/liquid oxygen or RP-1 (kerosene)/liquid oxygen, the BE-4 will burn natural gas and liquid oxygen. Indeed it’s only recently that methane has been considered a viable propellant; I could not find an example of a mission that has flown using the fuel. However there must be something to it, as SpaceX is going to use it for their forthcoming Raptor engines.

I’m starting to get the feeling that Blue Origin and SpaceX are sharing a coffee shop.

It’s good to finally get some more information out of Blue Origin, especially since we now know their ambitions stretch far beyond suborbital pleasure junkets. They’re entering a market that’s now swarming with competition, however they’ve got both the capital and the strategic relationships to at least have a good go at it. I’m very interested to see what they do at SLC-36, as more competition in this space is a good thing for all concerned.


Microsoft Rumoured to be Looking to Acquire AMD.

The last decade has not been kind to AMD. It used to be a company readily comparable to Intel in almost every way, having much the same infrastructure (including chip fabs) whilst producing competitive products. Today, however, they’re really only competitive in the low-end space, surviving mostly on revenues from both of the current generation of games consoles. Now, with their market cap hovering around the $1.5 billion mark, rumours are beginning to swirl about a potential takeover bid, something numerous companies could mount at such a cheap price. The latest rumours point towards Microsoft and, in my humble opinion, an acquisition by them would be a mixed bag for both parties.


The rumour surfaced from an article on Fudzilla citing “industry sources” on the matter, so there’s every potential this will amount to nothing more than a rumour. Still, talk of an AMD acquisition by another company has been swirling for some time now, so the idea isn’t exactly new. Indeed AMD’s steadily declining stock price, one that has failed to recover since its peak shortly after the company spun off GlobalFoundries, has made this a possibility for a while. A buyer hasn’t been forthcoming, however, but let’s entertain the idea that Microsoft is interested and see where it leads us.

As Microsoft expands further into the devices market there’s some potential in owning the chip design process. They’re already using an AMD chip in their current-generation console and, with total control over the chip design process, there’s every chance they’d use one in a future device. There’s similar potential for the Surface, however AMD has never been the greatest player in the low-power space, so there’d likely need to be some innovation on their part to make that happen. Additionally there’s no real solid offering from AMD in the mobile space, ruling out their use in the Lumia line of devices. Based on chips alone I don’t think Microsoft would go for it, especially with the x86 licensing deal that the article I linked to mentions.

Always of interest to any party, though, will be AMD’s war chest of patents, some 10,000 of them. Whilst the revenue from said patents isn’t substantial (at least I can’t find any solid figures on it, which suggests it isn’t much) they always have value when the lawsuits start coming down. For a company with billions sitting in reserve those patents might well be worth AMD’s market cap, even with a hefty premium on top. If that’s the only value an acquisition offers, however, I can’t imagine AMD, as a company, sticking around for long afterwards, unfortunately.

Of course neither company has commented on the rumour and, as of yet, no other sources have confirmed it. Considering the rather murky value proposition such an acquisition offers both companies, I honestly have trouble believing it myself. Still, the idea of AMD being taken over seems to come up more often than it used to, so I wouldn’t put it past them to be courting offers from anyone and everyone who will hear them. Suffice to say AMD has been in need of a saviour for some time now; it just might not end up being Microsoft.


New Antenna Design Could Vastly Improve Mars Rover Communications.

The way we get most scientific data back from the rovers currently on Mars is indirect. There are four probes orbiting Mars (Mars Odyssey, Mars Express, Mars Reconnaissance Orbiter and MAVEN), all of which carry communications relays able to receive data from the rovers and retransmit it back to Earth. This has significant advantages, chief among them that the orbiters have much longer windows in which to communicate with Earth. Whilst all the rovers have their own direct links back to Earth, those links are quite limited, usually several orders of magnitude slower. Current rovers won’t have their communication links improved, but for future missions a better direct-to-Earth link could prove valuable, something which researchers at the University of California, Los Angeles (UCLA) have started to develop.


The design is an interesting one, essentially a flat panel of phased-array antenna elements using a novel construction. The reasoning behind it was that future Mars rover missions, specifically the Mars 2020 mission, would have constraints on how big an antenna they could carry. Taking this into account, along with the constraint that NASA typically uses X-band for deep space communications like this, the researchers came up with a design that maximises the antenna’s gain. The result is this flat, phased-array design which, when tested as a prototype 4 x 4 array, closely matched its simulated performance.

With so many orbiters around Mars it might seem like a better direct-to-Earth link wouldn’t be useful, however there’s no guarantee those relays will always be available. Mission support for most of those orbiters is due to end in the near future, with the last of them (MAVEN) slated for decommissioning in 2024. Since a new rover could land sometime in 2020, and since we know how long these things can last once they’ve landed, better on-board communications might become crucial to the ongoing success of the mission. Indeed, should any of the other rovers still be functioning at that time, the new rover may have to take on the relay responsibilities itself, and that would demand a much better antenna design.

There’s still more research to be done with this particular prototype, namely scaling it up from its current 4 x 4 design to the ultimate 16 x 16 panel. Should the design prove to scale as expected then there’s every chance that you might see an antenna based on this design flying with an orbiter in the near future. I’m definitely keen to see how this progresses as, whilst it might have the singular goal of improving direct to Earth communications currently, the insights gleaned from this design could lead to better designs for all future deep space craft.


The Flock: Our Hubris Doomed Us.

The hivemind of the gaming community collectively looks towards indie developers as the innovators. We praise them and put them on pedestals because they dare to buck trends, trying out new concepts, mechanics and stories that big AAA developers would never touch. Sometimes this works out well, spawning new genres or revamping old ones; other times the concept fails so hard to achieve its goals that the idea is burnt forevermore. Indeed the risk is even higher when developers attempt something extremely high concept, much like The Flock did. Unfortunately this time around the risk won’t be rewarded, as The Flock is set to be a ghost town that will never achieve its vision.

The Flock Review Screenshot Wallpaper Title Screen

The world is a shadow of its former self: great cities lie in disrepair and what remains elsewhere has long been abandoned. All that remains now is the Flock, a race of subhuman creatures who skitter through the darkness searching for one thing: the Artefact. It is that sacred thing that can transform a member of the Flock into a Carrier, able to wield the power of the light and bring about the next phase of this world’s existence. However, every creature of the Flock wants the Artefact and will do anything to obtain it, even kill their own. Time is limited for the Flock as their population is dwindling, every murdered Carrier putting their entire species one step closer to extinction.

The Flock might not be the prettiest game in the world, thanks mostly to its drab aesthetic, but it does manage to punch above average in the graphics department. For the most part things look great from afar, especially when you’re on top of a building in the city overlooking everything, but up close it’s clear that detail is scant. The various bright and shiny things help break up the visual monotony a bit, as well as providing visual cues for some of the game’s core mechanics. Apart from that there’s really not much else to write home about, as the game’s focus isn’t purely on graphics.

The Flock Review Screenshot Wallpaper Objective

The premise of The Flock is an interesting one: you’re a member of The Flock’s race and you want to get The Artefact. Once you have it you’re transformed into The Carrier, who can wield The Artefact’s power, essentially a high-powered torch. If another member of The Flock kills you they’ll then become The Carrier; however if you shine the light on them, and they’re foolish enough to move even an inch while you have it on them, they’ll be burnt to cinders. It’s not as simple as standing still once you’ve got The Artefact, however, as you need to move to power it. There are also objectives for you to complete, charging up blue glowing things with the light of The Artefact, which tempt you to come after them. Underpinning all this is the limited population that The Flock has and, once that’s exhausted, the game itself will no longer be on sale and only those who had purchased the game will be able to participate in the next stage.

In raw game terms The Flock is quite playable, that is if you can manage to scrounge together a game with more than just one other player. Each of the maps has numerous routes and places for The Flock to hide in, something which can make your life as The Carrier quite hard. The Artefact needing movement to be powered means that you’ll always be on the move, further increasing other Flock members’ chances of hunting you down. Indeed I can imagine that a full game of 6 people would be quite the chaotic affair, as even with just 3 it was hard to hold onto The Artefact for any long stretch of time, especially if you went after objectives.

The Flock Review Screenshot Wallpaper Population

However the number of people playing The Flock is so abysmally low that you’ll be lucky to ever see another person playing it. I spent probably half my time in game simply waiting for someone else to join me, only to be disappointed nearly every time. Checking the population every 5 minutes or so revealed that yes, I was the only one playing, since there were no other deaths happening anywhere else in the world. In the time I’ve been playing it the population has dropped by a paltry 1200, meaning, on average, there’s been one death every 30 seconds. At this rate the population will reach 0 sometime in the next 200 years, not exactly what the developers had in mind I’m sure.

This severe drop off in interest can probably be traced to The Flock’s lack of replayability. Those three maps in the screenshot below? Those are the only three maps you’ll have to play, meaning that after 3 games you’ve likely seen everything there is to see in The Flock. This would be fine if the game play was interesting enough, however since all the objectives are the same and there are no different modes the longevity of The Flock is severely limited. Thus after the initial fervour there’s only going to be a handful of people playing at any moment. That’s not going to improve any time soon, especially with the developers being tight lipped about the whole thing.

The Flock Review Screenshot Wallpaper Forever Alone

The Flock will never achieve its ambition, the lack of variety in the game play not being enough to sustain it until the huge population reaches 0. At a technical and mechanical level the game is sound, playable even at the high pings that often occurred due to the lack of players. However this game had grander visions, of enticing players in with the notion that they could be part of something exclusive, something that no game had attempted before. Unfortunately that vision will never be realised, the population set too high and the interest in the game too low. I would say I’m disappointed but, honestly, the developers grossly overestimated how popular their game would be and have been subsequently punished for their hubris.

Rating: 4/10

The Flock is available on PC right now for $19.99. Total play time was 1 hour.


iPad Pro: Imitation is the Most Sincere Form of Flattery.

Apple are the kings of taking what appear to be failed product ideas and turning them into gold mines. The iPhone took the smartphone from a niche product for the geeky and technical elite to a worldwide sensation that continues today. The iPad managed to make tablet computing popular, even after both Apple and Microsoft had tried and failed to crack that elusive market. However the last few years haven’t seen a repeat of those moments, with the latest attempt, the Apple Watch, failing to become the sensation many believed it would be. Indeed their newest offering, the iPad Pro and its host of attachments, feels like simple mimicry more than anything else.


The iPad Pro is a not-quite 13″ device that’s sporting all the features you’d expect in a device of that class. Apple says that the new 64-bit A9X chip that’s powering it is “desktop class”, bringing 1.8X the CPU performance and 2X the graphics performance of the previous iPad Air 2. There’s also the huge display which allows you to run two iPad applications side by side, apparently with no compromises on experience. Alongside the iPad Pro Apple has released two accessories: the Smart Keyboard, which makes use of the new connector on the side of the iPad, and the Apple Pencil, an active stylus. Whilst all these things might make you think it’s a laptop replacement, it’s running iOS, meaning it’s still in the same category as its lower powered brethren.

If this is all sounding strangely familiar to you it’s because they’re basically selling an iOS version of the Surface Pro.

Now there’s nothing wrong with copying competitors; all the big players have been doing it for so long that even the courts struggle to agree on who was there first. However the iPad Pro feels like a desperate attempt to capture the Surface Pro’s market. Many analysts lump the Surface and the iPad into the same category but that’s not really the case: the iPad is a tablet and the Surface is a laptop replacement. If you compare the Surface Pro to the Macbook, though, you can see why Apple created the iPad Pro: their total Mac sales are on the order of $6 billion spread across no fewer than 7 different hardware lines. Microsoft, on the other hand, has made $1 billion in a quarter from the Surface alone, a significant chunk of sales that I doubt Apple has managed to match with just the Macbook. Thus they bring out a competitor that is almost a blow for blow replica of its main rival.

However the problem with the iPad Pro isn’t the mimicry, it’s the last step they didn’t take to make the copy complete: putting a desktop OS on it. Whilst it’s clear that Apple’s plan is to eventually unify their whole range of products under the iOS banner, not putting the iPad Pro on OSX puts it at a significant disadvantage. Sure the hardware is slightly better than the Surface’s, but that’s all for naught if you can’t do anything with it. Sure there are a few apps on there but iOS, and the products that it’s based on, have always been focused on consumption rather than production. OSX on the other hand is an operating system focused on productivity, something that the iPad Pro needs in order to realise its full potential. It’s either that or iOS needs some significant rework to make the iPad Pro the laptop replacement that the Surface Pro is.

It’s clear that Apple needs to do something to re-energize the iPad market, with sales figures down both quarter on quarter and year on year, however I don’t believe that the iPad Pro will do it for them. The new ultra slim Macbook has already cannibalized part of the iPad’s market and this new iPad Pro is going to end up playing in the same space. For those seeking some form of portable desktop environment in the Apple ecosystem, I’m failing to see why you’d choose an iPad Pro over the Macbook. Had they gone with OSX the value proposition would’ve been far clearer; as it stands this feels like a token attempt to capture the Surface Pro market and I just don’t think it will work out.


Data Sovereignty, Cloud Services and the Folly of the USA’s Borderless Jurisdiction.

If you’ve worked in IT with a government organisation you’ll know the term “data sovereignty”. For those who haven’t had the pleasure, the term refers to the laws that apply to data in the location where it’s stored. When dealing with government entities this means that service providers have to guarantee that the data won’t leave Australian shores. Because, if it did, the data would no longer be subject to Australian law and whatever government got a hold of it would be outside Australia’s jurisdiction. This has been the major limiting factor in the Australian Government’s adoption of cloud services as, until just recently, the major providers didn’t have an Australian presence. However even that might not suffice soon, as the US government is attempting to break the idea of data sovereignty by requiring companies to disclose data that’s not within its jurisdiction.


This issue has arisen out of a long running court case that the US government has had against Microsoft. Essentially authorities in the USA want access to information that is stored on Microsoft servers in Dublin, Ireland. Their argument is that since Microsoft is in control of the servers they’re on the hook to provide the data. Microsoft’s argument has been that the US government should make that request from authorities within that jurisdiction. Indeed senior legal counsel from the Irish Supreme Court has said that such a request could be made under the Mutual Legal Assistance Treaty. This hasn’t satisfied the US authorities who believe that since the company is based in the USA all the data they control should be made available to them under their legal jurisdiction.

Putting aside the privacy concerns for the moment (and believe me there are many), if the US courts compel Microsoft to provide data from outside their jurisdiction then the notion of data sovereignty on any cloud service becomes null and void. No longer will anyone be able to assume that their data is subject to the laws of the country it resides in, which raises a whole host of legal issues. Do companies that make use of locally provided but not locally owned services need to comply with US data retention laws like SOX? Are these requests for data going to be held to the same evidentiary standards that other countries require? What’s stopping the US government from compelling US based companies to hand over other governments’ data on these services? I could go on but it all comes down to the issue of the US government completely overstepping its jurisdiction.

For someone like me, who works primarily in the large government IT space, the attack feels even more personal. I’ve been a champion of cloud services for years and it’s only been recently that I’ve been able to make use of the public cloud with my clients. Should the US government continue with (and win) this case the ramifications will be instantaneous: all the government services running on cloud services will be in-housed as soon as possible. That’s not to mention the potential effects it could have on how international companies like mine will interact with government. Suddenly we wouldn’t be able to work with any client related data except when we’re on site, a tremendous blow to the way we do business.

The US government needs to realise just how damaging something like this could be both to their reputation internationally and the business that US based companies do elsewhere. Data sovereignty laws exist for a reason and breaking them just because your law enforcement agency doesn’t want to go through the proper channels isn’t a good enough excuse. If they continue down this path the IT industry will suffer immensely as a result and for nothing more than some saved paperwork and inflated egos.

Grow up, USA. Seriously.