ESA’s IXV Splashes Down After Successful Maiden Flight.

The European Space Agency’s Intermediate eXperimental Vehicle (IXV) is an interesting platform, ostensibly sharing some inspiration with the United States Air Force’s X-37B but with a very different purpose in mind. The IXV is set to be more of a general purpose craft, one that’s capable of testing new space technologies and running experiments that might not otherwise be feasible. It’s also set to be ESA’s first fully automated craft capable of re-entry, an incredible technological feat that will inevitably find its way into other craft around the world. Today the IXV completed its maiden flight, a sub-orbital journey that was, by all accounts, wildly successful.

crane-lifts-ixv-prototype

This flight was meant to be conducted towards the end of last year but was delayed due to the novel launch profile the IXV required, something the launch system wasn’t typically used for. The mission profile remained the same however, serving as a shakedown of all the key systems as well as providing a wealth of data on how those systems functioned in flight. This included things such as the automated guidance system, the avionics and the thermal shielding that coats the bottom of the craft. The total flight time was approximately 100 minutes, with the craft making a parachute assisted landing in the Pacific Ocean where it was retrieved by a recovery ship (pictured above).

Whilst the IXV platform is likely to see many more launches in the future it’s actually a stepping stone between a previous craft, the Atmospheric Reentry Demonstrator (ARD), and a future space plane called the Program for Reusable In-orbit Demonstrator in Europe (PRIDE). The ultimate goal of this program is to develop a fully reusable craft that ESA can use for its missions in space and, judging by the design of the IXV, it’s a safe bet it will end up looking something like the Space Shuttle. The IXV will never take human passengers to orbit, it’s simply too small to accomplish that feat, however much of the technology used to create it could be readily repurposed for a man rated craft.

I think ESA has the right approach when it comes to developing these craft, opting for smaller, purpose built vehicles rather than a jack-of-all-trades type which, as we’ve seen in the past, often results in ballooning complexity and cost. The total cost of the IXV craft (excluding the launcher) came out to $170 million, which is actually cheaper than the X-37B by a small margin. It will be interesting to see if ESA gets as much use out of the IXV though as, whilst it’s a reusable craft, I haven’t heard talk of any further flights being planned anytime soon.

It’s great to see multiple nations pursuing novel ways of travelling to and from space as the increasing number of options means there are more and more opportunities for us to do work out there in the infinite void. The IXV might not become the iconic craft that it emulates but it will hopefully be the platform that enables ESA to extend their capabilities far beyond their current station. The next few years are going to be ones of envelope pushing for ESA and I, for one, am excited to see what they can accomplish.

Capturing a Before and After Performance Report for Windows Servers.

The current project I’m on has a requirement to determine a server’s overall performance before and after a migration, mostly to make sure that it still functions the same or better once it’s on the new platform. Whilst it’s easy enough to get raw statistics from perfmon, getting an at-a-glance view of how a server is performing before and after a migration is a far more nuanced concept, one that’s not easily accomplished with some Excel wizardry. With that in mind I thought I’d share with you my idea for creating such a view as well as outlining the challenges I’ve hit when attempting to collate the data.

Perfmon Data

At a high level I’ve focused on the 4 core resources that all operating systems consume: CPU, RAM, disk and network. For the most part these metrics are easily captured by the counters that perfmon provides, however I wanted to go a bit further to make sure that the final comparisons represented a more “true” picture of before and after performance. To do this I included some additional qualifying metrics which would show whether increased resource usage was negatively impacting performance or whether the server was simply consuming more resources because it could, since the new platform had much more capacity. With that in mind these are the metrics I settled on using (a rough sketch of how they might be captured follows the list):

  • Average of CPU usage (24 hours), Percentage, Quantitative
  • CPU idle time on virtual host of VM (24 hours), Percentage, Qualifying
  • Top 5 services by CPU usage, List, Qualitative
  • Average of Memory usage (24 hours), Percentage, Quantitative
  • Average balloon driver memory usage (24 hours), MB consumed, Qualifying
  • Top 5 services by Memory usage, List, Qualitative
  • Average of Network usage (24 hours), Percentage, Quantitative
  • Average TCP retransmissions (24 hours), Total, Qualifying
  • Top 5 services by Network bandwidth utilized, List, Qualitative
  • Average of Disk usage (24 hours), Percentage, Quantitative
  • Average queue depth (24 hours), Total, Qualifying
  • Top 5 services by Storage IOPS/Bandwidth utilized, List, Qualitative
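
To give a concrete idea of how the quantitative and qualifying counters might be pulled, here’s a minimal sketch using PowerShell’s Get-Counter cmdlet. The counter paths, sampling cadence and output location are illustrative assumptions rather than my exact profile:

```powershell
# Illustrative counter set covering the quantitative/qualifying metrics above.
# Counter paths and sampling cadence are assumptions, not the exact profile I used.
$counters = @(
    '\Processor(_Total)\% Processor Time',
    '\Memory\% Committed Bytes In Use',
    '\Network Interface(*)\Bytes Total/sec',
    '\PhysicalDisk(_Total)\% Disk Time',
    '\PhysicalDisk(_Total)\Current Disk Queue Length',
    '\TCPv4\Segments Retransmitted/sec'
)

# Sample every 5 minutes for 24 hours (288 samples) and write a binary log
# that can later be re-read with Import-Counter or relogged to CSV.
Get-Counter -Counter $counters -SampleInterval 300 -MaxSamples 288 |
    Export-Counter -Path 'C:\PerfBaseline\before.blg' -FileFormat BLG
```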

Essentially these metrics can be broken down into 3 categories: quantitative, qualitative and qualifying. Quantitative metrics are the base metrics which will form the main part of the before and after analysis. Qualitative metrics are mostly just informational (being the top 5 consumers of a given resource) however they’ll provide some useful insight into what might be causing an issue. For example if an SQL box isn’t showing the SQL process as a top consumer then it’s likely something is causing that process to take a dive before it can actually use any resources. Finally the qualifying metrics are used to indicate whether or not increased usage of a certain metric signals an impact to the server’s performance; if memory usage is high and the memory balloon size is also high, for example, it’s quite likely the system isn’t performing very well.

The vast majority of these metrics are provided by perfmon, however there were a couple that I couldn’t seem to get through the counters even though I could see them in Resource Monitor. As it turns out Resource Monitor makes use of the Event Tracing for Windows (ETW) framework which gives you an incredibly granular view of all events that are happening on your machine. What I was looking for was a breakdown of disk and network usage per process (in order to generate the Top 5 users list) which is unfortunately bundled up in the IO counters available in perfmon. In order to split these out you have to run a kernel trace through ETW (something like the workflow sketched below) and then parse the resulting file to get the metrics you want. It’s a little messy but there’s no good way to get those metrics separated. The resulting perfmon profile I created can be downloaded here.
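
For the per-process disk and network breakdown, a kernel trace can be kicked off with the built-in logman and tracerpt tools. The below is a hedged sketch of that workflow rather than my exact trace profile; the keyword names and paths are assumptions, so verify them with `logman query providers "Windows Kernel Trace"` on your build, and run it from an elevated prompt:

```powershell
# Start the NT Kernel Logger with process, disk and network keywords
# (assumed keyword names - check: logman query providers "Windows Kernel Trace")
logman start "NT Kernel Logger" -p "Windows Kernel Trace" "(process,disk,net)" -o C:\PerfBaseline\kernel.etl -ets

# Let the trace run for the sample window, then stop it
Start-Sleep -Seconds 600
logman stop "NT Kernel Logger" -ets

# Dump the binary trace to CSV so the per-process IO events can be parsed
tracerpt C:\PerfBaseline\kernel.etl -o C:\PerfBaseline\kernel.csv -of CSV
```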

The next issue I’ve run into is getting the data into a readily digestible format. You see not all servers are built the same and not all of them run the same amount of software. This means that when you open up the resulting CSV files from different servers the column headers won’t line up, so you’ve got to either do some tricky Excel work (which is often prone to failure) or get freaky with some PowerShell (which is messy and complicated). I decided to go for the latter as at least I can maintain and extend a script somewhat easily, whereas an Excel spreadsheet has a tendency to get out of control faster than anyone expects. That part is still a work in progress (the sketch below shows the general direction) but I’ll endeavour to update this post with the completed script once I’ve got it working.
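
As a starting point, this is the kind of PowerShell I’m talking about: a rough sketch (file paths are placeholders) that takes the per-server CSV exports, builds the union of every column header and re-emits the rows against that common set so the before and after files line up:

```powershell
# Rough sketch: normalise perfmon CSV exports whose column sets differ between
# servers. Paths are placeholders for wherever the exports have been collected.
$files = Get-ChildItem 'C:\PerfBaseline\exports\*.csv'
$rows  = foreach ($file in $files) { Import-Csv -Path $file.FullName }

# Union of every column name seen across all of the servers
$allColumns = $rows |
    ForEach-Object { $_.PSObject.Properties.Name } |
    Sort-Object -Unique

# Re-project each row onto the full column set (counters a server didn't have
# simply come through blank) and write a single combined file for comparison
$rows |
    Select-Object -Property $allColumns |
    Export-Csv -Path 'C:\PerfBaseline\combined.csv' -NoTypeInformation
```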

After that point it’s a relatively simple task of displaying everything in a nicely formatted Excel spreadsheet and doing comparisons based on the metrics you’ve generated. If I had more time on my hands I probably would’ve tried to integrate it into something like a SharePoint BI site so we could do some groovy tracking and intelligence on it but, due to tight time constraints, I probably won’t get that far. Still, a well laid out spreadsheet isn’t a bad format for presenting such information, especially when you can colour everything green when things are going right.

I’d be keen to hear other people’s thoughts on how you’d approach a problem like this as trying to quantify the nebulous idea of “server performance” has proven to be far more challenging than I first thought it would be. Part of this is due to the data manipulation required but part was also ensuring that all aspects of a server’s performance were covered and converted down to readily digestible metrics. I think I’ve gotten close to a workable solution with this but I’m always looking for ways to improve it, or for a magical tool out there that will do this all for me ;)

3D Vaccines Pave the Way for Supercharged Immune Systems.

Vaccines are responsible for preventing millions upon millions of deaths each year through the immunity they grant us to otherwise life threatening diseases. Their efficacy and safety are undisputed (at least from a scientific perspective, which is the only way that matters honestly) and this mostly comes from the fact that they use our own immune system as the mechanism of action. A typical vaccine uses part of the virus to trigger the immune system to produce the right antibodies without us having to endure the potentially deadly symptoms that the virus can cause. This response is powerful enough to provide immunity from those diseases, so researchers have long looked for ways of harnessing the body’s natural defenses against other, more troubling conditions. A recent development could see vaccines used to treat a whole host of things you wouldn’t think would be possible.

3D Vaccine Structure

Conditions that are currently considered terminal, like cancer, often stem from the body lacking the ability to mount a defensive response. For cancer this is because the cells themselves look the same as normal healthy cells, despite their tendency to reproduce in an uncontrolled fashion, which means the immune system ignores them. These cells do have signatures that we can detect, however, and we can actually program people’s immune systems to register those cells as foreign, triggering an immune response. Unfortunately this treatment (which relies on extracting the patient’s white blood cells, turning them into dendritic cells and programming them with the tumour’s antigens) is expensive and of limited ongoing effectiveness. The new treatment devised by researchers at the National Institute of Biomedical Imaging and Bioengineering uses a novel method which drastically increases its effectiveness and duration.

The vaccine they’ve created uses 3D nano structures which, when injected into a patient, form a sort of microscopic haystack (pictured above). These structures can be loaded with all sorts of compounds; in this particular experiment they were loaded with the antigens found on a specific type of cancer cell. Once these rods have been injected they capture within them the dendritic cells that are responsible for triggering an immune response. The dendritic cells are then programmed with the cancer antigens and, when released, trigger a body wide immune response. The treatment was highly effective in a mouse model, with a 90% survival rate for animals that would have otherwise died at 25 days.

The potential for this is quite staggering as it provides us another avenue to elicit an immune response, one that appears to be far less invasive and more effective than current alternatives. Of course such treatments are still years away from seeing clinical trials but with such promising results in the mouse model I’m sure it will happen eventually. What will be interesting to see is whether this method of delivery can be used to deliver traditional vaccines as well, potentially paving the way for more vaccines to be administered in a single dose. I know it seems like every other week we come up with another cure for cancer but this one seems to have some real promise behind it and I can’t wait to see how it performs in us humans.

VMware Targets OpenStack with vSphere 6.

Despite the massive inroads that other virtualization providers have made into the market VMware still stands out as the king of the enterprise space. Part of this is due to the maturity of their toolset, which is able to accommodate a wide variety of guests and configurations, but they’ve also got the largest catalogue of value-adds which helps vastly in driving adoption of their hypervisor. Still, the asking price for any of their products has become something of a sore point for many and their proprietary platform has caused consternation for those looking to leverage public cloud services. With the latest release of their vSphere product VMware is looking to remedy at least the latter issue, embracing OpenStack compatibility for one of their distributions.

vmware_vsphere

The list of improvements coming with this new release is a long one (and I won’t bother repeating them all here) but suffice to say most of them were expected and in line with what we’ve gotten previously. Configuration maximums have gone up for pretty much every aspect, feature limitations have been extended and there’s a handful of new features that will enable vSphere based clusters to do things that were previously impossible. In my mind the key improvements that VMware have made in this release come down to Virtual SAN 6, Long Distance vMotion and, of course, their support for OpenStack via their VMware Integrated OpenStack release.

Virtual SAN always felt like a bit of an also-ran when it first came out due to the rather stringent requirements it had around its deployment. I remember investigating it as part of a deployment I was doing at the time, only to be horrified at the fact that I’d have to deploy a vSphere instance at every site I wanted to use it at. The subsequent releases have shifted the product’s focus significantly and it now presents a viable option for those looking to bring software defined datacenter principles to their environment. The improvements that come in 6 are most certainly cloud focused, with things like Fault Domains and All Flash configurations. I’ll be very interested to see how the enterprise reacts to this offering, especially for greenfields deployments.

Long Distance vMotion might sound like a minor feature but as someone who’s worked in numerous large, disparate organisations the flexibility it will bring is phenomenal. Right now the biggest issue most organisations face when maintaining two sites (typically for DR purposes) is getting workloads between those sites, often requiring a lengthy outage process to do it. With Long Distance vMotion you can make both sites active and simply vMotion workloads between them, a vastly superior solution that provides many of the benefits of SRM without the required investment and configuration.

The coup here though is, of course, the OpenStack compatibility through VMware’s integrated distribution. OpenStack is notorious for being a right pain in the ass to get running properly, even if you already have staff who have had some experience with the product set in the past. VMware’s solution to this is to provide a pre-canned build which exposes all the resources in a VMware cloud through the OpenStack APIs for developers to utilize. Considering that OpenStack’s lack of good management tools has been, in my mind, one of the biggest challenges to its adoption, this solution from VMware could be the kick in the pants it needs to see some healthy adoption rates.

It’s good to see VMware jumping on the hybrid cloud idea as I’ve long been of the mind that that will be the solution going forward. Cloud infrastructure is great and all but there are often requirements it simply can’t meet due to its commodity nature. Going hybrid with OpenStack as the intermediary layer will allow enterprises to take advantage of these APIs whilst still leveraging their investment in core infrastructure, utilizing the cloud on an as-needed basis. Of course that’s the nirvana state but it seems to get closer to realisation with every new release so here’s hoping VMware will be the catalyst to finally see it succeed.

Dying Light: Kick Squad, Transform and PARKOUR OUT.

It’s an unfortunate fact that creating a new IP is always fraught with danger. The wider gaming community is incredibly hard to predict and seemingly minor decisions on certain mechanics can have a far reaching impact on how they perceive your game. There is a good chunk of this community that hungers for new and innovative content however, and should you strike a chord with them a new IP can quickly turn itself into a dynasty of its own. Dying Light is Techland’s most recent new IP, taking the lessons learned from Dead Island and using them to craft a better experience in a new world. For the most part they pull this off, although the key differentiating mechanic is both its greatest and worst asset.

Dying Light Review Screenshot Wallpaper Title Screen

The city of Harran is in chaos. Months ago a mysterious outbreak occurred that turned people into ravenous monsters, feasting upon the flesh of others and spreading the contagion at a rapid pace. The city was quickly walled off however, containing the spread and ensuring it wouldn’t cause a worldwide apocalypse. An organization called the Global Relief Effort has been instrumental in ensuring that some semblance of order remains, appointing a man called Kadir Sulaiman to keep the peace. However a tragedy involving his brother has sent him rogue and he is threatening to release a file that could put thousands of lives at risk. It’s up to you, Kyle Crane, to stop him and save not only the people of Harran, but the world at large.

Dying Light is built on Techland’s own Chrome Engine 6 which is exclusively for PC and next gen platforms. There’s a notable step up in graphics, in everything from textures to lighting to the detail of the models, which predictably sent my rig into slideshow mode in the more action heavy sequences. It’s definitely an evolutionary step rather than a revolutionary one as it still retains the same feel that Dead Island had in terms of graphics and effects, something which I noted early on before I found out that it was the same developer. That being said the environments are much bigger and broader in scope with a lot more attention to detail given the exploration heavy nature that the game has now taken on. In a nutshell it’s a solid amount of progress for Techland and it will be very interesting to compare and contrast it against Dead Island 2 when it debuts.

Dying Light Review Screenshot Wallpaper Swingin Mah Pipe

Dying Light is an open world, first person survival horror game that blends in a lot of RPG elements including talent trees and crafting. Those of you who’ve played Dead Island will find many of the mechanics very similar, however many of them have been streamlined so you spend a lot less time diving through menus. Unlike its predecessor, Dying Light includes a heavy exploration aspect, allowing you to clamber all over everything in the world in good old fashioned parkour styling. This radically changes how the game plays out, giving you a vast array of options in how you tackle each situation. Finally you’ll engage in melee combat using crude tools like pipes and wrenches all the way up to fully automatic weaponry. All in all it’s best summed up as an improved version of Dead Island, although some of the improvements aren’t without their drawbacks.

The combat in Dying Light will likely be a unique experience for everyone as there’s a huge number of combinations of weapons, modifications and talent builds that all affect how you hack and slash your way through the game. The melee aspect is the most polished although it still suffers from the trials and tribulations that are first person melee combat. Quite often you’ll find weapons (especially big two handed ones) not connecting like you think they should, requiring almost frame perfect timing to get them to land properly. The guns are by far the weakest aspect of the combat as they just don’t feel polished enough, especially when compared to the melee weapons. The parkour aspect allows you to alter the dynamic quite a lot, often letting you use the environment to gain a significant advantage.

Dying Light Review Screenshot Wallpaper Gas Mask Man

Indeed, as I alluded to earlier, the parkour/exploration aspect of Dying Light is simultaneously the best and worst aspect of the game. The good of it is that it adds a whole new layer onto the formula that Techland set up in Dead Island, significantly opening up the map to an incredible amount of exploration that is quite rewarding. There’s still a fair chunk of jumping puzzles which only have one proper solution but other than that you’re free to find the best angle of attack for your current challenge, something which can make the difference between a mission being a complete breeze and a total nightmare. Once you get the grappling hook upgrade it gets even better, enabling you to travel across the map at inhuman speed and getting you out of jams that would’ve otherwise resulted in your death. Suffice to say the good of the parkour is really good, but it’s marred by its less than stellar aspects.

There are numerous points in the game where the parkour simply doesn’t flow like you’d expect it to. The visual cues for what you can and can’t climb on aren’t terribly consistent and judging whether or not you can make a particular jump is more of an art than a science. Worse, the hit detection for your character latching onto things fails way more often than it should, often sending you plummeting to the ground with no indication of why. Worst of all, when you do get the grappling hook it will likely be disabled with the cop out message “You’re too exhausted to use your grappling hook right now”, making the upgrade worthless. These are fixable issues, and undoubtedly something that will get patched in later versions, however that doesn’t change the fact that some of the parkour heavy sections can be seriously frustrating.

Dying Light Review Screenshot Wallpaper Talents

The talent system is well thought out, splitting your character’s progression into 3 main categories: survivor, agility and power. The survivor tree is levelled up by completing objectives and gives you access to ancillary skills that will help you survive in Harran. Agility is progressed by simply moving around the world and makes that movement easier. Finally the power tree is all about zombie killing, turning you from a schmuck wielding a pipe into a whirling death machine. With 3 different things to level up, progression is consistent and constant, ensuring you’re never too far away from unlocking something to make you just that little bit better. Honestly though, apart from the grappling hook most of the upgrades aren’t hugely impactful, but after a while the sum of their parts starts to add up to something greater.

The crafting system is very rewarding and retains many of the characteristics from Dead Island. All crafting reagents, bar the base weapons, can be held on your character in unlimited quantities, ensuring that you always have the materials to craft things. Gone is the requirement for using a bench or similar, enabling you to craft whatever you need whenever you need it. The only issue I have with the system is that weapon upgrades (not modifications) cannot be crafted and so most of your weapons will likely have their upgrade slots unused. I’m pretty sure this isn’t me missing a key mechanic in the game either, as my numerous Google searches on the issue came up blank. Suffice to say, whilst I think the crafting system is strong in Dying Light, it seems like one aspect was overlooked.

Dying Light Review Screenshot Wallpaper Zombie in Safe Zone

Dying Light was a relatively smooth experience for me, free of any major issues like crashing or game breaking bugs. However there were numerous quirks where things happened that shouldn’t have, like the picture above where my safe zone was somehow infested with zombies despite me having just cleared it out. There was also the rather unsettling habit of my character screaming, yelling or saying something random every time I loaded in, which often made me think I had been dropped into the middle of combat before realising it was just him making noise for no reason. It kind of feels like a Bethesda release if I’m honest, where the core game is solid but stuff around the periphery is a little wonky and will likely take a couple of patch cycles to sort out.

The story feels, at best, mediocre due mostly to its predictability and the pacing issues that are present throughout most of the campaign missions. Now I’m willing to admit part of this is likely due to my campaign-first playstyle for these kinds of games (putting me at less than half of total completion by the end) however I’ve played several other games that have managed to get that right without relying on side missions to flesh things out. Combine that with the obvious plot twists and highly predictable emotional climaxes and you end up with a story that’s enough to drive you along but not enough to make you empathize with the characters. Indeed it’s one of the few aspects that doesn’t improve on Dead Island at all, a right shame considering that it was one of the more heavily criticised aspects.

Dying Light Review Screenshot Wallpaper Rais

Dying Light is a solid new IP for Techland, taking the essence of what made Dead Island great and translating it into a whole new game which stands well on its own. There’s a lot to love in Dying Light, from the parkour to the visceral combat to the crafting system that allows you to create weapons of untold destructive power. However much of the experience is marred by various issues, ranging from the fixable gameplay variety to the mediocre story which doesn’t add much overall. Suffice to say I still think it’s worth playing, just maybe not by yourself; grab a bunch of mates and a few beers to take the edge off the more frustrating aspects.

Rating: 8.5/10

Dying Light is available on PC, PlayStation 4 and Xbox One right now for $71.99, $78 and $78 respectively. Game was played on the PC with a total of 15 hours of playtime and 46% of the achievements unlocked.

FCC to Solidify Net Neutrality Under Title II Provisions.

It’s undeniable that the freewheeling nature of the Internet is behind the exponential growth it has experienced. It was a communications platform unencumbered by corporate overlords and free from gatekeepers, which enabled people around the world to communicate with each other. However the gatekeepers of old have always tried to claw back some semblance of control wherever they can, imposing data caps, premium services and charging popular websites extra to give their customers preferred access. Such things go against the pervasive idea of Net Neutrality that is a core tenet of the Internet’s strength, however the Federal Communications Commission (FCC) in the USA is looking to change that.

tom-wheeler-fcc

FCC chairman Tom Wheeler has announced today that the commission will be seeking to classify Internet services under its Title II authority, which would see them regulated in such a way as to guarantee the idea of net neutrality, ensuring open and unhindered access. The rules wouldn’t just be limited to fixed line broadband services either, as Mr Wheeler stated this change in regulation would also cover wireless Internet services. The motion will have to be voted on before it can be enacted in earnest (and there’s still the possibility of Congress undermining it with additional legislation) however given the current makeup of the FCC board it’s almost guaranteed to pass, which is a great thing for the Internet in the USA.

This will go a long way towards combating the anti-competitive practices that a lot of ISPs are engaging in. Companies like Netflix have been strong-armed in the past into paying substantial fees to ISPs to ensure that their services run at full speed for their customers, something which only benefits the ISP. Under the Title II changes it would be illegal for ISPs to engage in such behaviour, ensuring that all packets that traverse the network are given the same priority. No Internet based company would then have to pay ISPs just so their services ran acceptably, which is hugely beneficial to Internet based innovators.

Of course ISPs have been quick to paint these changes in a negative light, saying that with this new kind of regulation we’re likely to see an increase in fees and all sorts of things that will trash anyone’s ability to innovate. Pretty much all of their concerns stem from the fact that they will be losing revenue from the deals they’ve cut, ones that are directly at odds with the idea of net neutrality. Honestly I have little sympathy for them as they’ve already profited heavily from government investment and from regulation that ensured competition between ISPs was kept to a minimum. The big winners in all of this will be consumers and open Internet providers like Google Fiber, things which are the antithesis of their outdated business models.

Hopefully this paves the way for similar legislation and regulation to make its way around the world, freeing the Internet from the constraints of its corporate overlords. My only fear is that Congress will mess with these provisions after the changes are made, but hopefully the current administration, which has gone on record in support of net neutrality, will put the kibosh on any plans to that effect. In any case the future of the Internet is looking brighter than it ever has and hopefully that trend will continue globally.

Britain Approves 3 Parent Babies.

Modern in-vitro fertilisation (IVF) treatments are a boon to couples who might otherwise not be able to conceive naturally. They’re also the only guaranteed method by which couples who carry inherited conditions or diseases can avoid passing them on to their offspring, through a process called preimplantation genetic diagnosis. However current methods are limited to selection only, being able to differentiate between a set of potential embryos and selecting the most viable ones. New techniques have been developed that can go further than this, replacing damaged genetic material from one parent with that of another individual, creating a child that essentially has three parents but none of the genetic defects. Up until today such a process wasn’t strictly legal, however the UK has now approved the method, opening the treatment up to all those affected.

Mitochondria Repair Process

The process is relatively straightforward, involving the standard IVF procedure initially with the more radical steps following later. For this particular condition, where the mitochondria (which are essentially the engines of our cells) are damaged, the nucleus of a fertilized (but non-viable) embryo can be transplanted into a healthy donor egg which can then be implanted. Alternatively the egg itself can be repaired in much the same fashion before fertilization occurs. The resulting embryo then doesn’t suffer from the mitochondrial defect and will be far more likely to result in a successful pregnancy, much to the joy of numerous people seeking such treatment.

Of course when things like this come up the conversation inevitably tends towards designer babies, genetic modification and all the other “playing god” malarkey that seems to plague embryo related treatments. For starters this treatment, whilst it does give the child three parents, doesn’t fool around with the embryo’s core genetic material. Instead it simply replaces the damaged or non-functional mitochondria from one person with those of another individual. This will have no more influence on any of the child’s characteristics than the environment they grew up in. Although, to be perfectly honest, I wouldn’t see any issue with people going down to a deeper level anyway, for multiple reasons.

We’re already playing fast and loose with the natural way of doing things thanks to the numerous treatments at our disposal that have rapidly increased life expectancy across the globe. If you indulge in such treatments then you’re already playing god, as you’re interfering with the natural ways the world kills things off. Extending such treatments to our ability to procreate isn’t much of a stretch honestly, and should we be able to create the genetic best of ourselves through science then I really can’t see a problem with it. Sure there needs to be some ethical bounds put on it, just like there are for any kind of medical treatment, but I don’t see being able to choose your baby’s hair or eye colour as being that far removed from the treatments we currently use to select the best embryos for IVF.

That’s the transhumanist in me talking however and I know not everyone shares my rather liberal views on the subject. Regardless, this treatment is nowhere near that and simply provides an opportunity to those who didn’t have it before. Hopefully the approval of this method will extend to other treatments as well, ensuring that the option to procreate is available to everyone, not just those of us who were born with the genetic capability to do so.

Raspberry Pi 2 to Run Windows 10.

It’s not widely known that Microsoft has been in the embedded business for quite some time now with various versions of Windows tailored specifically for that purpose. Not that Microsoft has a particularly stellar reputation in this field, as most of the time people only find out something was running Windows when it crashes spectacularly. If you wanted to tinker with it yourself the process to do so was pretty arduous, which wasn’t very conducive to generating much interest in the product. Microsoft seems set to change that however, with Windows 10 slated to run on the beefed up Raspberry Pi 2 and, best of all, it will be completely free to use.

Raspberry Pi 2

Windows has supported the ARM chipset that powers the Raspberry Pi since the original Windows 8 release, however the diminutive specifications of the board precluded it from running even the cut down RT version. With the coming of Windows 10 Microsoft is looking to develop an Internet of Things (IoT) line of Windows products specifically geared towards low power platforms such as the Raspberry Pi. Better still, the product team behind those versions of Windows has specifically included the Raspberry Pi 2 as one of their supported platforms, meaning that it will work out of the box without needing to mess with drivers or other configuration details. Whilst I’m sure the majority of Raspberry Pi 2 users will likely stick to their open source alternatives, the availability of a free version of Windows for the platform does open it up to a whole host of developers who might not have considered it previously.

The IoT version of Windows is set to come in three different flavours: Industry, Mobile and Athens, with a revision of the .NET Micro Framework for other devices that don’t fall into one of those categories. Industry is essentially the full version of Windows with features geared towards the embedded platform. The Mobile version is, funnily enough, geared towards always-on mobile devices but still retains much of the capability of its fully fledged brethren. Athens, the version that’s slated to be released on the Raspberry Pi 2, is a “resource focused” version of Windows 10 that still retains the ability to run Universal Apps. There’ll hopefully be some more clarity around these delineations as we get closer to Windows 10’s official release date but suffice to say, if the Raspberry Pi 2 can run Universal Apps it’s definitely a platform I could see myself tinkering with.

These new flavours of Windows fit into Microsoft’s broader strategy of trying to get their ecosystem into as many places as they can, something they attempted to start with the WinRT framework and have reworked with Universal Apps. Whilst I feel that WinRT had merit it’s hard to say it was successful in achieving what it set out to do, especially with the negative reception Metro Apps got from the wider Windows user base. Universal Apps could potentially be the Windows 7 to WinRT’s Vista, a similar idea reworked and rebranded for a new market that finds the feet its predecessor never had. The IoT versions of Windows are simply another string to this particular bow, but whether or not it’ll pan out is not something I feel I can accurately predict.

NASA Investigating Nuclear Propulsion for Future Mars Missions.

Moving things between planets is a costly exercise no matter which way you cut it. Whilst we’ve come up with some rather ingenious ideas for doing things efficiently, like gravity assists and ion thrusters, these can only take us so far and the trade offs usually come in the form of extended duration. For our robotic probes this is a no brainer as machines are more than happy to while away the time in space whilst their fleshy counterparts do their bit back here on Earth. For sending humans (and larger payloads) however these trade offs are less than ideal, especially if you want to do round trips in a reasonable time frame. Thus we have always been on the quest to find better ways to sling ourselves around the universe and NASA has committed to investigating an idea which has lain dormant for decades.

Orion_NTP

NASA has been charged with the task of getting humans to Mars sometime in the 2030s, something which might not sound like an ambitious feat (but it is, thanks to the budget they’ve got to work with). There are several technical hurdles that need to be overcome before this can occur, not least of which is developing a propulsion system that will be able to get them there in a relatively short timespan. Primarily this is a function of the resources required to keep astronauts alive and functioning in space for that length of time without the continual support of launches from home. Current chemical propulsion will get us there in about 6 months which, whilst feasible, still means that any mission there would take over a year. One kind of propulsion that could cut that time down significantly is Nuclear Thermal, which NASA has investigated in the past.

There are numerous types of Nuclear Thermal Propulsion (NTP), however the one showing the most promise in terms of feasibility and power output is the Gas Core Reactor. Mostly this comes from the design’s high specific impulse, which allows it to extract far more thrust from a given amount of propellant and would prove invaluable for decreasing mission duration. Such designs were previously explored as part of the NERVA program back in the 1970s, however that work was shelved when the supporting mission to Mars was cancelled. With another Mars mission back on the books NASA has begun investigating the technology again as part of the Nuclear Thermal Rocket Element Environmental Simulator (NTREES) at their Huntsville facility.
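
As a quick refresher on why specific impulse matters (this is the standard Tsiolkovsky rocket equation rather than anything specific to the NTREES work), the velocity change a stage can deliver scales directly with it:

\[
\Delta v = I_{sp} \, g_0 \, \ln\!\left(\frac{m_0}{m_f}\right)
\]

where m0 and mf are the stage’s masses before and after the burn and g0 is standard gravity. Chemical engines top out at around 450 seconds of specific impulse, solid core nuclear thermal designs roughly double that, and gas core concepts promise several times more again, which is where the dramatically shorter transit times come from.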

NTP systems likely wouldn’t be used for the initial launch; instead they’d form part of a later stage, used once the craft had made it to space. This negates many of the potential negative aspects, like radioactive material being dispersed into the atmosphere, and would allow for some concessions in the designs to increase efficiency. Several potential craft have been drafted (including the one pictured above) which use this idea to significantly reduce travel times between planets or, in the case of supply missions, dramatically increase their effective payload. Whether any of these will see the light of day is up to the researchers and mission planners at NASA but there are few competing designs that provide as many benefits as the nuclear options do.

It’s good to see NASA pursuing alternative ideas like this as they could one day become the key technology for humanity to spread its presence further into the universe. The decades of chemical based rocketry behind us have been very fruitful but we’re fast approaching the limitations of that technology and we need to be looking further ahead if we want to keep pace with our ambitions. With NASA (and others) investigating this technology I’m confident we’ll see it soon.

Tengami: A Fold Between Worlds.

Games aspire to be many things but rarely do they aspire to emulate another medium, especially a physical one. The burgeoning genres of cinematic style games and walking simulators have their visions set towards emulating the medium of film but past that the examples are few and far between. You can then imagine my intrigue when I first saw Tengami, a game that seeks to emulate a pop-up book, lavishly styled to look like it was set within feudal Japan. It’s an ambitious idea, one that could easily go south if implemented incorrectly, but I’m happy to report that the whole experience is quite exceptional, especially when it comes to the sound design and music.

Tengami Review Screenshot Wallpaper Title Screen

Tengami is probably the first game in a long time where I can’t sum up the opening plot in a single paragraph as, whilst there are some skerricks of story hidden within the short poems between scenes, there’s really not a lot in them. As far as I can tell you’re searching for the blossoms to restore your cherry tree back to life, although your motivation for doing so isn’t exactly clear. Still the environments provide enough atmosphere and presence to give you a kind of motivation to move forward, if only to see more of the paper-laden world you’ve found yourself in.

The art style of Tengami really is its standout feature, done in pop-up book style using real paper textures that the devs scanned in. Initially it had a bit of a LittleBigPlanet feel to it, with the real world textures and 2D movement in a 3D world thing going, however it quickly moves away from that and firmly establishes its unique feel. All of the environments look and feel like they’re straight out of a pop up book, complete with the stretching and crumpling noises when you move various elements around. Tengami is simply a joy to look at and fiddle with, evoking that same sort of feeling you got when playing around with one as a kid.

Tengami Review Screenshot Wallpaper Autumns Fall

Coming in at a very close second is the original soundtrack done for Tengami by David Wise. The music seems to swell and abate at just the right times and the score is simply incredible. I’m more than willing to admit that my love of the soundtrack might stem from my interest in all things Japanese but looking around at other reviews shows that I’m not the only one who thoroughly enjoyed it. I’m not sure if he was in charge of the foley as well but the soothing sounds of waterfalls, the ocean or just the breeze in quieter sections were beautiful. If you’re playing Tengami on a mobile device I would wholeheartedly recommend doing so with a pair of headphones as otherwise you’d really be missing out.

With all that focus on the art and sound it would be somewhat forgivable if Tengami was a little light on mechanics, but thanks to its unique pop-up book style it’s actually quite an innovative little title. As you make your way through the world you’ll encounter parts which can be folded in and out of existence, between two planes or between different states. It’s like a pop-up book that’s able to exist in a higher dimension, able to shift elements in and out as it pleases. It’s quite intuitive and for the most part you’ll be able to quickly figure out what you need to do or, at least, stumble your way through by trying every option.

Tengami Review Screenshot Wallpaper Sailing on the Ocean

The puzzles are pretty straightforward, often only a couple slides or folds away from being completed. The challenge ramps up gradually as you progress through every scene and towards the end they actually start to become quite challenging. However the one fault here is that new mechanics aren’t introduced in a logical fashion and, if you’re like me and know a little Japanese, you can find yourself trying to solve a puzzle in completely the wrong way. The hint system (and the full official walkthrough) are enough to make sure that you won’t be stuck at these for too long but it’s still a mistake that a lot of these minimalist type games make.

The only drawback to Tengami’s incredible design and polish seems to be its length, as the game is incredibly short, clocking in at just over an hour and a half for my playthrough. This is not to say that I would’ve preferred them to cut corners on other things in order to extend the play time, far from it, more that the focus is on quality rather than quantity. For some this can be a turn off, especially when you consider the current asking price, but for me the price of admission was well worth the short time I got to spend with it.

Tengami Review Screenshot Wallpaper Deaths Embrace

Tengami is a beautifully crafted experience, recreating the tactile feel of a pop-up book in a new medium and elevating it with impressive visuals and an incredible soundscape. It’s a short and succinct experience, choosing not to overstay its welcome in favour of providing a far more highly polished experience. As a game it’s quite simple, and suffers a little due to its minimalist practices, but overall it’s a great experience, one I’m sure multitudes of players will enjoy.

Rating: 8.75/10

Tengami is available on PC, iOS and Wii U right now for $9.99, $6.49 and $9.99 respectively. Game was played on the PC with 1.5 hours of total playtime and 100% of the achievements unlocked.