The brain is still largely a mystery to modern science. Whilst we’ve mapped out which parts do what, we’re still in the dark about how they manage to accomplish their various feats. Primarily this is a function of the brain’s inherent complexity, containing some 100 trillion connections between the billions of neurons that make up its meagre mass. However, like all seemingly insurmountable problems, the challenge of decrypting the brain’s functions is made easier by looking at smaller parts of it. Researchers at the USC Viterbi School of Engineering and the Wake Forest Baptist Medical Center have been doing just that, and have been able to recreate a critical part of the brain’s functionality in hardware.
The researchers have recreated part of the hippocampus, the section of the brain that’s responsible for translating sensory input into long term memories. In patients who suffer from diseases like Alzheimer’s this is usually the first part to be damaged, preventing them from forming new memories (but leaving old ones unaffected). The device they have created can essentially replace part of the hippocampus, providing the same encoding functions that a non-damaged section would. Such a device has the potential to drastically increase the quality of life of many people, enabling them to once again form new memories.
The device comes out of decades of research into how the brain processes sensory input into long term memories. The researchers initially tested their device on laboratory animals, implanting it into healthy subjects. They then recorded the input and output of the hippocampus, showing how the signals were translated for long term storage. This data was then used to create a model of this section of the hippocampus, allowing the device to take over the job of encoding those signals. Previous research showed that, even when the animal’s long term memory function was impaired through drugs, the prosthesis was able to generate new memories.
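The modelling step boils down to learning a mapping from the recorded inputs to the outputs the healthy tissue produced, then using that model to stand in for the tissue. Here’s a deliberately minimal sketch of that idea using fake data and a simple linear model; the researchers’ actual model is far more sophisticated than this, so treat everything here as illustrative only:

```python
import numpy as np

# Hypothetical illustration: learn a mapping from hippocampal input
# activity to output activity, then use it to "stand in" for the tissue.
rng = np.random.default_rng(0)

# Fake recordings: 200 time bins, 8 input channels, 4 output channels.
X = rng.poisson(3.0, size=(200, 8)).astype(float)   # input spike counts
W_true = rng.normal(size=(8, 4))                    # the unknown transformation
Y = X @ W_true + rng.normal(scale=0.1, size=(200, 4))

# Fit the model from the recorded input/output pairs (least squares).
W_fit, *_ = np.linalg.lstsq(X, Y, rcond=None)

# The "prosthesis" now predicts what the healthy tissue would have output.
Y_pred = X @ W_fit
error = np.abs(Y_pred - Y).mean()
print(f"mean prediction error: {error:.3f}")
```

The key point the sketch captures is that the device doesn’t need to understand what the memories mean; it only needs to reproduce the input-to-output transformation that the damaged tissue used to perform.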
That in and of itself is impressive, however the researchers have also been replicating their work with human patients. Using nine test subjects, all of whom had the requisite electrodes already implanted in the right regions to treat chronic seizures, the researchers used the same process to develop a human-based model. Whilst they haven’t yet used it to help create new memories in humans, they have shown that their model produces the same signals as the hippocampus does in 90% of cases. For patients who currently have no ability to form new long term memories this could very well be enough to drastically improve their quality of life.
This research has vast potential as there are many parts of the brain that could be mapped in the same way. The hippocampus is critical in the formation of non-procedural long term memories, however there are other sections, like the motor and visual cortices, which could benefit from similar mapping. There’s every chance that those sections can’t be mapped directly like this, but it’s definitely an area of potentially fruitful research. Indeed, whilst we still don’t know how the brain stores information, we might be able to repair the mechanisms that feed it, and that could help a lot of people.
We’ve known for some time that water exists in some forms on Mars. The Viking program, which consisted of both orbiter and lander craft, showed that Mars’ surface had many characteristics that appear to have been shaped by water. Further probes such as Mars Odyssey and the Phoenix Lander showed that much of the present day water that Mars holds is at the poles, trapped in the vast frozen ice caps. There’s been a lot of speculation about how liquid water could exist on Mars today, however no conclusive proof had been found. That was until today, when NASA announced it had proof that liquid water flows on Mars, albeit in a very salty form.
The report comes out of the Georgia Institute of Technology with collaborators from NASA’s Ames Research Center, Johns Hopkins University, University of Arizona and the Laboratoire de Planétologie et Géodynamique. Using data gathered from the Mars Reconnaissance Orbiter the researchers identified seasonal geologic features on Mars’ surface. These dark lines (pictured above), dubbed recurring slope lineae (RSL), would change over time, darkening and appearing to flow during the warmer months and then fading during the colder months. It has been thought for some time that these slopes were indicative of liquid water flows, however there wasn’t any evidence to support that theory.
This is where the MRO’s Compact Reconnaissance Imaging Spectrometer for Mars (CRISM) comes into play. This instrument was specifically designed to detect water on Mars by looking at varying wavelengths of light reflected from the planet’s surface. Once the target sites were identified, CRISM was pointed at them and their surface composition analysed. What was found at the RSL sites were minerals called hydrated salts which, when mixed with water, significantly lower its freezing point. Interestingly these hydrated salts were only detected in places where the RSL features were particularly wide; other places, where the RSLs were slimmer, did not show any signs of hydrated salts.
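Detection of this kind comes down to comparing the reflectance spectrum observed at a site against laboratory reference spectra of candidate minerals. Here’s a toy sketch of that matching step; the wavelengths and absorption dips below are made up for illustration, not actual CRISM band positions or hydrated salt features:

```python
import numpy as np

# Toy spectra over 100 wavelength samples. A hydrated mineral shows
# absorption dips (placed arbitrarily here -- not real band positions).
wavelengths = np.linspace(1.0, 2.6, 100)  # microns

def spectrum_with_dips(dip_centres, depth=0.2):
    """Flat reflectance spectrum with Gaussian absorption dips."""
    s = np.ones_like(wavelengths)
    for c in dip_centres:
        s -= depth * np.exp(-((wavelengths - c) ** 2) / 0.002)
    return s

reference_hydrated = spectrum_with_dips([1.4, 1.9])            # lab reference
observed_wide_rsl = spectrum_with_dips([1.4, 1.9], depth=0.18)  # dips present
observed_thin_rsl = np.ones_like(wavelengths)                   # featureless

def match_score(observed, reference):
    # Correlate the deviations from a flat spectrum; 1.0 = perfect match.
    a, b = 1 - observed, 1 - reference
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

print(match_score(observed_wide_rsl, reference_hydrated))  # high: salts detected
print(match_score(observed_thin_rsl, reference_hydrated))  # zero: no detection
```

This also illustrates why the slimmer RSLs came up empty: if the absorption features are too weak at the instrument’s resolution, there’s simply no signal to correlate against.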
These salts, called perchlorates, have been seen by several other Mars missions, although never in hydrated form. These perchlorates can potentially keep water from freezing at temperatures down to -70°C. Additionally some of them can be used in the manufacture of rocket fuel, something which could prove quite valuable for future missions to Mars. Of course they’re likely not in a readily usable form, requiring some processing on site before they can be utilized.
Data like this presents many new opportunities for further research on Mars. It’s currently postulated that these RSLs are likely the result of a shallow subsurface flow which is wicking up to the surface when conditions are warmer. If this is the case then these sites would be the perfect place for a rover to investigate, as there’s every chance it could directly sample martian water. Considering that wherever we find liquid water on Earth we find life, there’s great potential for the same to be true on Mars. If there isn’t, that will also tell us a lot, which means it’s very much worth investigating.
All gamers have an idea of a game they want to make. It could be anything from a novel mechanic through to a fully fleshed out story, but it’s there hanging around in the back of our minds. However those of us who’ve attempted to bring that idea into reality often come crashing into the cold hard truth of the games industry: making games is hard. For the precious few that make it through the process (and fewer still who see success from it) the scars of game development are forever burned into their psyche. The Magic Circle is a game that chronicles this journey, with all the dark humour and self-loathing that permeates much of the game industry.
To its fans The Magic Circle was a brilliant example of interactive fiction, a game deserving of the title of cult classic. The sequel, however, has been one of the most beleaguered projects in the history of gaming, having been in development for some 20 years with little to show for it. The creator’s perfectionism has kept the sequel in a perpetual state of unfinishedness, never being satisfied enough to ship anything. You are one of the game’s long-time fans, hired as a playtester for the current iteration. Whilst your experience confirms that yes, there is a game, it’s nowhere near complete. However when you finish the small section you’re contacted by an AI from a previous generation of the game, who shows you how to take control of this unfinished world.
The Magic Circle looks and feels like an unfinished game, although under the hood it’s anything but. The choice of a bleak black and white aesthetic for one world (and a low-res, 8-bit colour palette for the other) reinforces that unfinished feeling. Interestingly though, the whole world is properly textured, as evidenced by the fact that your character brings colour wherever it steps. It’s the kind of stuff you’d expect to see in a pre-alpha or early-beta indie game, although with an obvious layer of polish that would otherwise be missing from such early stage games. Suffice to say Question Games have done a good job of creating a “finished-unfinished” world.
Like most early stage games The Magic Circle is a mishmash of different ideas all cobbled together. The initial game starts out as something of a walking simulator, with you just taking in the scenery. However it quickly transforms into a kind of puzzle game where you can modify the behaviour of enemies and objects within the world. This can be something as simple as making something your ally instead of your enemy, or completely changing the way an object moves or interacts. This is how you start breaking the game, changing things around so you can access areas that you shouldn’t be able to. Finally, at the end, you’re put in charge of actually developing a game level and get reviewed on how fun it is. All the while you’re privy to commentary from the game’s developers, giving you an insight into the creator’s vision and why it’s never quite managed to be released.
The initial game modification section of The Magic Circle is quite fun as there are numerous different ways to approach many of the puzzles when you first start out. These start to thin out a bit as you get towards the later puzzles as most of them really only have one solution. Still the rudimentary control you have over the NPCs does present some rather fun opportunities like sending wave after wave of rats at the Hive Queen in an attempt to defeat her. Of course there’s only so much mucking about you can do before you’ve found all the secrets and want to move on. Thankfully that’s not hard at all and it’s at that point the game takes on a very meta twist.
It’s at this point you’re thrust into a demo game for E4 and given the choice of whether or not to muck with it. This then leads on to you watching them play the demo live on stage whilst all chaos breaks loose. After that you’re given the task of creating the sequel with a rudimentary level editor. It’s actually pretty interesting to try and figure out how to maximise the review score at the end, and the commentary given to you by Old Pro is quite entertaining. You’re then thrown back to your desktop, where you’re able to replay the game, redo your level or simply click around to find out some more details about the game.
It’s interesting to see a satirized version of events that are familiar to many gamers, namely sequels that seem to be forever in development due to their creator’s perfectionism. Indeed it feels like a game made more for developers, industry insiders and observers than anything else. If anything the story is more like a 3 hour long treatise on the pitfalls of developing a game and the potential boons for those who manage to stick it through. Whilst I enjoyed it, even Ishmael’s long rant about how it’s all about the player and their destructive wishes, I know that kind of story isn’t for everyone.
The Magic Circle demonstrates in a beautifully satirical way the agony that is game development. The world is expertly crafted to resemble a pre-alpha game that’s a mash of too many ideas, all coexisting in the same code base and interacting in unintended ways. This is reflected in the game play, which is based around messing with things and changing up behaviours so you can access things you otherwise wouldn’t be able to. The story is one that definitely has a specific target audience in mind and, whilst it might not be for everyone, definitely plays to its strengths as a piece of commentary on the industry. It might not meet my criteria for a must-play game for everyone but if, like me, you feel like a part of the greater games industry, then there’s definitely a lot to like in The Magic Circle.
The Magic Circle is available on PC right now for $19.99. Total play time was approximately 3 hours with 36% of the achievements unlocked.
Scale is something that’s hard to comprehend when it comes to celestial sized objects. The sheer vastness of space is so far beyond anything that we see in our everyday lives that it becomes incomprehensible. Yet in such scale I find perspective and understanding, knowing that the universe is far greater than anything going on in just one of its countless planets. To really grasp that scale though you have to experience it; to understand that even in our cosmic backyard the breadth of space is astounding. That’s just what the following video does:
Computing as we know it today is all thanks to one plucky little component: the transistor. This simple piece of technology, which is essentially an on/off switch that can be electronically controlled, is what has enabled the computing revolution of the last half century. However it has many well known limitations, most of which stem from the fact that it’s an electrical device, constrained by how fast signals can travel through wires and how much heat they generate along the way. Light suffers far less from both problems, so there’s been a lot of research into building a computer that uses light instead of electricity. One of the main challenges that an optical computer has faced is storage, as light is a rather tricky thing to pin down and the conversion process into electricity (so it can be stored in traditional memory structures) would negate many of the benefits. This might be set to change, as researchers have developed a non-volatile storage platform based on phase-change materials.
The research comes out of the Karlsruhe Institute of Technology with collaborations from the universities of Münster, Oxford, and Exeter. The memory cell which they’ve developed can be written at speeds of up to 1GHz, impressive considering most current memory devices are limited to somewhere around a fifth of that. The actual memory cell itself is made up of the phase-change material Ge2Sb2Te5, or GST for short, which can shift between crystalline and amorphous states. When this material is exposed to a high-intensity light pulse its state will shift. This state can then be read later on using less intense light, allowing a data cell to be written, read and erased.
One novel property that the researchers have discovered is that their cell is capable of storing data in more than just a binary format. You see the switch between amorphous and crystalline states isn’t binary like it is with a transistor, which essentially means that a single optical cell could store more data than a single electrical cell. Of course to use such cells with current binary architecture would mean that these cells would need a proper controller to do the translation, but that’s not exactly a new idea in computing. For a completely optical computer however that might not be required, but such an idea is still a way off from seeing a real world implementation.
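The capacity gain from multi-level cells is easy to quantify: a cell that can be set to any of N distinguishable states stores log2(N) bits. A quick sketch of packing a bitstream into such cells (the level counts here are illustrative, not figures from the paper):

```python
import math

def bits_per_cell(levels: int) -> float:
    # A cell with N distinguishable states holds log2(N) bits.
    return math.log2(levels)

# A binary cell stores 1 bit; a hypothetical 8-level
# phase-change cell stores 3 bits in the same cell.
print(bits_per_cell(2))   # 1.0
print(bits_per_cell(8))   # 3.0

def pack_bits(bits: list, levels: int) -> list:
    """Group a bitstream into cell values for an N-level cell."""
    k = int(math.log2(levels))        # whole bits per cell
    cells = []
    for i in range(0, len(bits), k):
        chunk = bits[i:i + k]
        value = int("".join(map(str, chunk)), 2)
        cells.append(value)
    return cells

# Six bits fit into just two 8-level cells instead of six binary ones.
print(pack_bits([1, 0, 1, 0, 1, 1], 8))  # [5, 3]
```

This translation between the binary world and the multi-level cells is exactly the job the controller mentioned above would perform, much like the controllers that sit in front of multi-level NAND flash today.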
The only thing that concerns me about this is the fact that it’s based on phase-change materials. There have been numerous devices based on them, most often in the realms of storage, which have purported to revolutionize the world of computing. However to date not one of them has managed to escape the lab, and the technology has always been a couple of years away. It’s not that they don’t work, they almost always do, more that they either can’t scale or producing them at volume proves to be prohibitively expensive. This light cell faces the unique challenge that a computing platform built for it doesn’t exist yet, and I don’t think it can compete with traditional memory devices without it.
It is a great step forward however for the realm of light based computing. With quantum computing likely being decades or centuries away from becoming a reality and traditional computing facing more challenges than it ever has we must begin investigating alternatives. Light based computing is one of the most promising fields in my mind and it’s great to see progress when it’s been so hard to come by in the past.
Nerve damage has almost always been permanent. For younger patients there’s hope for full recovery after an incident, but as we get older the ability to repair nerve damage decreases significantly. Indeed by the time we reach our 60s the best we can hope for is what’s called “protective sensation”, the ability to determine things like hot from cold. The current range of treatments is mostly limited to grafts, often using nerves from the patient’s own body to repair the damage, however even those have limited success in practice. That could all be set to change with the development of a process which can produce nerve regeneration conduits using 3D scanning and printing.
The process was developed by a collaboration of numerous scientists from the following institutions: University of Minnesota, Virginia Tech, University of Maryland, Princeton University, and Johns Hopkins University. The research builds upon one current cutting edge treatment which uses special structures to trigger regeneration, called nerve guidance conduits. Traditionally such conduits could only be produced in simple shapes, meaning they were only able to repair nerve damage in straight lines. This new treatment however can work on any arbitrary nerve structure and has proven to work in restoring both motor and sensory function in severed nerves both in-vitro (in a petri dish) and in-vivo (in a living thing).
How they accomplished this is really quite impressive. First they used a 3D scanner to capture the structure of the nerve they were trying to regenerate, in this case the sciatic nerve (pictured above). Then they used the resulting model to 3D print a nerve guidance conduit that was the exact size and shape required. This was then implanted into a rat that had a 10mm gap in its sciatic nerve (far too long to be sewn back together). The conduit successfully triggered the regeneration of the nerve, and after 10 weeks the rat showed a vastly improved ability to walk. Since this kind of treatment had only been verified on linear nerves before, the process shows great promise for regenerating much more complicated nerve structures, like those found in us humans.
The great thing about this is that it can be used for any arbitrary nerve structure. Hospitals equipped with such a system would be able to scan the injury, print the appropriate nerve guide and then implant it into the patient all on site. This could have wide reaching ramifications for the treatment of nerve injuries, allowing far more to be treated and without the requisite donor nerves needing to be harvested.
Of course this treatment has not yet been tested in humans but the FDA has approved similar versions of this treatment in years past which have proven to be successful. With that in mind I’m sure that this treatment will prove successful in a human model and from there it’s only a matter of time before it finds its way into patients worldwide. Considering how slow progress has been in this area it’s quite heartening to see dramatic results like this and I’m sure further research into this area will prove just as fruitful.
Movie tie-in games are some of the most derided games ever to grace our presence, and with good reason. Often they’re given a woefully insufficient amount of time to come up with a playable product, and quality is the first thing that hits the chopping block, leaving them bug-ridden messes of half-finished dreck. The last few years have seen a few titles rise above the filth however, and whilst none of them have been game of the year material they have been pleasant surprises. Mad Max is one such gem, taking the essence of the movie and distilling it down into a very playable experience.
The world lies barren, scorched by an intense nuclear war. Those that survived the collapse struggle on, scavenging what they can from the remnants of society that lie scattered about. This world now belongs to the ruthless and violent, with gangs and war bands patrolling the sandy dunes looking for people and places to pillage. You are Max, a survivor who has lost everything since the collapse and wants nothing to do with this world any more. So he has resigned himself to crossing the Plains of Silence in his Black on Black, an Interceptor capable of making the long journey. However his plans are foiled by Scabrous Scrotus, son of the warlord Immortan Joe, who steals everything from him. You are not so easily beaten, however, and you turn your eyes to recovering your Black on Black and making Scrotus pay for what he did.
The wasteland setting for Mad Max is quite beautiful with the plains stretching out to the horizon in every direction. It’s the definition of an open world game with nearly every bit of scenery that you can see being accessible and part of the game. There’s definitely been a lot of effort put into crafting certain aspects of the scenery, like the dust you kick up when going offroad and the slight changes in the howls that each engine emits. It’s also got enough visual variety that you don’t feel like you’re driving through the same place all the time as each area has its own distinct theme. Thankfully this all comes to you fully optimized, something which games with lots of open space like this often get wrong.
At first blush Mad Max is your typical open worlder, with all the standard trimmings of campaign missions, side missions and little direction as to which you should tackle when. If I were to compare it to recent open world titles it’d be somewhere in the middle between Far Cry 4 and Batman: Arkham Knight. You’ve got your typical progression in the form of skills and equipment, both for your vehicle (the Magnum Opus) and Max himself, some of which are locked behind story missions and others behind open world objectives. There’s camps for you to capture, places for you to explore and hordes of enemies bounding around for you to take out or avoid. Combat comes in two flavours: the stylized beat ’em up hand to hand combat while on foot as Max, as well as some in-car combat which is a little more rudimentary. Suffice to say I was surprised at just how much was crammed into this game given its origins as a movie tie-in.
If you’re a fan of the Arkham series’ combat then Mad Max is right up your alley, the controls and style being instantly familiar. There’s not as much variety in moves and finishers, however it’s still quite a challenge to rack up long combo streaks without getting interrupted. There are a few rough edges on the combat though, which really start to show in the later stages of the game. Essentially you can get yourself into a situation where there’s no way for you to block or counter an incoming move, ruining your chain (and potentially losing you an upgrade point). Usually this happens when you’re doing a finisher that triggers a mini-cutscene; being interrupted mid-cutscene feels unfair. There’s also a heavy reliance on consumables, which aren’t readily available or farmable in the world, for a lot of the big finisher moves, so they often go unused. Overall it’s a good emulation of Rocksteady’s combat style, just in need of a little more tuning.
The car combat is pretty simplistic by comparison, usually involving you ramming the other car into submission. As you progress through the story missions there are weapon upgrades that allow you to more effectively dispatch your enemies, like a harpoon that can rip wheels off, but the heavy investment required means you’ll have to forgo quite a few other upgrades to get them. Additionally, for the most part you don’t really need all the bells and whistles; just having the fastest car (both in terms of top speed and acceleration) is all that’s needed for most encounters. The final boss battle is the only exception, as you’ll struggle to finish it in a timely manner if you don’t have at least a Level 4 harpoon and another similarly upgraded weapon. It’d all probably be a lot better if the driving controls were more refined, as the slightly janky steering, even on cars with top-tier handling, makes things more difficult than they should be.
As you’d expect from an open world game there are numerous activities for you to do, most of which will provide some form of benefit. Clearing out camps, for instance, will net you a periodic amount of scrap, the currency that underpins the economy of Mad Max. Doing “projects” in strongholds will unlock benefits like a full water canteen or new types of missions to complete. Winning races will unlock a permanent location where you can fuel up your car whenever you want. For people who like to meander through games, picking and choosing whichever mission takes their fancy, this kind of thing is probably what they’re after. For me though these little side distractions just didn’t feel rewarding enough to bother with for long. In the end I’d only go on scrap hunting missions if I needed the currency to unlock the next campaign mission, which I only had to do a couple of times.
It’s not a perfect experience by any stretch of the imagination, as the above screenshot will attest. You see there’s no jumping in Mad Max, but there are multiple terrain heights, and in some instances you’ll find yourself trapped somewhere you can’t get out of. Some of these spots aren’t even as obvious as the one pictured above; in one particular mission I managed to roll over some pipes which I couldn’t roll back out of. It’s clear what’s missing here: the movement system isn’t coded to deal with situations where the difference in terrain height is above a certain threshold. Whilst not every game needs the parkour stylings of Assassin’s Creed, a more robust movement system would go a long way towards alleviating the unfortunately frequent problems that arise from the current simplistic implementation.
The story, if it were standing on its own, is fairly rudimentary, although since it serves as a kind of prequel to the world of the movie it’s a little more interesting. Strictly speaking it’s a separate story in terms of canon, and indeed Max’s character is quite different to the one portrayed in the cinema. However it does give you a little more insight into the reasons why Max ends up the way he is in the movie. Still it’s not much more than your typical action script, albeit bereft of some of the more common components in favour of more talk of cars as a religion and all the craziness that the movie demonstrated.
Mad Max is an example of what tie-in games can achieve when more than a token effort is put into them. The barren wasteland world is beautifully realised, with the landscapes reaching from horizon to horizon. The core game mechanics are mostly well executed, often getting close to the more mature brethren from which they draw inspiration. For fans of the open world genre there’s more than enough activity to keep you going for numerous hours on end. For people like me who aren’t so interested in the distractions, the game is still readily playable if you stick pretty much to campaign missions only; you’ll just have to use your skill rather than your scrap to win fights. Suffice to say I was surprised by just how playable Mad Max was, especially given its tie-in origins. If you’re one of the many raving fans of Fury Road then Mad Max is probably worth a look in.
Mad Max is available on PC, XboxOne and PlayStation4 right now for $59.99, $99.95 and $99.95 respectively. Game was played on the PC with approximately 13 hours of total playtime and 29% of the achievements unlocked.
Whilst I haven’t had a real programming job in the better part of a decade I have continued coding in my spare time for various reasons. Initially it was just a means to an end, making my life as an IT admin easier, however it quickly grew past that as I aspired to be one of those startup founders TechCrunch idolizes. Once I got over that, my programming exploits took on a more personal bent, with the programs I built mostly serving my own interests or fixing a problem I had. I recently dove back into one of my code bases (after having a brainwave about how it could have been done better) and, seeing nothing salvageable, decided to have a crack at it again. This time around, however, with far less time on my hands to dedicate to it, I started looking for quicker ways of doing things. It was then I realised that for all the years I’ve been coding I had been suffering from Not Invented Here syndrome, and that was likely why I found making progress so hard.
The notion behind Not Invented Here is that using external solutions, in the case of programming things like frameworks or libraries, to solve your problems isn’t ideal. The reasoning that drives this is wide and varied, but it often comes down to trust, not wanting to depend on others, or the idea that solutions you create yourself are better. For the most part it’s a total fallacy, as no one programmer can claim to be the expert in all fields and using third party solutions is common practice worldwide. However, like I found, it can manifest itself in all sorts of subtle ways, and it’s only after taking a step back that you can see its effects.
I have often steered clear of many frameworks and libraries, mostly because I felt the learning curve for them would be far too steep. Indeed when I’ve looked at other people’s code that makes use of them I often couldn’t make sense of it, due to the additional libraries they had made use of, and resigned myself to recoding things from scratch. Sometimes I’d do a rudimentary search to see if there was an easy answer to my problem, but often I’d just wrangle the native libraries into doing what I wanted. The end result of this was code that, whilst functional, was far more verbose than it needed to be. Then when I went back to look at it I found myself wondering why I did things that way, and decided there had to be a better one.
Fast forward to today and the new version of the application I’ve been working on makes use of numerous frameworks that have made my progress so much faster. Whilst there’s a little bloat from some frameworks, I’m looking at you Google APIs, it’s in the form of dlls and not code I have to maintain. Indeed much of the code that I had to create for handling edge cases and other grubby tasks is now handled much better by code written by people far more experienced with the problem space than I will ever be. Thus I’ve spent far less time troubleshooting my own work and a lot more time making progress.
I have to attribute part of this to the NuGet package management system in Visual Studio, which downloads, installs and resolves all dependencies for any framework you want to install. In the past such tasks would fall to the developer and would often mean chasing down various binaries, making sure you had the right versions and repeating the whole process when a new version was released. NuGet broke down that barrier, enabling me to experiment with various frameworks to meet my end goals. I know similar things have existed in the past, however I really only began to appreciate them recently.
There’s still a learning curve when adopting a framework, and indeed your choice of framework will mean some design decisions are made for you. However in the short time I’ve been working with them, libraries like HtmlAgilityPack and Json.NET have shown me that the time invested in learning them is far smaller than the time spent trying to do what they do myself. I’m sure this seems obvious to seasoned programmers out there, but for me, working with the mentality that I just needed to get things done, it never occurred to me that my approach was completely wrong.
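The lesson translates to any ecosystem: reach for a battle-tested library before writing one. As a stand-in for the .NET libraries mentioned above, here’s the same idea sketched in Python, where the standard library’s json module replaces what would otherwise be a fragile hand-rolled parser (the hand-rolled function below is a deliberately naive illustration, not code anyone should ship):

```python
import json

# The "use a library" approach: one call, with the edge cases (escapes,
# unicode, nesting) already handled by code far more battle-tested than mine.
data = json.loads('{"name": "widget", "tags": ["a", "b"], "price": 9.99}')
print(data["tags"])  # ['a', 'b']

# Versus the hand-rolled temptation: a parser like this looks fine until
# it meets escaped quotes, nested structures or malformed input.
def naive_get_string_field(raw: str, field: str) -> str:
    start = raw.index(f'"{field}"')
    start = raw.index(":", start) + 1
    start = raw.index('"', start) + 1
    return raw[start:raw.index('"', start)]

print(naive_get_string_field('{"name": "widget"}', "name"))  # widget
# ...but it breaks on {"name": "say \"hi\""} -- the library doesn't.
```

The verbose, home-grown version is exactly the kind of code I kept going back to and wondering why I’d written it that way.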
I guess where I’m going with all this is that should you find yourself attempting to solve a problem, the first thing you need to do is see if it’s been solved by someone else first. Chances are they’ve gone through the pain you’re about to put yourself through and can save you from it. Sure, you might not fully understand how they did it, and they might not do it the way you wanted to, but those things mean little if they save you time and emotional capital. I know for myself that if I had to redo everything I did previously I would be nowhere near where I am today, and likely would get no further.
You’d be forgiven for not knowing that Amazon founder Jeff Bezos had founded a private space company. Blue Origin, as it’s known, isn’t one for the spotlight: whilst it was founded in 2000 (2 years before SpaceX) it wasn’t revealed publicly until some years later. The company has had a handful of successful test launches, however, focusing primarily on suborbital space with Vertical Takeoff/Vertical Landing (VTVL) capable rockets. Indeed their latest test vehicle, the New Shepard, was successfully launched at the beginning of this year. Outside of that though you’d be hard pressed to find out much more about Blue Origin; however, today they announced that they will be launching from Cape Canaveral, using the SLC-36 complex that was previously used for the Atlas launch system.
It might not sound like the biggest deal, however the press conference held for the announcement provided us some insight into the typically secretive company. For starters, Blue Origin’s efforts have thus far been focused on space tourism, much like Virgin Galactic’s. Indeed all their previous craft, including the latest New Shepard design, were suborbital craft designed to take people to the edge of space and back. This new launch site however is designed with much larger rockets in mind, ones that will be able to carry both humans and robotic craft alike into Earth’s orbit, putting them in direct competition with SpaceX and other private launch companies.
The new rocket, called Very Big Brother (pictured above), is slated to be Blue Origin’s first entry into the market. Whilst raw specifications aren’t yet forthcoming, we do know that it will be based on Blue Origin’s BE-4 engine, which is being co-developed with United Launch Alliance. This engine is slated to be the replacement for the RD-180, which is currently used as part of the Atlas-V launch vehicle. Comparatively speaking the BE-4 is about half as powerful as the RD-180, meaning that if the craft is similar in design to the Atlas-V its payload will be somewhere in the 4.5 to 9 tonne range to LEO. Of course this could be wildly different to what they’re planning and we likely won’t know much more until the first craft launches.
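That 4.5 to 9 tonne figure is just a back-of-the-envelope scaling: take the Atlas-V family’s roughly 9 to 18 tonne payload range to LEO and halve it for the less powerful engine. A minimal sketch of that arithmetic, assuming (mine, not Blue Origin’s) that payload scales linearly with first-stage engine power:

```python
# Back-of-the-envelope payload estimate for a BE-4 based launcher.
# Assumptions: the Atlas-V family lifts roughly 9 to 18 tonnes to LEO
# across its configurations, and payload scales linearly with
# first-stage engine power. Real vehicles won't obey this exactly.

ATLAS_V_LEO_TONNES = (9.0, 18.0)  # assumed Atlas-V payload range to LEO
BE4_VS_RD180_POWER = 0.5          # "about half as powerful" per the article

def scaled_payload(atlas_range, power_ratio):
    """Scale a payload range by the engines' relative power."""
    low, high = atlas_range
    return (low * power_ratio, high * power_ratio)

low, high = scaled_payload(ATLAS_V_LEO_TONNES, BE4_VS_RD180_POWER)
print(f"Estimated LEO payload: {low:.1f} to {high:.1f} tonnes")
```

With those inputs the sketch lands on the 4.5 to 9 tonne range quoted above; once real thrust figures for the BE-4 are published the estimate will shift accordingly.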
Interestingly the craft is going to retain the VTVL capability that its predecessors had. This is notable because no sizeable orbital craft has demonstrated that capability. SpaceX has been trying very hard to get it to work with the first stage of their Falcon-9, however they have yet to achieve a successful landing. Blue Origin likely won’t beat SpaceX to the punch on this, but it’s still interesting to see other companies adopting similar strategies in order to make their rockets reusable.
Also of note is the propellant that the rocket will use for the BE-4 engine. Unlike most rockets, which run on either liquid hydrogen/liquid oxygen or RP-1 (kerosene)/liquid oxygen, the BE-4 will use natural gas and liquid oxygen. Indeed methane has only recently been considered a viable propellant; I could not find an example of a mission that has flown using the fuel. However there must be something to it, as SpaceX is going to use it for their forthcoming Raptor engines.
I’m starting to get the feeling that Blue Origin and SpaceX are sharing a coffee shop.
It’s good to finally get some more information out of Blue Origin, especially since we now know their ambitions are far beyond that of suborbital pleasure junkets. They’re entering a market that’s now swarming with competition however they’ve got both the capital and strategic relationships to at least have a good go at it. I’m very interested to see what they do at SLC-36 as more competition in this space is a good thing for all concerned.
The last decade has not been kind to AMD. It used to be a company that was readily comparable to Intel in almost every way, having much the same infrastructure (including chip fabs) whilst producing comparable products. Today however they’re really only competitive in the low end space, surviving mostly on revenues from supplying chips for both of the current generation of games consoles. Now with their market cap hovering at the $1.5 billion mark, rumours are beginning to swirl about a potential takeover bid, something numerous companies could afford at such a cheap price. The latest rumours point towards Microsoft and, in my humble opinion, an acquisition by them would be a mixed bag for both parties involved.
The rumour surfaced from an article on Fudzilla citing “industry sources” on the matter, so there’s potential that this will amount to nothing more than just a rumour. Still, talk of an AMD acquisition by another company has been swirling for some time now, so the idea isn’t exactly new. Indeed AMD’s steadily declining stock price, one that has failed to recover ever since its peak shortly after it spun off Global Foundries, has made this a possibility for some time. A buyer hasn’t been forthcoming, but let’s entertain the idea that Microsoft is interested and see where it leads us.
As Microsoft begins to expand itself further into the devices market there’s some potential in owning the chip design process. They’re already using an AMD chip for their current generation console and, with total control over the chip design process, there’s every chance that they’d use one for a future device. There’s similar potential for the Surface, however AMD has never been the greatest player in the low power space, so there’d likely need to be some innovation on their part to make that happen. Additionally there’s no real solid offering from AMD in the mobile space, ruling out their use in the Lumia line of devices. Based just on chips alone I don’t think Microsoft would go for it, especially with the x86 licensing deal that the previous article I linked to mentions.
Always of interest to any party though will be AMD’s war chest of patents, some 10,000 of them. Whilst the revenue from said patents isn’t substantial (at least I can’t find any solid figures on it, which means it isn’t much), they always have value when the lawsuits start coming down. For a company that has billions sitting in reserve those patents might well be worth AMD’s market cap, even with a hefty premium on top of it. If that’s the only value that an acquisition will offer, however, I can’t imagine AMD, as a company, sticking around for long afterwards, unfortunately.
Of course neither company has commented on the rumour and, as of yet, there aren’t any other sources confirming it. Considering the rather murky value proposition that such an acquisition offers both companies, I honestly have trouble believing it myself. Still, the idea of AMD getting taken over seems to come up more often than it used to, so I wouldn’t put it past them to court offers from anyone and everyone who will hear them. Suffice to say AMD has been in need of a saviour for some time now; it just might not end up being Microsoft at this point.