Posts Tagged ‘memory’

Light Based Memory Paves the Way for Optical Computing.

Computing as we know it today is all thanks to one plucky little component: the transistor. This simple piece of technology, essentially an on/off switch that can be electronically controlled, is what has enabled the computing revolution of the last half century. However, it has many well-known limitations, most of which stem from the fact that it’s an electrical device and is thus constrained by the speed of electricity. That speed is about 1/100th that of light, so there’s been a lot of research into building a computer that uses light instead of electricity. One of the main challenges an optical computer faces is storage, as light is a rather tricky thing to pin down and converting it to electricity (so it can be stored in traditional memory structures) would negate many of the benefits. This might be set to change, as researchers have developed a non-volatile storage platform based on phase-change materials.


The research comes out of the Karlsruhe Institute of Technology, in collaboration with the universities of Münster, Oxford, and Exeter. The memory cell they’ve developed can be written at speeds of up to 1GHz, impressive considering most current memory devices are limited to somewhere around a fifth of that. The cell itself is made of a phase-change material (one that can shift between crystalline and amorphous states), Ge2Sb2Te5, or GST for short. When this material is exposed to a high-intensity light beam its state shifts, and that state can later be read back with a less intense beam, allowing a data cell to be written, read and erased.

One novel property the researchers discovered is that their cell is capable of storing data in more than just a binary format. The switch between amorphous and crystalline states isn’t distinct like it is in a transistor, which essentially means a single optical cell could store more data than a single electrical cell. Of course, using such cells with current binary architectures would require a controller to do the translation, but that’s not exactly a new idea in computing. A completely optical computer might not require it, however such an idea is still a long way off from a real-world implementation.
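To make that concrete, here’s a minimal sketch of how a controller might translate such a multi-level cell back into bits. The four-state encoding and the threshold values are my own illustrative assumptions, not the researchers’ actual scheme:

```python
# Hypothetical sketch: reading a multi-level optical cell as binary.
# The four states and the threshold values are illustrative assumptions,
# not the encoding used in the actual research.

def read_cell_bits(reflectivity: float) -> str:
    """Quantize a cell's measured state (0.0 to 1.0) into two bits."""
    thresholds = [0.25, 0.5, 0.75]  # boundaries between the four states
    level = sum(reflectivity >= t for t in thresholds)  # 0..3
    return format(level, "02b")

# A four-state cell holds two bits, double what a binary cell can store:
print(read_cell_bits(0.1), read_cell_bits(0.6))  # 00 10
```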

The only thing that concerns me about this is the fact that it’s based on phase-change materials. There have been numerous devices based on them, most often in the realm of storage, that purported to revolutionize the world of computing. To date, however, not one of them has managed to escape the lab, and the technology has always been a couple of years away. It’s not that they don’t work, they almost always do, more that they either can’t scale or prove prohibitively expensive to produce at volume. This light cell faces the unique challenge that a computing platform built for it doesn’t exist yet, and I don’t think it can compete with traditional memory devices without one.

It is a great step forward, however, for the realm of light-based computing. With quantum computing likely decades or centuries away from becoming a reality, and traditional computing facing more challenges than it ever has, we must begin investigating alternatives. Light-based computing is one of the most promising fields in my mind, and it’s great to see progress where it’s been so hard to come by in the past.

Intel and Micron Announce 3D Xpoint Memory.

The never-ending quest to satisfy Moore’s Law means we’re always looking for ways to make computers faster and cheaper. Primarily this focuses on the brain of the computer, the Central Processing Unit (CPU), which in most modern computers is now home to transistors numbering in the billions. All the other components haven’t been resting on their laurels either, as shown by the radical improvements in speed from things like Solid State Drives (SSDs), high-speed interconnects and graphics cards that are just as jam-packed with transistors as any CPU. One aspect that’s been relatively stagnant, however, is RAM which, whilst increasing in speed and density, has seen only iterative improvements since the introduction of the first Double Data Rate (DDR) standard. Today Intel and Micron announced 3D XPoint, a new memory technology that sits somewhere between DRAM and NAND in terms of speed.


Details on the underlying technology are a little scant at the moment, however what we do know is that instead of storing information by trapping electrons, like all current memory does, 3D XPoint (pronounced “cross point”) stores bits via a change in resistance of the memory material. If you’re like me you’d probably assume this is some kind of phase-change memory, however Intel has stated that it’s not. What they have told us is that the technology uses a lattice structure which doesn’t require transistors to read and write cells, allowing them to dramatically increase the density, up to 128Gb per die. This comes with the benefit of being much faster than the current NAND technology that powers SSDs although slightly slower than current DRAM, albeit with the added advantage of being non-volatile.

Unlike most new memory technologies, which often purport to be replacements for one type of memory or another, Intel and Micron are positioning 3D XPoint as an addition to the current architecture. Essentially your computer has several types of memory, each used for a specific purpose. There’s cache memory directly on the CPU, which is incredibly fast but very expensive, so there’s only a small amount. The second type is RAM, which is still fast but can be had in greater quantities. The last is your long-term storage, either in the form of spinning-rust hard drives or an SSD. 3D XPoint would sit between the last two, providing a kind of high-speed cache that could hold onto often-used data before it’s persisted to disk. Funnily enough the idea isn’t that novel, things like the Xbox One use a similar architecture, so there’s every chance it might end up happening.
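As a rough sketch of how such a tier could behave (my own illustration, with made-up names and structure, not anything Intel or Micron have described), reads fall through from the fastest tier to the slowest, with hot data promoted into the middle layer along the way:

```python
# Illustrative sketch of a tiered read path: RAM cache -> XPoint-style
# layer -> disk. The class and its behaviour are assumptions for
# illustration, not Intel or Micron's design.

class TieredStore:
    def __init__(self):
        self.ram = {}     # small, fastest, volatile
        self.xpoint = {}  # bigger, slower than RAM, non-volatile
        self.disk = {}    # biggest, slowest

    def read(self, key):
        if key in self.ram:
            return self.ram[key]
        if key in self.xpoint:
            return self.xpoint[key]
        value = self.disk[key]    # KeyError if it's nowhere at all
        self.xpoint[key] = value  # promote hot data into the middle tier
        return value

store = TieredStore()
store.disk["block42"] = b"..."
store.read("block42")  # slow path: read from disk, promoted on the way
store.read("block42")  # fast path: now served from the XPoint-style tier
```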

The reason this is exciting is that Intel and Micron are already going into production with the new chips, opening up the possibility of a commercial product hitting shelves in the very near future. Whilst integrating it in the way described in the press release would take much longer, due to the change in architecture, there’s a lot of potential for a new breed of SSDs based on this technology. They might be an order of magnitude more expensive than current SSDs, however there are applications where you can’t have too much speed, and for those 3D XPoint could be a welcome addition to the storage stack.

Considering the numerous technological announcements we’ve seen from other large vendors that haven’t amounted to much, it’s refreshing to see something that could hit the market in short order. Whilst Intel and Micron are still staying mum on the details, I’m sure the next few months will see more information make its way to us, hopefully closely followed by demonstrator products. I’m very interested to see what kind of tech is powering the underlying cells, as a non-phase-change, resistance-based memory would be truly novel and, once production hits at-scale levels, could fuel another revolution akin to the one we saw with SSDs all those years ago. Needless to say I’m definitely excited to see where this is heading, and I hope Intel and Micron keep us in the loop on new developments.

HP’s “The Machine” Killed, Surprising No One.

Back in the day it didn’t take much for me to get excited about a new technology. The rapid progress we saw from the late 90s through to the early 2010s had us all fervently awaiting the next big thing, as it seemed nearly anything was within our grasp. The combination of getting older and being disappointed a certain number of times hardened me against this optimism, and now I routinely try to avoid the hype for anything I don’t feel is a sure bet. Indeed I said much the same about HP’s The Machine last year, and it seems my skepticism has paid dividends, although I can’t say I feel that great about it.


For the uninitiated, HP’s The Machine was going to be the next revolutionary step in computing. Whilst the mockups would be familiar to anyone who’s seen the inside of a standard server, the components were going to be anything but, incorporating such wild technologies as memristors and optical interconnects. What put this above many other pie-in-the-sky concepts (among which I include things like D-Wave’s quantum computers, as the jury is still out on whether or not they provide a quantum speedup) is that it was based on real progress HP had made in many of those spaces in recent years. Even that wasn’t enough to break through my cynicism, however.

And today I found out I was right, god damnit.

The reasons cited were ones I was pretty sure would come to fruition, chief among them the fact that no one has been able to commercialize memristors at scale in any meaningful way. Since The Machine was supposed to be almost solely based on that technology, it should be no surprise that it’s been canned on the back of it. Instead of the moonshot-style project HP announced last year, it’s now going to be some form of technology demonstrator platform, ostensibly to draw software developers across to the new architecture in order to get them building on it.

Unfortunately this will likely end up being not much more than a giant server with a silly amount of RAM stuffed into it, 320TB to be precise. Whilst this may attract some people to the platform out of curiosity, I can’t imagine anyone being willing to shell out the requisite cash on the hope that they’d be able to use a production version of The Machine sometime down the line. It would be like the Sony Cell processor all over again, except instead of costing you maybe a couple of thousand to experiment with, you’d be in for tens of thousands, maybe hundreds of thousands, just to get your hands on some experimental architecture. HP might attempt to subsidise that, but considering the already downgraded vision I can’t fathom them throwing even more money at it.

HP could very well turn around in 5 or 10 years with a working prototype and make me look stupid and, honestly, if they did I would very much welcome it. Whilst predictions of Moore’s Law ending happen at an inverse rate to them coming true (read: not at all), that doesn’t mean there aren’t a few ceilings on the horizon that will need to be addressed if we want to continue this rapid pace of innovation. HP’s The Machine was one of the few ideas that could’ve pushed us ahead of the curve significantly, and its demise, whilst completely expected, is still a heart-wrenching outcome.

Forgetting Might be an Adaptive Advantage.

Nearly all of us are born with what we’d consider less-than-ideal memories. We struggle to remember where our keys are, draw a blank on that new coworker’s name and sometimes pause much longer than we’d like to recall a detail that should be front of mind. The idealised pinnacle, the photographic (or, more accurately, eidetic) memory, always seems an elusive goal, something you have to be born with rather than achieve. However it seems our ability to forget might actually be an evolutionary adaptation, enabling us to remember the pertinent details that help us survive whilst suppressing those that might otherwise hinder us.


The idea isn’t a new one, having existed in some form since at least 1997, but it’s only recently that researchers have had the tools to study the mechanism in action. It’s rather difficult to figure out which memories are being forgotten for adaptive reasons, i.e. to improve the survival of the organism, and which are simply forgotten due to other factors. The advent of functional Magnetic Resonance Imaging (fMRI) has given researchers a much better idea of what the brain is doing at any one point, allowing them to set up situations to see what the brain does when it forgets something. The results are quite intriguing, demonstrating that at some level forgetting might be an adaptive mechanism.

Back in 2007, researchers at Stanford University investigated the prospect that adaptive forgetting is a mechanism for reducing the amount of brain power required to select the right memories for a particular situation. The hypothesis goes that remembering is an act of selecting a specific memory for a goal-related activity. Forgetting then functions as an optimization mechanism, allowing the brain to more easily select the right memories by suppressing competing ones that might not be optimal. The research supported this notion, showing decreased activity in the anterior cingulate cortex, a region that activates when people are weighing choices (like figuring out which memory is relevant).

More recent research into this phenomenon, conducted by researchers at the University of Birmingham and various institutes in Cambridge, focused on finding out whether the active recollection of one memory hinders the remembering of others. Essentially, the act of remembering a specific memory would come at the cost of other, competing memories, which in turn would lead to them being forgotten. The researchers had subjects view 144 picture-word associations and then trained them to remember 72 of them (whilst inside an fMRI machine). The subjects were then given another association for each word, which served as the “competitor” to the first.

The results showed some interesting findings, some of which may sound obvious at first glance. Attempting to recall the second word association led to a detriment in the subjects’ ability to recall the first. That might not sound groundbreaking on its own, but subsequent testing showed a progressive detriment to the recollection of competing memories, demonstrating that they were being actively repressed. Further to this, the researchers found that their subjects’ brain activity was lower for trained images than for ones that weren’t part of the initial training set, another indication that these memories were being actively suppressed. There was also evidence that the trained memories showed the most forgetting on average, as well as increased activity in a region of the brain known to be associated with adaptive forgetting.

Whilst this research might not give you any insight into how to improve your memory, it does give us a rare look into how our brain functions and why it behaves in ways we believe to be sub-optimal. Potentially in the future there could be treatments available to suppress that mechanism, though what ramifications that might have on actual cognition is anyone’s guess. Needless to say it’s incredibly interesting to find out why our brains do the things they do, even if we wish they did the exact opposite most of the time.

DDR4 Appears on The Market; I Realise I’ve Been Under a Rock.

Whilst I don’t spend as much time as I used to keeping current with all things PC hardware, I still maintain a pretty good working knowledge of where the field is going. That’s partly due to my career being in the field (although I’m technically a services guy), but mostly it’s because I love new tech. You’d think then that DDR4, the next generation of PC memory, making its commercial debut wouldn’t be much of a surprise to me, but I had absolutely no idea it was in the pipeline. Indeed, had I not been building out a new gaming rig for a friend of mine I wouldn’t have known it was coming, nor that I could buy it today if I was so inclined.

Professional Memory Holder

Double Data Rate Generation 4 (DDR4) memory is the direct successor to the current standard, DDR3, which has been in widespread use since 2007. Both standards (indeed, pretty much all memory standards) were developed by the Joint Electron Device Engineering Council (JEDEC), which has been working on DDR4 since about 2005. The reasoning behind the long lead times on new standards like this is complicated, but it comes down to getting everyone to agree on the standard, manufacturers developing products around it and, finally, those products making their way into the hands of consumers. Thus whilst new memory modules come and go with the regular tech cycle, the standards driving them typically remain in place for the better part of a decade or two, which is probably why this writer neglected to keep current on it.

In terms of actual improvements DDR4 seems like an evolutionary step forward rather than a revolutionary one. That being said, the improvements introduced with the new specification are nothing to sneeze at, one of the biggest being a reduction in the voltage (and thus power) that the specification requires. Typical DDR4 modules will use 1.2V compared to DDR3’s 1.5V, and the low-voltage variant, typically seen in low-power systems like smartphones, goes all the way down to 1.05V. To end consumers this won’t mean too much, but for large-scale deployments the savings from running this new memory add up very quickly.
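As a back-of-the-envelope illustration of those savings (my own numbers, and a simplification, since DRAM power depends on more than supply voltage), dynamic power scales roughly with the square of the voltage:

```python
# Back-of-the-envelope sketch: dynamic power scales roughly with the
# square of the supply voltage. This ignores frequency, I/O and refresh
# effects, so treat the output as illustrative only.

ddr3_v, ddr4_v = 1.5, 1.2
relative_power = (ddr4_v / ddr3_v) ** 2
print(f"DDR4 dynamic power vs DDR3: {relative_power:.0%}")        # 64%
print(f"Reduction from voltage alone: {1 - relative_power:.0%}")  # 36%

# Scaled across a hypothetical fleet of 10,000 DIMMs at ~4W each:
saved_kw = 10_000 * 4.0 * (1 - relative_power) / 1000
print(f"Hypothetical fleet saving: {saved_kw:.1f} kW")  # ~14.4 kW
```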

As you’d expect there’s also been a bump in the operating speed of DDR4 modules, ranging from 2133MHz all the way up to 4266MHz. Essentially the lowest tier of DDR4 memory will match the top performers of DDR3, and the amount of headroom for future development is quite significant. This will have a direct impact on the performance of systems powered by DDR4 and, whilst most consumers won’t notice the difference, it’s definitely going to be a defining feature of enthusiast PCs for the next couple of years. I know I updated my dream PC specs to include it even though the first generation of products is only just hitting the market.
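To put those figures in more tangible terms, here’s the usual peak-bandwidth arithmetic for a standard 64-bit memory channel (the quoted “MHz” figures are really mega-transfers per second):

```python
# Peak theoretical bandwidth of a single 64-bit DDR channel:
# transfers per second x 8 bytes per transfer. Real-world throughput
# is lower; this is just the headline arithmetic.

def peak_bandwidth_gbs(mega_transfers: int) -> float:
    return mega_transfers * 8 / 1000  # MB/s -> GB/s

for mt in (2133, 4266):
    print(f"DDR4-{mt}: ~{peak_bandwidth_gbs(mt):.1f} GB/s per channel")
# DDR4-2133: ~17.1 GB/s, DDR4-4266: ~34.1 GB/s
```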

DDR4 chips are also meant to be much denser than their DDR3 predecessors, especially considering the specification accommodates 3D layering technologies like those behind Samsung’s V-NAND. Many are saying this will lead to DDR4 being cheaper than DDR3 for a comparable amount of memory, however right now you’ll be paying about a 40% premium on pretty much everything if you want to build a system around the new memory. This is to be expected, and whilst I can eventually see DDR4 eclipsing DDR3 on a price-per-gigabyte basis, that won’t happen for several years. DDR3 has 7 years’ worth of economies of scale built up, and it won’t become irrelevant for a very long time.

So whilst I might be a little shocked that I was so out of the loop that I didn’t know a new memory standard had made its way into reality, I’m glad it has. The improvements might be incremental rather than a bold leap forward, but progress in this sphere is so slow that anything is worth celebrating. The fact that you can build systems with it today is just another bonus, one that I’m sure is making dents in geeks’ budgets the world over.

Samsung Starts Producing V-NAND, Massive SSDs Not Far Off.

I’ve been in the market for a new PC for a little while now, so occasionally I’ll indulge in a little hypothetical system building to figure out how much I want to spend (lots) and what kind of computer I’ll get out of it (a super fast one). One of the points that got me unstuck was that whilst I can get semi-decent performance out of the RAID10 set which stores most of my stuff, it’s nowhere near the performance of the SSD that holds the OS and my regularly used applications. Easy, I thought, I’ll just RAID together some SSDs and get the performance I want with enough space to hold all my games and other miscellany. Thing is, SSDs don’t like being in RAID sets (thanks to TRIM not working with them) unless it’s RAID0, and I’m not terribly keen on halving the MTBF just to get some additional space. No, what I need is a bigger drive, and it looks like Samsung is ready to deliver on that.


That little chip is the key to realizing bigger SSDs (among other things). It’s a new type of flash memory called V-NAND, based on a new charge trap flash (CTF) gate technology, and Samsung has just started mass producing it.

What’s really quite groovy about this new kind of NAND chip is that unlike other computer chips, which are planar in nature (i.e. all the transistors lie on a single plane), V-NAND (as you can likely guess) is actually a vertical stack of planar layers. This allows for incredible densities inside a single chip, with this first generation clocking in at a whopping 128GB. Putting that in perspective: the drive I’m currently using has the same capacity as that single chip, which means that if I replaced its memory with this new V-NAND I’d be looking at a 1TB drive. For tech heads like me, even hearing that something like that was theoretically possible would make us weak at the knees, but these are chips you can start buying today.

Apparently this isn’t their densest chip either, as the new 3D NAND tech allows them to go up to 24 layers high. I can’t seem to find a reference stating how many layers are in the current chip, so I’m not sure how dense we’re talking, but it seems this will be the first chip among many and I doubt they’ll stop at 24.

As if all that wasn’t enough, Samsung is also touting higher reliability, anywhere from 2x to 10x, as well as at least double the write performance of traditional NAND packages. SSDs are at the point where differences in read/write speeds are almost invisible to the end user, so that may be moot for many, but for system builders it’s an amazing leap forward. Considering we can already get some pretty amazing IOPS from the SSDs available today, doubling that just means we can do a whole lot more with a whole lot less hardware, and that’s always a good thing. Whether those claims hold up in the real world remains to be seen, however there’s a pretty close relationship between data density and increased throughput.

Unfortunately, whilst these chips are hitting mass production today, I couldn’t find any hint of which partners are creating drives based around them or whether Samsung is working on one themselves. They’ve been releasing some pretty decent SSDs recently, indeed they were the ones I was eyeing off for my next potential system, so I can’t imagine they’d be too far off given they have all the expertise required. Indeed, they just recently released a gigantic 1.6TB SSD that uses the new PCIe-based NVMe interface to deliver some pretty impressive speeds, so I wouldn’t be surprised if their next drive comes out on that platform using this new V-NAND.

Developments like this are a testament to the fact that Moore’s Law will keep on keeping on despite the numerous doubters ringing its death knell. With this kind of technology in mind it’s easy to imagine it being applied elsewhere, increasing density in other areas like CPU dies and volatile memory. Of course porting such technology is non-trivial, but I’d hazard a guess that chip manufacturers worldwide are chomping at the bit to get in on this, and I’m sure Samsung will be more than happy to license the patents to them.

For a princely sum, of course 😉


Remember Me: The Dark World of No Pain, Regrets or Remorse.

My previous post on games and female protagonists sparked an interesting conversation among my friends as we tried to recall all the games we’d played that had either a female lead or at least a woman in a major role in the game’s story. Even though we play a fairly broad range of titles, the number of strong female characters we could name was dwarfed by their male counterparts, something that seems particularly odd now that 45% of all gamers are women. Thankfully that seems to be changing (albeit slowly) as games like Remember Me become more frequent, even if they have to fight for their very existence.

Remember Me Title Screen

You awake in an all-white cell, your memory being wiped as part of the intake process for the prison you’re being kept in. A doctor approaches and starts asking you rudimentary questions, trying to figure out just how much of yourself remains after the treatment. It seems you’re somewhat resistant to the Sensen’s memory-wiping ability and need to be sent elsewhere for further treatment. Whilst you’re on your way to what appears to be your final doom, however, you’re contacted by a man called Edge who helps you escape. The world you’re then thrust into is a dark and terrifying one under the control of the Memorize corporation; not directly, but simply because their technology allows anyone to forget the most painful moments of their life, turning them into memory junkies. Edge wants you to fight them, and you can’t shake the compulsion to do so.

Remember Me is pretty much what I’ve come to expect from current-generation console titles, making full use of all the hardware power available to it. The game incorporates all the modern effects: heavy motion blur, high-resolution textures and its own glitchy overlay, whilst keeping its frame rate at a solid 60fps. I will take slight issue with the lip synching as, outside the cutscenes, it’s either done extremely poorly or not at all. It’s really the only let-down in the whole audio/visual experience, as pretty much everything else is spot on.

Remember Me Epic Glitch

The gameplay of Remember Me is a mix of beat-em-up combat, logic puzzles and a unique mechanic whereby you remix someone’s memories in order to make them do what you want. Whilst the fundamentals of each of these core mechanics will be familiar to most long-time gamers, they all have their own twist that makes them unique to the Remember Me world. By far the most intricate is the combat system, which you can heavily customize to suit your style of play. The logic puzzles and memory remixing are somewhat simplistic by comparison but are still an enjoyable part of the overall experience.

Combat follows the Arkham Asylum/Arkham City model of beat-em-up, where you spend the majority of your time attempting to land combos whilst enemies throw themselves at you. It’s a little more nuanced than that, though, being reminiscent of fighting-game combos where you must hit every button at the right time and in the right order to pull one off. The combo aid at the bottom of the screen helps a lot, and it’s also far more forgiving than any fighting game I’ve ever played. The really cool thing about the combat system is the customization, allowing you to change how a combo works and what benefits landing it will give you.

Remember Me Combo Lab

You have 4 types of “pressens” which are mapped to the buttons on the controller. The first is the damage pressen which, as its name implies, increases the damage dealt by that particular strike. Regeneration pressens give you health upon landing a hit, and cooldown pressens reduce the time between uses of your special abilities (more on those later). The final one is the chain pressen, which inherits the effects of the pressens that came before it, making it a powerful tool for creating combos that are truly crazy. There’s also the twist of pressens having more effect the further along in the combo they sit which, when you’re dealing with an 8-hit combo, can make a pressen that felt useless suddenly become really viable. You can also chop and change between pressens during combat, allowing you to adjust your fighting style to the challenges at hand.
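As a rough model of how I understand the system to work (my own reconstruction with made-up numbers, certainly not the game’s actual formula), each pressen’s effect scales with its slot in the combo, and a chain pressen copies whatever preceded it:

```python
# Rough model of the pressen system as I understand it. The effect
# values and slot scaling are made up; the game's real formula differs.

def combo_effects(pressens):
    """Sum each pressen's effect, weighted by its slot in the combo."""
    totals = {"damage": 0.0, "regen": 0.0, "cooldown": 0.0}
    previous = None
    for slot, kind in enumerate(pressens, start=1):
        if kind == "chain" and previous:
            kind = previous         # a chain pressen inherits its predecessor
        totals[kind] += 1.0 * slot  # later slots are worth more
        previous = kind
    return totals

# The chain in slot 4 acts as a second damage pressen at slot-4 weight:
print(combo_effects(["damage", "regen", "damage", "chain"]))
# {'damage': 8.0, 'regen': 2.0, 'cooldown': 0.0}
```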

You’ll be doing this more often than you think: whilst towards the end you’ll have enough pressens and combos available to cover any situation, initially you’ll be short of one or the other at any given time. My original 8-hit combo felt like the perfect fit for pretty much any situation, but when you’re surrounded by 8 enemies at a time it became incredibly hard to land, and thus needed to be reworked into a 5-hit and a 3-hit combo. There are also certain types of enemies that require you to build a combo just to take them down, especially if their death relies on one of your special abilities.

Remember Me S-Pressens

Augmenting your regular punches and kicks are s-pressens, special abilities that let you deal with the game’s varying challenges much more easily and quickly than you otherwise could. They’re unlocked gradually, always as part of the game throwing a new type of enemy at you that basically requires that s-pressen to take down, and how you use them is really up to you. They rely on focus, shown as the white/blue bar above, which is generated whenever you hit or are hit by someone. In the beginning they’re quite cool and feel like the ultimate get-out-of-jail-free card, but eventually their effectiveness starts to drop off and their use becomes something of a necessity.

This is probably where Remember Me starts to struggle, as ramping up the difficulty involves nullifying the abilities that have been granted to you whilst throwing ever-increasing numbers of enemies at you. It’s something the whole games industry is struggling with at the moment, the idea of providing challenge whilst keeping the player engaged, but simply throwing more bodies at the player or removing their options sits firmly towards the anti-fun end of the spectrum and should honestly be avoided. Of course you could argue that, due to its hack-’n’-slash nature, Remember Me implies this is how the challenge will be ramped up, but I find that a poor excuse for a game that incorporates such a nuanced combat system in the first place. I don’t pretend to have a solution to this, indeed even the game designers I know say it’s something the best struggle to achieve, but it’s definitely one of those things that counts against a game in my view.

Remember Me Combat

The memory-remixing puzzles are quite awesome, as they play on the idea of small changes having big impacts on how something plays out. Whilst the outcomes are relatively fixed, i.e. there’s no emergent behaviour possible in any of them, the different outcomes are quite varied, and the difference between a successful remix and a failure can be something as simple as doing something too early or too late. There’s also a ton of red herrings in all of them, things that when modified won’t do anything at all, which keeps you second-guessing your decisions right up until everything falls into place. I can’t really talk about it much more without spoiling some of the puzzles, but suffice to say it’s really good, despite not featuring as prominently as I thought it would.

Outside of the memory remixing there’s a bunch of puzzles that make use of Remembranes, fragments of memory you purloin from other people in order to move forward. They start off as easy timing puzzles, usually involving avoiding detection by robots that move in a predictable pattern, but they eventually graduate into riddles that unlock codes, forcing you to decipher the ramblings of a man driven insane. They’re a small part of the game, however, and you can usually stumble through them without thinking too hard, although I’ll admit I got caught on the second-to-last puzzle involving the hominus/m3morize/evolutio words.

Remember Me Remembrane

One point that bears mentioning is the strange, strange world Remember Me exists in. I’m not talking about the major plot points revolving around the Memorize technology, more that whilst the developers have strived to create a world that feels alive, they’ve in fact created one that’s simply weird. There are robots everywhere, and I mean everywhere, but apart from the patrol robots not a single one will react to you, not even in places where you’re not supposed to be (despite you being a wanted criminal). They’ve obviously been put there to make the city feel alive without the developers having to code in a lot of people (who do exist, but are few and far between), but instead it creates this odd atmosphere where you expect them to react to you and they simply don’t. It would probably have been better to leave them out entirely.

Remember Me’s story is quite gripping once you get over the stumbling block of Nilin implicitly trusting Edge and doing everything he asks. The inter-chapter monologues touch on this very point, helping to bridge some of the more glaring plot issues, but it essentially leaves Nilin without any particular motivation for a good chunk of the game. It does morph into a much richer and more detailed story towards the end, even though quite a lot is still left unclear, and the last couple of hours were intense enough for everyone in my house to stop what they were doing to watch everything through to the end. It’s definitely far above what I’ve come to expect from these kinds of games, and Dontnod Entertainment should be commended for creating a strong female lead, even if there are a few rough edges.

Remember Me Final Episode

For a new IP Remember Me does incredibly well, showcasing some incredibly refined game mechanics and a top-notch story that combine to produce a well-rounded, highly polished experience. It still has some teething issues, not uncommon for games trying out new ideas, but it manages to pull the majority of them off without sacrificing other aspects of the game. A strong female lead is also a welcome addition, something which hopefully won’t be considered a controversial choice for too much longer. I thoroughly enjoyed my time with Remember Me and would recommend it to anyone seeking a fresh experience unlike anything that’s come before it.

Rating: 9.25/10

Remember Me is available on PlayStation 3, Xbox 360 and PC right now for $79, $79 and $49.99 respectively. Game was played on the PlayStation 3 on the Errorist Agent difficulty, with around 8 hours of total play time and 39% of the achievements unlocked.

Virtual Machine CPU Over-provisioning: Results From The Real World.

Back when virtualization was just starting to make headway into the corporate IT market, the main aim of the game was consolidation. Vast quantities of CPU, memory and disk resources were being squandered as servers sat idle for the vast majority of their lives, barely ever using the capacity assigned to them. Virtualization gave IT shops the ability to run many low-utilization servers on the one box, significantly reducing hardware costs whilst providing a whole host of other features. It followed then that administrators looked towards over-provisioning their hosts, i.e. creating more virtual machines than the host was technically capable of handling.

The reason this works is a feature of virtualization platforms called scheduling. In essence, a virtual machine on an over-provisioned host is not guaranteed to get resources when it needs them; instead it’s scheduled on and off the physical CPUs in order to keep it and all the other virtual machines running properly. Surprisingly this works quite well, as for the most part virtual machines spend a good part of their lives idle, and the virtualization platform uses this information to schedule busy machines ahead of idle ones. Recently I was asked to find out the limits of a new piece of hardware we had procured, and I discovered some rather interesting results.

The piece of kit in question is a Dell M610x blade server with the accompanying chassis and interconnects. The specifications we got were pretty good: a dual-processor arrangement (2 x Intel Xeon X5660) with 96GB of memory. What we were trying to find out was what guidelines we should have around how many virtual machines could comfortably run on such hardware before performance started to degrade. No such testing had been done with previous hardware, so I was working in the dark on this one and devised my own test methodology to figure out the upper limits of over-provisioning in a virtual world.

The primary performance bottleneck for any virtual environment is the disk subsystem; you can have the fastest CPUs and oodles of RAM and still be torn down by slow disk. However most virtual hosts use some form of shared storage, so testing that was out of the equation. The two primary resources we’re left with are CPU and memory, and the latter is already a well-known problem space. I wasn’t able to find any good articles on CPU over-provisioning, however, so I devised some simple tests to see how the system would perform under a load well above its capabilities.

The first test was a simple baseline: since the server has 12 physical cores available (HyperThreading might say you get more, but that’s a pipe dream), I created 12 virtual machines, each with a single core, then fully loaded their CPUs. Shown below is a stacked graph of each virtual machine’s ready time¹, a representation of how long the virtual machine was ready to execute an instruction but could not get scheduled onto a physical CPU.
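For anyone wanting to reproduce this: the raw counter vSphere exposes is a summation, milliseconds of ready time accumulated over each sampling interval, so it needs converting before you can read it as a percentage. A minimal sketch, assuming the standard 20-second realtime stats interval:

```python
# Converting vSphere's cpu.ready.summation counter (milliseconds of
# ready time accumulated per sampling interval) into a percentage.
# Assumes the 20-second realtime interval; adjust for other rollups.

def ready_percent(ready_ms: float, interval_s: int = 20) -> float:
    """Percentage of the interval a vCPU was ready but not scheduled."""
    return ready_ms / (interval_s * 1000) * 100

# A hypothetical VM accumulating 2,000ms of ready time in one interval:
print(f"{ready_percent(2000):.1f}%")  # 10.0% of its time spent waiting
```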

The initial part of this graph shows the machines all at idle. You’d think their ready times would be zero at that stage since there’s no load on the server; however, VMware’s hypervisor knows when a virtual machine is idle and won’t schedule it on as often, as idle loops are simply wasted CPU cycles. The jumpy period after that is when I was starting up a couple of virtual machines at a time and, as you can see, those virtual machines’ ready times drop to 0. The very last part of the graph shows the ready time rocketing down to nothing for all the virtual machines, with the top grey section being the ready time of the hypervisor itself.

This test doesn’t show anything revolutionary; it’s pretty much the expected behaviour of a virtualized system. It does, however, provide a solid baseline from which to draw conclusions in further tests. The next test doubled the workload on the server, increasing the virtual core count from 12 to a whopping 24.

For comparison’s sake, the first graph’s peak is equivalent to the first peak of the second graph. What this shows is that when the CPU is oversubscribed by 100%, ready times rocket through the roof, with virtual machines waiting up to 10 seconds in some cases to get scheduled back onto the CPU. The average was somewhere around half a second which, for most applications, is an unacceptable amount of time. Just imagine using your desktop and having it freeze for half a second every 20 seconds or so; you’d call it unusable. Taking this into consideration, we know there must be a happy medium somewhere in between. The next test aimed right at the middle of those two extremes, putting 18 virtual cores on the 12-core host.

Here’s where it gets interesting. The graph depicts the same test running over the entire period, but as you can see there are very distinct sections depicting what I call different modes of operation. The lower end of the graph shows a time when the scheduler is hitting its marks and wait times are overall quite low. The second is when the scheduler gives much more priority to the virtual machines thrashing their cores, and the machines that aren’t doing anything get pushed to the side. In both instances, however, the 18 running cores get serviced within a maximum of 20 milliseconds or so, well within the acceptable range of most programs and user-experience guidelines.

Taking all this into consideration, it’s reasonable to say that the maximum you can oversubscribe a virtual host’s CPU is 1.5 times the number of physical cores. You can extrapolate further by taking average load into consideration: if it’s constantly below 100%, divide the oversubscribed core count by that percentage. For example, if the average load of these virtual machines was 50%, then theoretically you could support 36 single-core virtual machines on this particular host. Once you get into very high CPU counts, things like scheduling overhead start to come into play, but as a hard and fast rule it works quite well.
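Expressed as a quick calculator (treating the 1.5x figure from these tests as a starting guideline rather than a guarantee):

```python
# Rule of thumb from the tests above: physical cores x 1.5, divided by
# the expected average load. A guideline, not a guarantee.

def max_vcpus(physical_cores: int, avg_load: float = 1.0) -> int:
    """Estimate how many single-core VMs a host can comfortably support."""
    return int(physical_cores * 1.5 / avg_load)

print(max_vcpus(12))       # 18 vCPUs at 100% average load
print(max_vcpus(12, 0.5))  # 36 vCPUs at 50% average load
```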

If I’m honest, I was quite surprised by these results; I thought that once I put a single extra thrashing virtual machine on the server it would fall over in a screaming heap under the additional load. It seems, though, that VMware’s scheduler is smart enough to service a load much higher than what the server should be capable of without adversely affecting the other virtual machines. This is especially good news for virtual desktop deployments, where the limiting factor has typically been the number of CPU cores available. If you’re an administrator of a virtual environment, I hope you found this informative and that it helps you plan future deployments.

¹CPU ready time was chosen as the metric because it most aptly showcases a server’s ability to service a virtual machine’s request for CPU in a heavy scheduling scenario. Usage wouldn’t be an accurate metric since, in all these tests, the blade was 100% utilized no matter the number of virtual machines running.

Unravelling Your Mind.

The brain is a wonderfully complicated piece of organic matter, and we’re still in the early stages of understanding how it all functions. For the most part the basic components, like neurons and synapses, are well understood; however when the whole thing comes together we get some extraordinary emergent behaviour. One of the most interesting behaviours we all experience is dreaming, and it was this behaviour that caused me to analyse the last year of my life whilst I was on Turtle Island.

Among the many theories about why we dream, a couple really stand out. The first is that dreams are in fact your brain’s way of training you for certain situations (Coutts’ theory). Whilst this might not make sense for the more fantastical dreams, such as flying, I can remember many dreams that mirrored real experiences later in life. Whilst I can’t truly estimate how helpful those dreams were, some of them did get me thinking about certain ideals and beliefs I had held, sometimes resulting in me discarding them completely. It definitely feels like dreams serve some form of cognitive evolution, strengthening yourself against the world.

The second, and I believe most important, is that dreaming is the brain processing, linking and organising your memories (R. Stickgold et al., “Sleep, Learning, and Dreams: Off-line Memory Reprocessing”). It goes hand in hand with studies showing that a prolonged lack of sleep affects memory. This also makes quite a bit of sense to me since, for the most part, my dreams usually have some theme from the day woven into them. You can imagine my surprise, then, when on the second night on the island I had, and can distinctly remember, around 15 separate dreams with themes I could trace back to events that happened well over a year ago. It didn’t take me long to formulate a theory on what happened, based on the two dream theories I’ve described.

Now I don’t usually think of myself as a stressed person; in fact I usually thrive in stressful situations. Even so, the last 6 months could easily be described as some of the most stressful of my life, what with the wedding, investment purchases going awry and almost being unemployed. As far as I could tell I cope with stress physically pretty well, but this series of dreams, and the mental clarity I had afterwards, leads me to believe my mind was somehow pent up processing my daily life, in essence backed up on downtime processing. With everything being provided for me and the stress of the last 6 months far behind me, my brain went into overdrive catching up on processing and linking those memories. It lines up nicely with the fact that I had been waking up tired for about the past 4 months no matter how much sleep I got, which would seem to indicate my brain wanted more time to catch up on memory processing.

The next few days saw my thoughts become a lot more free-flowing and the conversations at the dinner table all the more interesting. I’ve never really gone on a holiday where everything was provided for me, so I guess the combination of relaxation and not having to think about anything allowed my mind to unravel itself from the tangled mess I’d gotten it into over the past year. I guess the moral of the story is that we all need some downtime to let our brains relax and recover from the daily grind, and mine just so happened to be the honeymoon.

Or maybe it was the Kava… 🙂