Posts Tagged 'ram'

HP’s “The Machine” Killed, Surprising No One.

Back in the day it didn’t take much for me to get excited about a new technology. The rapid progression we saw from the late 90s through to the early 2010s had us all fervently awaiting the next big thing, as it seemed nearly anything was within our grasp. The combination of getting older and being disappointed a certain number of times hardened me against this optimism, and now I routinely try to avoid the hype for anything I don’t feel is a sure bet. Indeed I said much the same about HP’s The Machine last year, and it seems my skepticism has paid dividends, although I can’t say I feel that great about it.


For the uninitiated, HP’s The Machine was going to be the next revolutionary step in computing. Whilst the mockups would be familiar to anyone who’s seen the inside of a standard server, the components inside were going to be anything but, incorporating such wild technologies as memristors and optical interconnects. What put this above many other pie-in-the-sky concepts (among which I include things like D-Wave’s quantum computers, as the jury is still out on whether or not they provide a quantum speedup) is that it was based on real progress HP had made in many of those spaces in recent years. Even that wasn’t enough to break through my cynicism, however.

And today I found out I was right, god damnit.

The reasons cited were ones I was pretty sure would come to fruition, namely that no one has been able to commercialize memristors at scale in any meaningful way. Since The Machine was supposed to be based almost solely on that technology it should be no surprise that it’s been canned on the back of that. Now, instead of being the moonshot-style project that HP announced last year, it’s going to be some form of technology demonstrator platform, ostensibly to draw software developers across to this new architecture in order to get them to build on it.

Unfortunately this will likely end up being not much more than a giant server with a silly amount of RAM stuffed into it, 320TB to be precise. Whilst this may attract some people to the platform out of curiosity, I can’t imagine anyone would be willing to shell out the requisite cash on the hope that they’d be able to use a production version of The Machine sometime down the line. It would be like the Sony Cell processor all over again, except instead of costing you maybe a couple of thousand dollars to experiment with, you’d be in for tens of thousands, maybe hundreds of thousands, just to get your hands on an experimental architecture. HP might attempt to subsidise that but, considering the already downgraded vision, I can’t fathom them throwing even more money at it.

HP could very well turn around in 5 or 10 years with a working prototype to make me look stupid and, honestly, if they did I would very much welcome it. Whilst predictions about Moore’s Law ending happen at an inverse rate to them coming true (read: not at all), that doesn’t mean there aren’t a few ceilings on the horizon that will need to be addressed if we want to continue this rapid pace of innovation. HP’s The Machine was one of the few ideas that could’ve pushed us significantly ahead of the curve, and its demise, whilst completely expected, is still a heart-wrenching outcome.

The Memristor is Almost Ready For Prime Time.

For the amount of NVRAM that’s used these days, there’s been comparatively little innovation in the sector. For the most part the advances have come from the traditional avenues, die shrinks and new gate technologies, with the biggest advance, 3D construction, only happening last week. There have been musings about other kinds of technology for a long time, like memristors, which had their first patent granted back in 2007 and were supposed to be making their way into our hands late last year, but that never eventuated. However news comes today of a new memory startup that’s promising a lot of things and, whilst they don’t say it directly, it looks like they might be one of the first to market with memristor-based products.


Crossbar is a new company that’s been working in stealth for some time on a new type of memory product which, surprisingly, isn’t anything particularly revolutionary. It’s called Resistive RAM (RRAM) and a little research shows that there have been companies working on this idea as far back as 2009. It’s based around a fairly interesting phenomenon whereby a dielectric, an electrical insulator, can be made to conduct through the application of a high voltage. This forms a low-resistance filament which can then be reset, breaking the connection, and then set again with another high-voltage jolt. This idea lends itself well to memory applications as the two states translate perfectly to binary and, if the specifications are anything to go by, the performance that will come out of them should be quite spectacular.
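To make that set/reset-to-binary mapping concrete, here’s a toy sketch of an RRAM cell. The voltage thresholds are entirely made-up illustrative values, not Crossbar’s specifications, and real cells also depend on things like compliance currents that this ignores:

```python
# Toy model of an RRAM cell: a high positive voltage forms a conductive filament (SET),
# an opposite jolt ruptures it (RESET), and the resistance state is read back as a bit.
# Thresholds below are illustrative assumptions only.
SET_V, RESET_V = 3.0, -3.0

class RRAMCell:
    def __init__(self):
        self.filament = False            # no filament: high resistance state

    def apply(self, voltage):
        if voltage >= SET_V:             # SET: grow the low-resistance filament
            self.filament = True
        elif voltage <= RESET_V:         # RESET: break the filament again
            self.filament = False

    def read(self):
        return 1 if self.filament else 0  # low resistance reads as 1, high as 0

cell = RRAMCell()
cell.apply(3.3)
print(cell.read())   # 1
cell.apply(-3.3)
print(cell.read())   # 0
```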

If this is sounding familiar then you’re probably already familiar with the idea of memristors. These are the fourth fundamental circuit element, postulated back in 1971 by Leon Chua and made real by HP in 2008. In a basic sense their resistance is a function of the current that has flowed through them, and when the current is removed that resistance is remembered, hence the name. As you can see this describes the function of RRAM pretty well, and there is a solid argument to be made that all RRAM technologies are in fact memristors. Thus, whilst it’s pretty spectacular that a start-up has managed to perfect this technology to the point of producing it on a production fab, it’s actually technology that’s been brewing for quite some time, and one that everyone in the tech world is excited about.
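If you want a feel for what “resistance as a function of the current that has flowed through it” means in practice, here’s a minimal numerical sketch of the idealised linear dopant-drift model HP used to describe their device. All parameter values are illustrative assumptions rather than real device data:

```python
# Idealised linear dopant-drift memristor model (Strukov et al. style), purely to show
# that resistance tracks the charge that has flowed through the device and stays put
# once the current stops. Parameter values are illustrative only.
R_ON, R_OFF = 100.0, 16_000.0   # resistance when fully doped / fully undoped (ohms)
MU_V, D = 1e-14, 1e-8           # dopant mobility (m^2 s^-1 V^-1), film thickness (m)
DT = 1e-5                       # integration time step (s)

def final_resistance(current_trace, x=0.1):
    """Euler-integrate the doped fraction x (0..1) over a current trace, return resistance."""
    for i in current_trace:
        x += MU_V * R_ON / D**2 * i * DT   # state change is proportional to charge flowed
        x = min(max(x, 0.0), 1.0)          # the dopant boundary can't leave the device
    return R_ON * x + R_OFF * (1.0 - x)

print(final_resistance([1e-3] * 10_000))        # sustained current drives resistance toward R_ON
print(final_resistance([0.0] * 10_000, x=0.9))  # no current: the previous state is "remembered"
```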

Crossbar’s secret sauce likely comes from their fabrication process, as they claim that the way they create their substrate means they should be able to stack layers, much in the same way Samsung can now do with their V-NAND. This is exciting because HP previously alluded to the fact that memristor-based storage could be made much more dense than NAND, several orders of magnitude more dense to be precise, and considering the density gains Samsung got with their 3D chips a layered memristor device’s storage capacity could be astronomical. Indeed Crossbar claims as much, with up to 1TB for a standard chip that could be stacked multiple times, enabling multiple terabytes on a single chip. That puts good old fashioned spinning rust on notice, as it just couldn’t compete, even when it comes to archival storage. Of course the end price will be a big factor in this, but that kind of storage potential could drive the cost per GB through the floor.

So the next couple of months are going to be quite interesting as we have Samsung, the undisputed king of NAND, already in the throes of producing some of the most dense storage available, with Crossbar (and multiple other companies) readying memristor technology for the masses. In the short term I give the advantage to Samsung as they’ve got the capital and global reach to get their products out to anyone who wants them. However if memristor-based products can do even half of what they’re claimed to be capable of they could quickly start eating Samsung’s lunch, and I can’t imagine it’d be too long before Samsung either bought the biggest players in the field or developed the technology themselves. Regardless of how this all plays out the storage market is heading for a shake-up, one that can’t come quickly enough in my opinion.

 

VMware vSphere 5: Technologically Awesome, Financially Painful.

I make no secret of the fact that I’ve pretty much built my career around a single line of products, specifically those from VMware. Initially I simply used their workstation products to help me through university projects that required Linux to complete, but after one of my bosses caught wind of my “experience” with VMware’s products I was put on the fast track to becoming an expert in their technology. The timing couldn’t have been more perfect as virtualization then became a staple of every IT department I’ve had the pleasure of working with, and my experience with VMware ensured that my resume always floated near the top when it came time to find a new position.

In this time I’ve had a fair bit of experience with their flagship product, now called vSphere. In essence it’s an operating system you install on a server that lets you run multiple, distinct operating system instances on top of it. Since IT departments have always bought servers with more capacity than they needed, systems like vSphere meant they could use that excess capacity to run other, less power-hungry systems alongside them. It really was a game changer, and from then on servers were usually bought with virtualization as the key purpose in mind rather than being dedicated to a specific system. VMware is still the leader in this sector, holding an estimated 80% of the market, and has arguably the most feature-rich product suite available.

Yesterday saw the announcement of their latest product offering, vSphere 5. From a technological standpoint it’s very interesting, with many innovations that will put VMware even further ahead of their competition, at least technologically. Amongst the usual fanfare of bigger and better virtual machines and improvements to their current technologies, vSphere 5 brings with it a whole bunch of new features aimed squarely at making vSphere the cloud platform of the future. Primarily these innovations are centred around automating certain tasks within the data centre, such as provisioning new servers and managing server loads right down to the disk level, something that wasn’t available previously. Considering that I believe the future of cloud computing (at least for government organisations and large scale in-house IT departments) is a hybrid public/private model, these improvements are a welcome change, even if I won’t be using them immediately.

The one place where VMware falls down, and is (rightly) heavily criticized, is price. With the most basic licenses costing around $1,000 per processor it’s not a cheap solution by any stretch of the imagination, especially if you want to take advantage of any of the advanced features. Still, since the licensing was per processor, you could buy a dual processor server (each with, say, 6 cores) with oodles of RAM and still come out ahead of other virtualization solutions. However with vSphere 5 they’ve changed the way they do pricing significantly, to the point of destroying that strategy (and the potential savings) along with it.

Licensing is still charged on a per-processor basis, but instead of having an upper limit on the amount of physical memory (256GB for most licenses, with Enterprise Plus giving you unlimited) you are now given a vRAM entitlement per licence purchased. Depending on your licensing level you’ll get 24GB, 32GB or 48GB of vRAM which you’re allowed to allocate to virtual machines. For typical smaller servers this won’t pose much of a problem, as a dual proc, 48GB RAM server (which is very typical) would be covered easily by the cheapest licensing. However should you exceed even 96GB of RAM, which is very easy to do, that same server will require additional licenses to be purchased in order to fully utilize the hardware. For smaller environments this has the potential to make VMware’s virtualization solution untenable, especially when you put it beside the almost-free competition of Hyper-V from Microsoft.
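To put some rough numbers on that, here’s a back-of-the-envelope sketch of the announced scheme. The entitlement figures are the ones quoted above, the 128GB host is a hypothetical example, and this ignores the datacenter-level pooling discussed below:

```python
import math

# Rough sketch of the announced vSphere 5 vRAM licensing maths. Entitlements per
# licence follow the figures quoted above; host sizes are hypothetical examples.
VRAM_ENTITLEMENT_GB = {"standard": 24, "enterprise": 32, "enterprise_plus": 48}

def licences_needed(cpu_sockets, allocated_vram_gb, edition):
    """At least one licence per socket, plus enough to cover the vRAM allocated to VMs."""
    per_licence = VRAM_ENTITLEMENT_GB[edition]
    return max(cpu_sockets, math.ceil(allocated_vram_gb / per_licence))

print(licences_needed(2, 48, "standard"))          # 2 -> the typical dual-proc, 48GB host is fine
print(licences_needed(2, 128, "standard"))         # 6 -> the same host with 128GB of vRAM allocated
print(licences_needed(2, 128, "enterprise_plus"))  # 3 -> even the top tier needs an extra licence
```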

The VMware user community has, of course, not reacted positively to this announcement. Whilst for many larger environments the problem won’t be so bad, as the vRAM entitlement is pooled at the data center level and not the server level (allowing smaller servers with spare entitlements to help out their beefier brethren), it does have the potential to hurt smaller environments, especially those that heavily invested in RAM-heavy, processor-poor servers. It’s also compounded by the fact that you’ll only have a short window in which to upgrade for free, thus risking having to buy more licenses, or abstain and then later have to pay an upgrade fee. It’s enough for some to start looking into moving to the competition, which could cut into VMware’s market share drastically.

The reasoning behind these changes is simple: such pricing is much more favourable to a ubiquitous cloud environment than it is to the current industry norm for VMware deployments. VMware might be slightly ahead of the curve on this one however, as most customers are not ready to deploy their own internal clouds, with the vast majority of current cloud users being on hosted solutions. Additionally many common enterprise applications aren’t compatible with VMware’s cloud, which locks end users out of realising the benefits of a private cloud. VMware might be choosing to bite the bullet now rather than later in the hope that it will spur movement onto their cloud platform at a later stage. Whether this strategy works or not remains to be seen, but current industry trends are pushing very hard towards a cloud based future.

I’m definitely looking forward to working with vSphere 5 and there are several features that will provide an immense amount of value to my current environment. The licensing change, whilst I don’t feel it will be much of a problem for me, is cause for concern, and whilst I don’t believe VMware will budge on it any time soon I do know that the VMware community is an innovative lot and it won’t be long before they work out how to make the best of this licensing situation. Still, it’s definitely an in for the competition, and whilst they might not have the technological edge they’re more than suitable for many environments.