
Samsung Starts Producing V-NAND, Massive SSDs Not Far Off.

I've been in the market for a new PC for a little while now, so occasionally I'll indulge myself in a little hypothetical system building to figure out how much I want to spend (lots) and what kind of computer I'll get out of it (a super fast one). One of the points I kept getting stuck on was that whilst I can get semi-decent performance out of the RAID10 set which stores most of my stuff, it's nowhere near the performance of the SSD that holds the OS and my regularly used applications. Easy, I thought, I'll just RAID together some SSDs and get the performance I want with enough space to hold all my games and other miscellany. Thing is, SSDs don't play well in RAID sets (thanks to TRIM not working with them) unless it's RAID 0, and I'm not terribly keen on halving the MTBF just to get some additional space. No, what I need is a bigger drive, and it looks like Samsung is ready to deliver on that.
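For the curious, here's the back-of-envelope maths behind that "halving the MTBF" worry, a minimal Python sketch that assumes drive failures are independent and uses a purely illustrative per-drive MTBF figure:

```python
# Rough illustration of why striping two SSDs (RAID 0) roughly halves MTBF.
# Assumes independent failures and an illustrative per-drive MTBF figure;
# real-world numbers will differ.

def raid0_mtbf(drive_mtbf_hours: float, drives: int) -> float:
    """RAID 0 has no redundancy, so the array fails when any one drive fails.
    With independent, exponentially distributed failures the combined failure
    rate is the sum of the individual rates, i.e. MTBF / n."""
    return drive_mtbf_hours / drives

single = 1_500_000            # hypothetical per-drive MTBF in hours
print(raid0_mtbf(single, 2))  # 750000.0 -- half the single-drive figure
```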

[Image: Samsung's V-NAND flash chip]

That little chip is the key to realizing bigger SSDs (among other things). It's a new type of flash memory called V-NAND, based on a new cell technology called Charge Trap Flash (CTF), and Samsung has just started mass producing it.

What's really quite groovy about this new kind of NAND chip is that, unlike other computer chips which are planar in nature (i.e. all the transistors lie on a single plane), V-NAND is, as you can likely guess, a vertical stack of those planar layers. This allows for incredible densities inside a single chip, with this first generation clocking in at a whopping 128GB. To put that in perspective, the drive I'm currently using has the same capacity as that single chip, which means that if I replaced its memory with this new V-NAND I'd be looking at a 1TB drive. For tech heads like me, even hearing that something like that was theoretically possible would make us weak at the knees, but these are chips you can start buying today.
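Here's that back-of-envelope capacity maths spelled out in a quick Python sketch; the 8-package count is my assumption about a typical 2.5" SSD layout, not a figure from Samsung:

```python
# Back-of-envelope capacity maths for swapping a drive's NAND packages for
# 128GB V-NAND chips. The 8-package count is an assumed typical SSD layout,
# not something from Samsung's announcement.

PACKAGES_PER_DRIVE = 8        # assumed number of NAND packages on the board
V_NAND_CHIP_GB = 128          # first-generation V-NAND density per chip

drive_capacity_gb = PACKAGES_PER_DRIVE * V_NAND_CHIP_GB
print(f"{drive_capacity_gb} GB (~{drive_capacity_gb / 1024:.0f} TB)")  # 1024 GB (~1 TB)
```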

Apparently this isn't their most dense chip either, as their new 3D NAND tech allows them to go up to 24 layers high. I can't seem to find a reference that states just how many layers are in this current chip, so I'm not sure how dense we're talking here, but it seems like this will be the first chip of many and I doubt they'll stop at 24.

As if all that wasn't enough, Samsung is also touting higher reliability, anywhere from 2x to 10x, as well as at least double the write performance of traditional NAND packages. All SSDs are at the point where the differences in read/write speeds are almost invisible to the end user, so that may be moot for many, but for system builders it's an amazing leap forward. Considering we can already get some pretty amazing IOPS from the SSDs available today, doubling that just means we can do a whole lot more with a whole lot less hardware, and that's always a good thing. Whether those claims hold up in the real world remains to be seen, but there's a pretty close relationship between increased data density and increased throughput.

Unfortunately, whilst these chips are hitting mass production today, I couldn't find any hint of which partners are creating drives based around them or whether Samsung is working on one themselves. They've been releasing some pretty decent SSDs recently, indeed they were the ones I was eyeing off for my next potential system, so I can't imagine they'd be too far off given that they have all the expertise to create one. Indeed, they just recently released a gigantic 1.6TB SSD that uses the new NVMe PCIe interface to deliver some pretty impressive speeds, so I wouldn't be surprised if their next drive comes out on that platform using this new V-NAND.

It's developments like this that are a testament to the fact that Moore's Law will keep on keeping on despite the numerous doubters ringing its death knell. With this kind of technology in mind it's easy to imagine it being applied elsewhere, increasing density in other areas like CPU dies and volatile memory. Of course porting such technology is non-trivial, but I'd hazard a guess that chip manufacturers worldwide are chomping at the bit to get in on this, and I'm sure Samsung will be more than happy to license the patents to them.

For a princely sum, of course 😉

 

Fusion-IO’s ioDrive Comparison: Sizing up Enterprise Level SSDs.

Of all the PC upgrades I've ever done, the one that most notably improved the performance of my rig is, by a wide margin, installing an SSD. Whilst good old-fashioned spinning rust disks have come a long way in recent years in terms of performance, they're still far and away the slowest component in any modern system. This is what chokes most PCs' performance, as the disk is a huge bottleneck that slows everything down to its pace. The problem can be mitigated somewhat by using several disks in a RAID 0 or RAID 10 set, but even those pale in comparison to a single SSD.

The problem doesn't go away in the server environment either; in fact most of the server performance problems I've diagnosed have had their roots in poor disk performance. Over the years I've discovered quite a few tricks to get around the problems presented by traditional disk drives, but there are just some limitations you can't overcome. Recently at work the issue of disk performance came to a head again as we investigated the possibility of using blade servers in our environment. I casually made mention of a company I had heard of a while back, Fusion-IO, who specialise in making enterprise-class SSDs. The possibility of using one of the Fusion-IO cards as a massive cache for the slower SAN disk was a tantalizing prospect, and to my surprise I was able to snag an evaluation unit to put through its paces.

The card we were sent was one of the 640GB ioDrives. It's surprisingly heavy for its size, sporting gobs of NAND flash and a massive heat sink that hides the proprietary controller. What intrigued me about the card initially was that the NAND didn't sport any branding I recognised (usually it's something recognisable like Samsung), but as it turns out each chip is a 128GB Micron NAND flash chip. If all that storage was presented raw it would total some 3.1TB, and that's telling of the underlying architecture of the Fusion-IO devices.

The total storage available to the operating system once this card is installed is around 640GB (600GB usable). Now, to get that kind of storage out of the Micron NAND chips you'd only need 5 of them, but the ioDrive comes with a grand total of 25 dotting the board, and no traditional RAID scheme accounts for the amount of storage presented. So, based on the fact that there are 25 chips and only 5 chips' worth of capacity available, it follows that the Fusion-IO card uses quintuplet sets of chips to provide the high level of performance they claim. That's an incredible amount of parallelism; if I'm honest I expected these to all be 256MB chips simply RAID 1'd together to make one big drive.
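That reasoning in a quick Python sketch, using the chip count and capacities mentioned above (the "quintuplet" grouping is still my guess, not anything Fusion-IO has confirmed):

```python
# Back-of-envelope check on raw versus presented capacity on the ioDrive,
# following the reasoning in the paragraph above.

CHIPS_ON_BOARD = 25
CHIP_CAPACITY_GB = 128
PRESENTED_GB = 640

raw_gb = CHIPS_ON_BOARD * CHIP_CAPACITY_GB      # 3200 -- roughly the ~3.1TB raw figure quoted above
chips_needed = PRESENTED_GB / CHIP_CAPACITY_GB  # 5.0 chips' worth of visible capacity
parallel_factor = CHIPS_ON_BOARD / chips_needed # 5.0 -- hence the guess at groups of five

print(raw_gb, chips_needed, parallel_factor)
```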

Funnily enough I did actually find some Samsung chips on this card: two 1GB DDR2 chips. These are most likely used by the CPU on the ioDrive, which has a front side bus of either 333MHz or 400MHz judging by the RAM speed.

But enough of the techno geekery; what's really important is how well this thing performs in comparison to traditional disks and whether or not it's worth the $16,000 price tag that comes along with it. I had done some extensive testing of various systems in the past to ascertain whether the new Dell servers we were looking at were going to perform as well as their HP counterparts. All of this testing was purely disk based using IOMeter, a disk load simulator that tests and reports on nearly every statistic you'd want to know about your disk subsystem. If you're interested in replicating my results I've uploaded a copy of my configuration file here. The servers included in the test are the Dell M610x, Dell M710HD, Dell M910, Dell R710 and an HP DL380 G7. For all the tests (bar the two labelled 'local install') the servers ran a base install of ESXi 5 with a Windows Server 2008 R2 virtual machine on top of it. The specs of the virtual machine were 4 vCPUs, 4GB RAM and a 40GB disk.
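The configuration file linked above is the authoritative record of the test settings; purely as an illustration, a comparison like this typically sweeps a set of access specifications along these lines (the particular block sizes and read/random mixes below are assumptions for the sketch, not my exact config):

```python
# Hypothetical sketch of the kind of access specifications an IOMeter-style
# disk comparison sweeps through. Illustrative values only; the linked
# configuration file is what was actually used.

access_specs = [
    # (name,               block_size_bytes, read_pct, random_pct)
    ("512B random read",    512,             100,      100),
    ("4K random read",      4 * 1024,        100,      100),
    ("4K random write",     4 * 1024,        0,        100),
    ("64K random 50/50",    64 * 1024,       50,       100),
    ("1M sequential write", 1024 * 1024,     0,        0),
]

for name, block, read_pct, random_pct in access_specs:
    print(f"{name:>20}: {block:>8} B blocks, {read_pct}% read, {random_pct}% random")
```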

As you can see the ioDrive really is in a class of its own. The only server that comes close in terms of IOPS is the M910, and that's because it's sporting 2 Samsung SSDs in RAID 0. What impresses me most about the ioDrive, though, is its random performance, which manages to stay quite high even as the block size starts to get bigger. Although it's not shown in these tests, the one area where the traditional disks actually equal the Fusion-IO is throughput once you get up to really large write sizes, on the order of 1MB or so. I put this down to the fact that the servers in question, the R710s and DL380 G7s, have 8 disks in them that can pump out some serious bandwidth when they need to. If I had 2 Fusion-IO cards, though, I'm sure I could easily double that performance figure.
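The reason the spinning disks catch up at large block sizes comes down to simple arithmetic: throughput is just IOPS multiplied by block size. A quick sketch with purely hypothetical numbers shows how a modest-IOPS disk array can still win on raw bandwidth:

```python
# Throughput = IOPS x block size, which is why an 8-disk array with modest
# random IOPS can still match an SSD on raw bandwidth at 1MB writes.
# The IOPS figures below are illustrative, not the measured results.

def throughput_mb_s(iops: float, block_size_kb: float) -> float:
    return iops * block_size_kb / 1024

print(throughput_mb_s(80_000, 0.5))  # flash card at 512B blocks: ~39 MB/s
print(throughput_mb_s(800, 1024))    # 8-disk array at 1MB blocks: 800 MB/s
```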

What interested me next was to see how close I could get to the spec sheet performance. The numbers I just showed you are pretty incredible on their own, but Fusion-IO claims this particular drive is capable of something on the order of 140,000 IOPS if I played my cards correctly. Using the local install of Windows 2008 I had on there, I fired up IOMeter again and set up some 512B tests to see if I could get close to those numbers. The results, as shown in the Dell IO controller software, are below:

Ignoring the small blip in the centre where I had to restart the test, you can see that whilst the ioDrive is capable of some pretty incredible IO, the advertised maximums are more theoretical than practical. I tried several different tests, and while a few averaged higher than this (approximately 80K IOPS was my best), it was still a far cry from the figures they quote. Had they gotten within 10~20% I would've given it to them, but whilst the ioDrive's performance is incredible, it's not quite as incredible as the marketing department would have you believe.
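A quick sanity check on just how far short of the spec sheet that best run fell, and what "within 20%" would have required:

```python
# How far the best observed result fell short of the advertised figure.

claimed_iops = 140_000
best_observed = 80_000

shortfall = 1 - best_observed / claimed_iops
print(f"{shortfall:.0%} below the advertised figure")                  # ~43% below
print(f"Within 20% would have meant {claimed_iops * 0.8:,.0f}+ IOPS")  # 112,000+ IOPS
```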

As a piece of hardware the Fusion-IO ioDrive really is the next step up in terms of performance. The virtual machines I had running directly on the card were considerably faster than their spinning rust counterparts, and if you were in need of some really crazy performance you really couldn't go past one of these cards. For the purpose we had in mind for it, however (putting it inside an M610x blade), I can't really recommend it, as that's a full-height blade with only the power of a half-height one. The M910 represents much better value with its crazy CPU and RAM count, and its SSDs, whilst being far from Fusion-IO level, do a pretty good job of bridging the disk performance gap. I didn't have enough time to see how it would improve some real world applications (it takes me longer than 10 days to get something like this into our production environment), but based on these figures I have no doubt it would considerably improve the performance of whatever I put it into.