Posts Tagged ‘raid’

Fusion-IO’s ioDrive Comparison: Sizing up Enterprise Level SSDs.

Of all the PC upgrades I’ve ever done, the one that’s most noticeably improved the performance of my rig is, by a wide margin, installing an SSD. Whilst good old fashioned spinning rust disks have come a long way in recent years in terms of performance, they’re still far and away the slowest component in any modern system. The disk is what chokes most PCs’ performance, a huge bottleneck that slows everything down to its pace. The problem can be mitigated somewhat by using several disks in a RAID 0 or RAID 10 set, but all of those pale in comparison to even a single SSD.

The problem doesn’t go away in the server environment either; in fact most of the server performance problems I’ve diagnosed have had their roots in poor disk performance. Over the years I’ve discovered quite a few tricks to get around the problems presented by traditional disk drives, but there are just some limitations you can’t overcome. Recently at work the issue of disk performance came to a head again as we investigated the possibility of using blade servers in our environment. I casually made mention of a company I had heard of a while back, Fusion-IO, who specialise in making enterprise-class SSDs. The possibility of using one of the Fusion-IO cards as a massive cache for the slower SAN disk was a tantalizing prospect, and to my surprise I was able to snag an evaluation unit in order to put it through its paces.

The card we were sent was one of the 640GB ioDrives. It’s surprisingly heavy for its size, sporting gobs of NAND flash and a massive heat sink that hides the proprietary controller. What intrigued me about the card initially was that the NAND didn’t sport any branding I recognised (usually it’s something recognisable like Samsung), but as it turns out each chip is a 128GB Micron NAND flash chip. If all that storage was presented raw it would total some 3.1TB, and this is telling of the underlying architecture of the Fusion-IO devices.

The total storage available to the operating system once this card is installed is around 640GB (600GB usable). To get that kind of storage out of the Micron NAND chips you’d only need 5 of them, but the ioDrive comes with a grand total of 25 dotting the board, and no traditional RAID scheme can account for the amount of storage presented. So, based on the fact that there are 25 chips and only 5 chips’ worth of capacity available, it follows that the Fusion-IO card uses quintuplet sets of chips to provide the high level of performance they claim. That’s an incredible amount of parallelism, and if I’m honest I expected these chips to all be 256MB chips that were all RAID 1 to make one big drive.
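To make that arithmetic explicit, here’s a quick back-of-the-envelope sketch in Python. The 5:1 grouping is purely my inference from the chip count; Fusion-IO doesn’t say how the controller actually arranges the NAND.

    # Back-of-the-envelope maths on the ioDrive's flash layout.
    # The grouping ratio is my inference, not anything Fusion-IO publishes.
    chip_capacity_gb = 128    # per Micron NAND package, as identified above
    chip_count = 25           # packages dotted around the board

    raw_gb = chip_capacity_gb * chip_count   # 3200 GB of raw flash on the card
    presented_gb = 640                       # capacity the operating system sees

    print(f"Raw flash on board: {raw_gb} GB")
    print(f"Presented to the OS: {presented_gb} GB")
    print(f"Implied chips per unit of capacity: {raw_gb / presented_gb:.0f}")  # ~5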

Funnily enough I did actually find some Samsung chips on this card: two 1GB DDR2 chips. These are most likely used by the CPU on the ioDrive, which has a front-side bus of either 333 or 400MHz judging by the RAM speed.

But enough of the techno geekery, what’s really important is how well this thing performs in comparison to traditional disks and whether or not it’s worth the $16,000 price tag that comes along with it. I had done some extensive testing of various systems in the past in order to ascertain whether the new Dell servers we were looking at were going to perform as well as their HP counterparts. All of this testing was purely disk based using IOMeter, a disk load simulator that tests and reports on nearly every statistic you’d want to know about your disk subsystem. If you’re interested in replicating my results I’ve uploaded a copy of my configuration file here. The servers included in the test were a Dell M610x, Dell M710HD, Dell M910, Dell R710 and an HP DL380G7. For all the tests (bar the two labelled local install) the servers ran a base install of ESXi 5 with a Windows 2008 R2 virtual machine installed on top of it. The specs of the virtual machine were 4 vCPUs, 4GB RAM and a 40GB disk.

As you can see the ioDrive really is in a class all of its own. The only server that comes close in terms of IOPS is the M910, and that’s because it’s sporting 2 Samsung SSDs in RAID 0. What impresses me most about the ioDrive though is its random performance, which manages to stay quite high even as the block size starts to get bigger. Although it’s not shown in these tests, the one area where the traditional disks actually equal the Fusion-IO is throughput once you get up to really large write sizes, on the order of 1MB or so. I put this down to the fact that the servers in question, the R710s and DL380G7s, have 8 disks in them that can pump out some serious bandwidth when they need to. If I had 2 Fusion-IO cards though I’m sure I could easily double that performance figure.
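A rough rule of thumb shows why even eight spindles can hold their own on big sequential writes yet get nowhere near flash on random IOPS. The seek and rotation figures below are generic assumptions for 15K SAS disks, not measurements from these servers:

    # Rough random IOPS estimate for a spinning disk: each random I/O costs
    # an average seek plus half a rotation. Figures are assumed, typical
    # 15K SAS values rather than anything measured on the servers above.
    def disk_iops(avg_seek_ms: float, rpm: int) -> float:
        rotational_latency_ms = 0.5 * 60_000 / rpm   # half a revolution, in milliseconds
        return 1000 / (avg_seek_ms + rotational_latency_ms)

    per_disk = disk_iops(avg_seek_ms=3.5, rpm=15_000)   # roughly 180 IOPS per spindle
    print(f"Single 15K disk: ~{per_disk:.0f} IOPS")
    print(f"Eight-disk array: ~{8 * per_disk:.0f} IOPS")
    # Sequential bandwidth scales nicely with spindle count, but even eight
    # disks' worth of random IOPS is a tiny fraction of what the ioDrive does.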

What interested me next was to see how close I could get to the spec sheet performance. The numbers I just showed you are impressive, but Fusion-IO claims that this drive is capable of something on the order of 140,000 IOPS if I played my cards correctly. Using the local install of Windows 2008 I had on there I fired up IOMeter again and set up some 512B tests to see if I could get close to those numbers. The results, as shown in the Dell IO controller software, are below:

Ignoring the small blip in the centre where I had to restart the test, you can see that whilst the ioDrive is capable of some pretty incredible IO, the advertised maximums are more theoretical than practical. I tried several different tests and while a few averaged higher than this (approximately 80K IOPS was my best) it was still a far cry from the figures they have quoted. Had they gotten within 10~20% I would’ve given it to them, but whilst the ioDrive’s performance is incredible it’s not quite as incredible as the marketing department would have you believe.
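To put that gap in concrete terms, using my best run against the quoted maximum:

    # How far the best observed result fell short of the spec sheet.
    claimed_iops = 140_000        # Fusion-IO's quoted figure for this card
    best_observed_iops = 80_000   # best average from my 512B IOMeter runs

    ratio = best_observed_iops / claimed_iops
    print(f"Achieved {ratio:.0%} of the claimed figure, "
          f"a {1 - ratio:.0%} shortfall")   # ~57% of spec, ~43% short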

As a piece of hardware the Fusion-IO ioDrive really is the next step up in terms of performance. The virtual machines I had running directly on the card were considerably faster than their spinning rust counterparts, and if you were in need of some really crazy performance you really couldn’t go past one of these cards. For the purpose we had in mind for it, however (putting it inside an M610x blade), I can’t really recommend it as it’s a full height blade that only has the power of a half height. The M910 represents much better value with its crazy CPU and RAM count, and its SSDs, whilst being far from Fusion-IO level, do a pretty good job of bridging the disk performance gap. I didn’t have enough time to see how it would improve some real world applications (it takes me longer than 10 days to get something like this into our production environment) but based on these figures I have no doubt it would considerably improve the performance of whatever I put it into.

A Tale of Woe and Eco-Friendly Hard Drives.

Up until recently most of my data at home hadn’t been living in the safest environment. You see, like many people, I kept all my data on single hard drives, their only real protection being that most of them spent their lives unplugged, sitting next to my hard drive docking bay. Of course tragedy struck one day when my playful feline companion decided that the power cord for one of the portable hard drives looked like something to play with and promptly pulled it onto the floor. Luckily nothing of real importance was on there (apart from my music collection, which contained some of the oldest files I had ever managed to keep), but it did get me thinking about making my data a little more secure.

The easiest way to provide at least some level of protection was to get my data onto a RAID set so that at least a single disk failure wouldn’t take out my data again. I figured that if I put one large RAID in my media box and a second in my main PC (which I was planning to do anyway) then I could keep copies of the data on each of them, as RAID on its own is not a backup solution. A couple thousand dollars and a weekend later I was in possession of a new main PC and all the fixings of a new RAID set on my media PC ready to hold my data. Everything was looking pretty rosy for a while, but then the problems started.

Now the media PC that I had built was something of a beast, sporting enough RAM and a good enough graphics card to be able to play most recent games at high settings. Soon after I had completed building it I was going to a LAN with a bunch of mates, one of whom was travelling from Melbourne and wasn’t able to bring his PC with him. Too easy, I thought, he can just use this new awesome beast of a box to play games with us and everything shall be good. In all honesty it was, right up until I saw him reboot it once and the RAID controller flashed up a warning about the RAID being critical, which sent chills down my spine.

Looking at the RAID UI in Windows I found that yes indeed one of the disks had dropped out of the RAID set, but there didn’t seem to be anything wrong with it. Confused I started the rebuild on the RAID set and it managed to complete successfully after a few hours, leaving me to think that I might have bumped a cable or something to trigger the “failure”. When I got it home however the problem kept recurring, but it was random and never seemed to follow a distinct pattern, except for it being the same disk every time. Eventually however it stabilized and so I figured that it was just a transient problem and left it at that.

Unfortunately for me it happened again last night, but it wasn’t the same disk this time. Figuring it was a bung RAID controller I was preparing to siphon my data off it in order to rebuild it as a software RAID when my wife asked me if I had actually tried Googling around to see if others had had the same issue. I had done so in the past but I hadn’t been very thorough with it so I decided that it was probably worth the effort, especially if it could save me another 4 hours of babying the copy process. What I found has made me deeply frustrated, not just with certain companies but also myself for not researching this properly.

The drives I bought all those months ago were Seagate ST2000DL003 2TB Green drives, which are cheap, low power drives that seemed perfect for a large amount of RAID storage. However there’s a slight problem with these kinds of drives when they’re put into a RAID set. The hard drives have error correction built into them, but thanks to their “green” rating this process can be quite slow, on the order of 10 seconds to minutes if the drive is under heavy load. RAID controllers are programmed to mark disks as failed if they stop responding after a certain period of time, usually a couple of seconds or so. That means should a drive start correcting itself and not respond quickly enough, the RAID controller will mark the disk as failed and remove it, putting the array into a critical state.

Seeing the possibility for this to cause issues for everyone, hard drive manufacturers developed a feature called Time-Limited Error Recovery (or Error Recovery Control for Seagate). TLER limits the amount of time the hard drive will spend attempting to recover from an error; if it can’t be dealt with within that time frame the drive hands the error off to the RAID controller, which keeps the disk in the RAID set and allows the array to recover. For the drives I had bought this setting is off by default, and a quick Google showed that any attempts to change it are futile. Most other brands let you change this particular value, but these particular Seagate drives are unfortunately locked in this state.
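For anyone shopping for RAID drives it’s worth checking whether the error recovery timeout (exposed through the ATA SCT Error Recovery Control command) is actually adjustable before committing to a set. The sketch below is just a thin wrapper around smartmontools’ smartctl; it assumes smartctl is installed, and it won’t help with drives like mine where the setting is locked.

    # Minimal sketch: query a drive's SCT Error Recovery Control (TLER/ERC)
    # setting via smartmontools. Assumes smartctl is installed and that the
    # drive supports the command; locked drives simply won't accept changes.
    import subprocess
    import sys

    def read_erc(device: str) -> str:
        """Return smartctl's report of the drive's current ERC timeouts."""
        result = subprocess.run(
            ["smartctl", "-l", "scterc", device],
            capture_output=True, text=True, check=False,
        )
        return result.stdout

    if __name__ == "__main__":
        dev = sys.argv[1] if len(sys.argv) > 1 else "/dev/sda"
        print(read_erc(dev))
        # On drives that allow it, a 7 second limit for reads and writes can
        # be set with: smartctl -l scterc,70,70 /dev/sda
        # (the values are in tenths of a second).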

So where does this leave me? Well, apart from hoping that Seagate releases a firmware update that allows me to change that particular value, I’m up the proverbial creek without a paddle. Replacing these drives with similar drives from another manufacturer would set me back another $400 and a weekend’s worth of work, so it’s not something I’m going to do immediately. I’m going to pester Seagate and hope that they’ll release a fix, because other than that one issue they’ve been fantastic drives and I’d hate to have to get rid of them. Hopefully they’re responsive about it, but judging by what people are saying on the Seagate forums I shouldn’t hold my breath. Still, it’s all I’ve got right now.

The Build, The Results and The Tribulations.

So last week saw me pick up the components that would form my new PC, the first real upgrade I have bought in about 3 years. Getting new hardware is always an exciting experience for someone like me which is probably why I enjoy being in the datacenter so much these days, with all that new kit that I get to play with. I didn’t really have the time to build the PC until the weekend though and so I spent a good 5 days with all the parts laid out on the dining table beside me, begging me to put them together right now rather than waiting. My resolve held however and Saturday morning saw me settle down with a cup of coffee to begin the longest build I’ve ever undertaken.

I won’t go over the specifications again since I’ve already mentioned them a dozen times elsewhere, but this particular build had a few unique challenges that you don’t see in regular PCs. For starters this would be my first home PC with a RAID set in it, comprising 4 1TB Seagate drives held in a drive bay enclosure. Secondly the CPU would be watercooled using a Corsair H70 fully sealed system, and since I hadn’t measured anything I wasn’t 100% sure I’d be able to fit it where I thought I could. Lastly, with all these drives, watercooling and other nonsense, the number of power cables required also posed a unique challenge as I wasn’t 100% sure I could get them all to fit in my mid-sized tower.

The build started off quite well as I was able to remove the old components without issue and give the case a good clean before installing bits and pieces in it. The motherboard, CPU and RAM all went together quite easily, as you’d expect, but when it came time to affix the mounting bracket for the watercooling I hit a bit of a stumbling block. You see, the motherboard I purchased does you the favor of having the old style LGA775 mounting holes, letting you use old style coolers on the newer CPUs. This is all well and good, but since the holes are only labelled properly on one side, attempting to line up the backing plate with the right holes proved to be somewhat of a nightmare, especially considering that when it did line up it was at a rather odd angle. Still, it mounted and sat flush to the motherboard so there were no issues there.

The next challenge was getting all the hard drives in. Taking off the front of my case to do a dry fit of the drive bay extension showed that there was a shelf smack bang in the middle of the 4 bays. No problem, I thought, it looked to just be screwed in; upon closer inspection, however, the screws at the front could only be accessed with a right angle screwdriver, since the access holes a regular driver would need hadn’t been drilled. After several goes with a driver bit and a pair of pliers I gave up and got the drill out, leaving aluminium shavings all over the place and the shelf removed. Thankfully the drive bay extender mounted with no complaints at all after that.

Next came the fun part: figuring out where the hell the watercooling radiator would go. Initially I had planned to put it at the front of the case but the hosing was just a bit too short. I hadn’t bought any fan adapters either, so mounting it on the back would’ve been a half arsed effort with cable ties and screws in the wrong places. After fooling around for a while I found that it actually fit quite snugly under the floppy drive bays, enough so that it barely moved when I shook the case. This gave me the extra length to get to the CPU whilst still being pretty much at the front of the case, although it also meant I could only attach one of the fans since part of the radiator was mere millimeters away from the end of the graphics card.

With everything put together and wired up it was the moment of truth: I took a deep breath and pressed the power button. After a tense couple of milliseconds (it seemed like forever) the computer whirred into life and I was greeted with the BIOS screen. Checking around in the BIOS though revealed that it couldn’t see the 4 drives I had attached to the external SATA 6Gbps controller, so I quickly booted into the Windows installer to make sure they were all there. They did in fact come up, and after a furious 2 hours of prodding around I found that the external controller didn’t support RAID at all, only the slower ports did. This was extremely disappointing as it was pretty much the reason why I got this particular board, but figuring that the drives couldn’t saturate the old SATA ports anyway I hooked them up and was on my merry way, with the Windows install being over in less than 10 minutes.
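That hand-wave about the drives not saturating the older ports checks out on paper, assuming roughly 130MB/s of sequential throughput for 1TB drives of that era (my assumption, not a measured figure):

    # Sanity check: can one spinning disk saturate a 3Gbps SATA port?
    # The per-drive figure is an assumed ballpark for a 1TB desktop drive
    # of this vintage, not something I benchmarked on these Seagates.
    sata2_port_mb_s = 300          # ~3Gbps after 8b/10b encoding overhead
    drive_sequential_mb_s = 130    # rough sequential throughput per drive

    headroom = sata2_port_mb_s - drive_sequential_mb_s
    print(f"Each drive leaves ~{headroom} MB/s of headroom on its own SATA-II port")
    # With one drive per port the older controller isn't the bottleneck;
    # only an SSD would notice the jump from 3Gbps to 6Gbps.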

I’ve been putting the rig through its paces over the past week and I must say the biggest improvement in performance comes solely from the SSD. The longest part of the boot process is the motherboard initializing the 3 different controllers; Windows loads in under 30 seconds and is usable instantly after logging in. I no longer have to wait for things to load; every program opens pretty much instantaneously. The RAID array is none too shabby either, with most games loading in a fraction of the time they used to.

Sadly, with all games being optimized for consoles these days, the actual performance improvement in nearly every game I’ve thrown at it has been very minimal. Still, Crysis 2 with all the settings at their maximum looks incredibly gorgeous, and I can’t seem to make it chug even on the biggest multi-player maps. The new mouse I bought (a Logitech G700) is quite an amazing bit of kit too, and the TRON keyboard my wife got me for my birthday just adds to the feeling that I’m using a computer from the future. Overall I’m immensely satisfied with it and I’m sure it’ll prove its worth once I throw a few more programs at it.

Speaking of which, I can’t wait to code on that beasty.

 

World of Warcraft: Cataclysm, A New World Torn From Old.

I’ve played my fair share of MMORPGs since my first introduction to the genre way back in 2004. After falling from the dizzying heights that I scaled within World of Warcraft I set about playing my way through several similar games, only to find them half done, unplayable, or with their community boiled down to just the hardcore in little over a month. There are only two MMORPGs that I’ve ever gone back to after an extended period of absence: World of Warcraft and EVE Online. Both had characteristics that begged me to come back after I had left them for good and both have continued to reinvent themselves over the course of their long lifetimes. Today I want to take you through World of Warcraft’s latest revision, the Cataclysm expansion.

This expansion signals the return of Deathwing, one of the dragon aspects of Azeroth whose first appearance in Blizzard’s Warcraft line of games dates all the way back to Warcraft 2: Beyond the Dark Portal. His emergence from the depths of Deepholm has torn the world asunder, laying waste to much of the original world and changing the landscape of Azeroth permanently. This expansion differs significantly from the previous 2 in that it did not add a whole new world; it reinvented the old whilst adding a few new zones. This allowed the developers the opportunity to redo the entire old world in order to make the 1-60 levelling experience more fluid, as well as allowing everyone to use their flying mounts in the old world. This is in addition to the complete overhaul of every class, 2 new races, a dozen new dungeons, 4 new raid encounters, a new secondary profession, a rework of the stat system and an overhaul of the badge based reward system.

I had a few choices when it came to exploring this new old world that Blizzard had set before me. Reports from friends told me the levelling experience was quite nice and the new starting zones were of similar quality to that of the Death Knight area, long praised for its intensely immersive experience. Still I had 2 level 80 characters ready, willing and able to experience the new content right away and logging onto one of them I was instantly greeted by some of my long time World of Warcraft buddies. The decision to level my 80 Shaman had been made for me before I knew it and I set about blasting my way to 85.

The first thing I noticed was the vast improvements to the game experience that Blizzard have added since the last time I played. First there’s a quest helper that not only tracks all your quests but also points you in the right direction and marks out an area for you to find the mobs or items required to complete them. Additionally the character panel has seen a significant revamp, with many of the stats now providing insight into what they mean, like the amount of hit required to not miss a certain level target. There are also lots of tiny little additions that make the game experience just that much better, like the little icon that hovers above your head when you get 5 stacks of Maelstrom Weapon as a shaman, something which used to require a whole other mod to achieve. The revamped raid/party bar is also quite good and a testament to how necessary the Grid mod was before Blizzard rolled their own. There are still a few things missing that I consider necessary, like a damage meter and a loot browser, but overall Blizzard has shown just how closely they watch the community and listen to its needs so that they can include those things in the main game.

The levelling experience from 80 to 85 was incredibly enjoyable, probably the best I’ve had out of any of the previous releases. I was never lost for somewhere to quest, as on my usual trips back to Orgrimmar there would always be a quest on the Warchief’s board that would send me to a level appropriate area. Whilst this has left me with a couple of areas uncompleted (like Vashj’ir and Uldum) it did mean that I didn’t spend time on lower level quests that yield significantly less experience. The usual line is that the levelling time from 80 to 85 was supposed to be the same as 70 to 80, but I found it was significantly less, probably about half or so. I think this can be attributed to the random dungeon system they added a while back, with the added bonus that instead of having to do long quest chains to get those juicy dungeon quests, nearly all dungeons have quest givers right at the start.

Like any of the Blizzard titles, what really got me was the depth and breadth of the lore behind each of the areas. Whilst many of the quests are your standard kill X of those, gather Y of these type of encounters, there are quite a few that really bring you into the world that Blizzard has created. The screenshot above is from one such encounter where, after leading a band of goblins up the hill, I finally met with Alexstrasza, who soon after takes me on a direct assault against Deathwing himself. There’s also extensive use of the phasing¹ technique, giving you that feeling of being the hero of the world, even though you’re in a world of heroes. This led me to follow many long quest chains to their completion as I just had to know what happened next, spending hours battling various foes and gobbling up the quest text at every opportunity.

The end game has improved significantly as well. Back in Ulduar Blizzard began experimenting with teleporters that would take you a fair way to the part of the instance you wanted to be at. They continued this in Icecrown Citadel and they have made their way into every instance I’ve played thus far. The instances themselves are also quite entertaining with new boss mechanics and some instances even having in game cinematics. Sure you’re over them once you’ve seen them for the 5th time but it’s a nice touch and goes a long way to revamp the old dungeon grind.

I’ve spent the last month playing through the level 80 to 85 content and I’m still not lost for new things to do in Cataclysm. It seems every other day I find myself in a new dungeon I hadn’t yet done or a new section of a quest area I hadn’t yet discovered and that’s just what keeps me coming back day after day. I’ve still yet to dive into the revamped old world in the form of levelling a new character but from reports I’m hearing from both long time veterans and first time players the experience is as enjoyable as my level 80 to 85 experience. So for those of you thinking about reactivating your old account or for anyone who’s had the slightest inclination to play World of Warcraft you won’t go wrong by starting now in the new world that was torn asunder in Cataclysm.

World of Warcraft: Cataclysm is available right now on PC for $39.95. Game was played over the course of the last month on the Oceanic Dreadmaul server as an Enhancement Shaman.

¹Phasing, in World of Warcraft, is when part of the world is, in a sense, instanced. This allows Blizzard to show a different world to different players, and it’s usually used to show the effect of a quest on the world around you. The example given is that if you gather 10 wooden planks to repair someone’s house, the house will in fact be repaired; anyone who hasn’t yet done that quest will still see the house as damaged.