Posts Tagged 'ssd'


The Memristor is Almost Ready For Prime Time.

For all the NVRAM that's used these days there's been comparatively little innovation in the sector. For the most part the advances have come from the traditional avenues, die shrinks and new gate technologies, with the biggest advance, 3D construction, only happening last week. There have been musings about other kinds of technology for a long time, like memristors, which had their first patent granted back in 2007 and were supposed to be making their way into our hands late last year, but that never eventuated. However news comes today of a new memory startup that's promising a lot of things and, whilst they don't say it directly, it looks like they might be one of the first to market with memristor based products.


Crossbar is a new company that's been working in stealth for some time on a new type of memory product which, surprisingly, isn't anything particularly revolutionary. It's called Resistive RAM (RRAM) and a little research shows that there have been companies working on this idea as far back as 2009. It's based around a fairly interesting phenomenon whereby a dielectric (an electric insulator) can be made to conduct through the application of a high voltage. This forms a filament of low resistance which can then be reset, breaking the connection, and then set again using another high voltage jolt. The idea lends itself well to memory applications as the two states translate perfectly to binary and, if the specifications are anything to go by, the performance that will come out of these devices should be quite spectacular.
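To make the set/reset mechanism concrete, here's a minimal sketch in Python of how such a cell maps onto binary storage. The voltage thresholds are invented for illustration; real RRAM cells are analogue devices with far more nuanced behaviour than this.

```python
class RRAMCell:
    """Toy model of a resistive RAM cell: a dielectric that either has a
    low-resistance filament formed through it (set) or a broken one (reset)."""

    SET_VOLTAGE = 2.0     # hypothetical forming voltage
    RESET_VOLTAGE = -2.0  # hypothetical reverse voltage that breaks the filament

    def __init__(self):
        self.filament_formed = False  # high resistance to start: binary 0

    def apply(self, voltage):
        # A big enough jolt in either direction flips the cell's state.
        if voltage >= self.SET_VOLTAGE:
            self.filament_formed = True   # low resistance: binary 1
        elif voltage <= self.RESET_VOLTAGE:
            self.filament_formed = False  # high resistance: binary 0

    def read(self):
        # Reads are non-destructive: sense the resistance at a low voltage.
        return 1 if self.filament_formed else 0

cell = RRAMCell()
cell.apply(2.5)   # high voltage jolt sets the cell
assert cell.read() == 1
cell.apply(-2.5)  # reverse jolt resets it
assert cell.read() == 0
```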

If this is sounding familiar then you're probably already familiar with the idea of memristors. These are the 4th fundamental component of electronic circuits, postulated back in 1971 by Leon Chua and made real by HP in 2008. In a basic sense their resistance is a function of the current flowing through them and, when the current is removed, that resistance is remembered, hence the name. As you can see this describes the function of RRAM pretty well and there's a solid argument to be made that all RRAM technologies are in fact memristors. Thus whilst it's pretty spectacular that a startup has managed to perfect this technology to the point of producing it in a production fab, it's actually technology that's been brewing for quite some time and one that everyone in the tech world is excited about.

Crossbar's secret sauce likely comes from their fabrication process, as they claim that the way they create their substrate means they should be able to stack layers, much in the same way Samsung can now do with their V-NAND. This is exciting because HP previously alluded to the fact that memristor based storage could be made much more dense than NAND, several orders of magnitude more dense to be precise, and considering the density gains Samsung got with their 3D chips a layered memristor device's storage capacity could be astronomical. Indeed Crossbar claims as much, with up to 1TB for a standard chip that could be stacked multiple times, enabling terabytes on a single chip. That puts good old fashioned spinning rust disks on notice as they just couldn't compete, even when it comes to archival storage. Of course the end price will be a big factor in this but that kind of storage potential could drive the cost per GB through the floor.

So the next couple of months are going to be quite interesting as we have Samsung, the undisputed king of NAND, already in the throes of producing some of the most dense storage available, with Crossbar (and multiple other companies) readying memristor technology for the masses. In the short term I give the advantage to Samsung as they've got the capital and global reach to get their products out to anyone that wants them. However if memristor based products can do even half of what they're claimed to be capable of they could quickly start eating Samsung's lunch, and I can't imagine it'd be too long before Samsung either bought the biggest players in the field or developed the technology themselves. Regardless of how this all plays out the storage market is heading for a shake up, one that can't come quickly enough in my opinion.

 


Samsung Starts Producing V-NAND, Massive SSDs Not Far Off.

I've been in the market for a new PC for a little while now so occasionally I'll indulge myself in a little hypothetical system building so I can figure out how much I want to spend (lots) and what kind of computer I'll get out of it (a super fast one). One of the points that got me unstuck was the fact that whilst I can get semi-decent performance out of the RAID10 set which stores most of my stuff, it's nowhere near the performance of the SSD that holds my OS and regularly used applications. Easy, I thought, I'll just RAID together some SSDs and get the performance I want with enough space to hold all my games and other miscellany. The thing is, though, SSDs don't like being in RAID sets (thanks to TRIM not working with them) unless it's RAID0, and I'm not terribly keen on halving the MTBF just to get some additional space. No, what I need is a bigger drive, and it looks like Samsung is ready to deliver on that.
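That MTBF trade-off is easy to put numbers on. Striping N identical drives gives you N times the capacity, but the array dies as soon as any one member does, so (assuming independent failures, which is optimistic) the array's MTBF is roughly the single-drive figure divided by N. A quick sketch, with an invented drive MTBF rather than any quoted spec:

```python
def raid0_stats(drive_mtbf_hours, drive_capacity_gb, n_drives):
    """Rough RAID0 numbers: full combined capacity, but a single drive
    failure loses the whole array, so MTBF scales down with drive count.
    Assumes identical drives with independent failures."""
    return {
        "capacity_gb": drive_capacity_gb * n_drives,
        "approx_mtbf_hours": drive_mtbf_hours / n_drives,
    }

# 1,200,000 hours is an illustrative figure, not a manufacturer spec.
print(raid0_stats(drive_mtbf_hours=1_200_000, drive_capacity_gb=128, n_drives=2))
# {'capacity_gb': 256, 'approx_mtbf_hours': 600000.0}
```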


That little chip is the key to realizing bigger SSDs (among other things). It's a new type of flash memory called V-NAND, based on a new gate technology called CTF (Charge Trap Flash), and Samsung has just started mass producing it.

What's really quite groovy about this new kind of NAND chip is that, unlike all other computer chips which are planar in nature (i.e. all the transistors lie on a single plane), V-NAND is, as you can likely guess, actually a vertical stack of planar layers. This allows for incredible densities inside a single chip, with this first generation clocking in at a whopping 128GB. Putting that in perspective, the drive I'm currently using has the same capacity as that single chip, which means that if I replaced its memory with this new V-NAND I'd be looking at a 1TB drive. For tech heads like me even hearing that it was theoretically possible to do something like that would make us weak at the knees, but these are chips that you can start buying today.
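The arithmetic behind that claim is simple. Assuming a typical 2.5" drive carries 8 flash packages (my assumption for illustration, not a Samsung figure):

```python
# Sanity check of the "128GB chip -> 1TB drive" reasoning above.
chip_capacity_gb = 128    # first-generation V-NAND package
packages_per_drive = 8    # assumed package count for a typical 2.5" SSD
total_gb = chip_capacity_gb * packages_per_drive
print(f"{packages_per_drive} x {chip_capacity_gb}GB = {total_gb}GB (~1TB)")
```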

Apparently this isn't their most dense chip either, as their new 3D NAND tech allows them to go up to 24 layers high. I can't seem to find a reference that states just how many layers are in this current chip so I'm not sure how dense we're talking here, but it seems like this will be the first chip among many and I doubt they'll stop at 24.

As if all that wasn't enough Samsung is also touting higher reliability, anywhere from 2x to 10x, as well as at least double the write performance of traditional NAND packages. All SSDs are at the point where the differences in read/write speeds are almost invisible to the end user, so that may be moot for many, but for system builders it's an amazing leap forward. Considering we can already get some pretty amazing IOPS from the SSDs available today, doubling that just means we can do a whole lot more with a whole lot less hardware, and that's always a good thing. Whether those claims hold up in the real world remains to be seen, but there's a pretty close relationship between data density and increased throughput.

Unfortunately, whilst these chips are hitting mass production today, I couldn't find any hint of which partners are creating drives based around them or whether Samsung is working on one themselves. They've been releasing some pretty decent SSDs recently, indeed they were the ones I was eyeing off for my next potential system, so I can't imagine they'd be too far off given that they have all the expertise to create one. Indeed they just recently released a gigantic 1.6TB SSD that uses the new NVMe PCIe interface to deliver some pretty impressive speeds, so I wouldn't be surprised if their next drive comes out on that platform using this new V-NAND.

It's developments like this that are a testament to the fact that Moore's Law will keep on keeping on despite the numerous doubters ringing its death knell. With this kind of technology in mind it's easy to imagine it being applied elsewhere, increasing density in other areas like CPU dies and volatile memory. Of course porting such technology is non-trivial but I'd hazard a guess that chip manufacturers worldwide are chomping at the bit to get in on this, and I'm sure Samsung will be more than happy to license the patents to them.

For a princely sum, of course ;)

 

The Memristor: Moore’s Law Gets a Jolt.

The computer (or whatever Internet capable device you happen to be viewing this on) is made up of various electronic components. For the most part these are semiconductors, devices which allow the flow of electricity but don't do it readily, but there's also a lot of supporting electronics built from what we call the fundamental components. As almost any electronics enthusiast will tell you there are 3 such components: the resistor, the capacitor and the inductor, each with its own set of properties that makes it useful in electronic circuits. There's been speculation about a 4th fundamental component for about 40 years, but before I talk about that I'll need to give you a quick run down on the properties of the current fundamentals.

The resistor is the simplest of the lot: all it does is impede the flow of electricity. They're quite simple devices, usually a small brown package banded by 4 or more colours which denote just how resistive it actually is. Resistors are often used as current limiters, as the amount of current that can pass through one is directly related to the voltage across it and its level of resistance. In essence you can think of them as narrow pathways which electric current has to squeeze through.
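That voltage/resistance relationship is just Ohm's law, which is worth writing out since the symmetry argument later in this post is built on exactly these kinds of component relations:

```latex
% Ohm's law: the current through a resistor is the voltage across it
% divided by its resistance.
I = \frac{V}{R}
% Worked example: 5 V across a 250 ohm resistor passes
% I = 5 / 250 = 0.02 A, i.e. 20 mA.
```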

Capacitors are intriguing little devices and can be best thought of as batteries. You've seen them if you've taken apart any modern device: they're those little canister looking things attached to the main board. They work by storing charge in an electrostatic field between two metal plates that are separated by an insulating material called a dielectric. Modern day capacitors are essentially two metal plates and the dielectric rolled up into a cylinder, something you could see if you cut one open. I'd only recommend doing this with a "solid" capacitor, as the electrolytes used in other capacitors are liquids that tend to be rather toxic and/or corrosive.

Inductors are very similar to capacitors in the respect that they also store energy, but instead of an electrostatic field they store it in a magnetic field. Again you've probably seen them if you've cracked open any modern device (or, say, looked inside your computer) as they look like little rings of metal with wire coiled around them. They're often referred to as "chokes" as they tend to oppose the current that induces the magnetic field within them, and at high frequencies they'll appear as a break in the circuit, useful if you're trying to keep alternating current out of your circuit.

For quite a long time these 3 components formed the basis of all electrical theory and nearly any component could be expressed in terms of them. However back in 1971 Leon Chua explored the symmetry between these fundamental components and inferred that there should be a 4th: the memristor. The name is a combination of memory and resistor, and Chua stated that this component would not only have the ability to remember its resistance but also have it changed by passing current through it; passing current in one direction would increase the resistance and reversing it would decrease it. The implications of such a component would be huge, but it wasn't until 37 years later that the first memristor was created by researchers in HP's labs division.
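Chua's symmetry argument is easiest to see written out. Circuit theory has four basic variables: voltage v, current i, charge q (the time integral of current) and flux φ (the time integral of voltage). Each remaining pairing of these variables needs a component to relate it, and before 1971 one pairing had no component:

```latex
% The three known constitutive relations:
dv = R\,di          \quad\text{(resistor: relates voltage and current)}
dq = C\,dv          \quad\text{(capacitor: relates charge and voltage)}
d\varphi = L\,di    \quad\text{(inductor: relates flux and current)}
% The pairing with no component in 1971, filled by Chua's memristor:
d\varphi = M\,dq    \quad\text{(memristor: relates flux and charge)}
```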

What's really exciting about the memristor is its potential to replace other solid state storage technologies like flash and DRAM. Due to their simplicity memristors are innately fast and, best of all, they can be integrated directly onto processor chips. If you look at the breakdown of a current generation processor you'll notice that a good portion of the silicon is dedicated to cache, or onboard memory. Memristors have the potential to boost the amount of onboard memory to extraordinary levels, and HP believes they'll be doing that in just 18 months:

Williams compared HP’s resistive RAM technology against flash and claimed to meet or exceed the performance of flash memory in all categories. Read times are less than 10 nanoseconds and write/erase times are about 0.1-ns. HP is still accumulating endurance cycle data at 10^12 cycles and the retention times are measured in years, he said.

This creates the prospect of adding dense non-volatile memory as an extra layer on top of logic circuitry. “We could offer 2-Gbytes of memory per core on the processor chip. Putting non-volatile memory on top of the logic chip will buy us twenty years of Moore’s Law,” said Williams.

To put this in perspective, Intel's current flagship CPU ships with a total of 8MB of cache shared between 4 cores. A similar memristor based CPU would have a whopping 8GB of onboard cache, effectively negating the need for external DRAM. Couple this with a memristor based external drive for storage and you'd have a computer that's decades ahead of the curve in terms of what we thought was possible, and Moore's Law can rest easy for a while.

This kind of technology isn't your usual pie in the sky "it'll be available in the next 10 years" malarkey, this is the real deal. HP isn't the only one looking into this either: Samsung (one of the world's largest flash manufacturers) has also been aggressively pursuing this technology and will likely debut products around the same time. For someone like me it's immensely exciting as it shows that there are still many great technological advances ahead of us, just waiting to be uncovered and put into practice. I can't wait to see how the first memristor devices perform as it will truly be a generational leap in technology.

 

An SSD By Any Other Synthetic Benchmark Would Be As Fast.

Like any technology geek, real world performance is the most important aspect for me when I'm looking to purchase new hardware. Everyone knows manufacturers can't be trusted with ratings, especially when they come up with their own systems that provide big numbers that mean absolutely nothing, so I primarily base my purchasing decisions on reviews aggregated from various sources around the Internet in order to get a clear picture of which brand/revision I should get. After that point I usually go for the best performance per dollar, as whilst it's always nice to have the best components the price differential is usually not worth the leap, mostly because you won't notice the incremental increase. There are of course notable exceptions to this hard and fast rule, and realistically my decision in the end wasn't driven by rational thought so much as it was pure geeky lust after the highest theoretical performance.

Solid State Drives present quite an interesting value proposition for us consumers. They are leaps and bounds faster than their magnetic predecessors thanks to their ability to access data almost instantaneously and their extremely high throughput rates. Indeed, with the hard drive being the performance bottleneck of nearly every computer in the world, the most effective upgrade you can get is an SSD. Of course nothing can beat magnetic hard drives for cost, durability and capacity, so it's very unlikely we'll be seeing the end of them anytime soon. Still, the enormous gap that separates SSDs from any other storage medium brings about an interesting issue of its own: benchmarks, especially synthetic ones, are almost meaningless for end users.

I'll admit I was struck by geek lust when I saw the performance specs for the OCZ Vertex 3; they were simply amazing. Indeed the drive has matched my sky high expectations, with me being able to boot, log in and open up all my applications in the time it took my previous PC just to get to the login screen. Since then I've been recommending the Vertex 3 to anyone looking to get a new drive, but just recently OCZ announced their new budget line of SSDs, the Agility 3. Being almost $100 cheaper and sporting very similar performance specs to the Vertex it's a hard thing to argue against, especially when you consider just how fast these SSDs are in the first place.

Looking at the raw figures it would seem the Agility series is around 10% slower than its Vertex counterparts on average, which isn't bad for a budget line. However when you consider that the 10% performance gap is the difference between Windows loading in 6.3 seconds rather than 7, and your applications launching in 0.9 seconds instead of 1, the gap doesn't seem all that big. Indeed I'd challenge anyone to spot the difference between two otherwise identical systems configured with the different SSDs, as these kinds of performance differences will only matter to benchmarkers and people building high traffic systems.

Indeed one of my mates had been running an SSD for well over a year and a half before I got mine and, from what he tells me, the performance of units back then was enough that he didn't notice any slowdown despite not formatting for that entire time. Likely then, if you're considering getting an SSD but are turned off by the high price of current models, you'll be quite happy with the previous generation as the perceived performance will be identical. Although with the Agility 3 120GB version going for a mere $250 the price difference between generations isn't really that much anymore.

Realistically SSDs are just the most prominent example of why synthetic benchmarks aren’t a good indicator of real world performance. There’s almost always an option that will provide similar performance for a drastically reduced price and for the end user the difference will likely be unnoticeable. SSDs are just so far away from their predecessors that the differentials between the low and high end are usually not worth mentioning, especially if you’re upgrading from good old spinning rust. Of course there will always be geeks like me whose lust will overcome their sensibility and reach for the ultimate in performance, which is why those high end products still exist today.

The Build, The Results and The Tribulations.

So last week saw me pick up the components that would form my new PC, the first real upgrade I have bought in about 3 years. Getting new hardware is always an exciting experience for someone like me which is probably why I enjoy being in the datacenter so much these days, with all that new kit that I get to play with. I didn’t really have the time to build the PC until the weekend though and so I spent a good 5 days with all the parts laid out on the dining table beside me, begging me to put them together right now rather than waiting. My resolve held however and Saturday morning saw me settle down with a cup of coffee to begin the longest build I’ve ever undertaken.

I won't go over the specifications again since I've already mentioned them a dozen times elsewhere, but this particular build had a few unique challenges that you don't see in regular PCs. For starters this would be my first home PC to have a RAID set in it, comprising four 1TB Seagate drives held in a drive bay enclosure. Secondly the CPU would be watercooled using a Corsair H70 fully sealed system and, since I hadn't measured anything, I wasn't 100% sure I'd be able to fit it where I thought I could. Lastly, with all these drives, watercooling and other nonsense, the number of power cables required also posed a unique challenge as I wasn't 100% sure I could get them all to fit in my mid-sized tower.

The build started off quite well as I was able to remove the old components without issue and give the case a good clean before installing bits and pieces in it. The motherboard, CPU and RAM all went together quite easily, as you'd expect, but when it came time to affix the mounting bracket for the watercooling I hit a bit of a stumbling block. You see the motherboard I purchased does you the favor of having the old style LGA775 mounting holes, letting you use old style coolers on the newer CPUs. This is all well and good, but since the holes are only labelled properly on one side, attempting to line up the backing plate with the right holes proved to be somewhat of a nightmare, especially considering that when it did line up it was at a rather odd angle. Still it mounted and sat flush with the motherboard so there were no issues there.

The next challenge was getting all the hard drives in. Taking off the front of my case to do a dry fit of the drive bay extension showed that there was a shelf right smack bang in the middle of the 4 bays. No problem, I thought, it looked to just be screwed in; however closer inspection showed that the screws at the front could only be accessed with a right angle screwdriver, since the holes needed for a regular driver hadn't been drilled. After several goes with a driver bit and a pair of pliers I gave up and got the drill out, leaving aluminium shavings all over the place and the shelf removed. Thankfully the drive bay extender mounted with no complaints at all after that.

Next came the fun part: figuring out where the hell the watercooling radiator would go. Initially I had planned to put it at the front of the case but the hosing was just a bit too short. I hadn't bought any fan adapters either, so mounting it on the back would've been a half arsed effort with cable ties and screws in the wrong places. After fooling around for a while I found that it actually fit quite snugly under the floppy drive bays, enough so that it barely moved when I shook the case. This gave me the extra length to reach the CPU whilst still being pretty much at the front of the case, although it also meant I could only attach one of the fans since part of the radiator was mere millimeters away from the end of the graphics card.

With everything put together and wired up it was the moment of truth: I took a deep breath and pressed the power button. After a tense couple of milliseconds (it seemed like forever) the computer whirred into life and I was greeted with the BIOS screen. Checking around in the BIOS revealed that it couldn't see the 4 drives I had attached to the external SATA 6Gbps controller, so I quickly booted into the Windows installer to make sure they were all there. They did in fact come up, and after a furious 2 hours of prodding around I found that the external controller didn't support RAID at all; only the slower ports did. This was extremely disappointing as it was pretty much the reason I got this particular board, but figuring that the drives couldn't saturate the old SATA ports anyway I hooked them up and was on my merry way, with the Windows install being over in less than 10 minutes.

I've been putting the rig through its paces over the past week and I must say the biggest improvement in performance comes solely from the SSD. The longest part of the boot process is the motherboard initializing its 3 different controllers, with Windows loading in under 30 seconds and being usable instantly after logging in. I no longer have to wait for things to load; every program opens pretty much instantaneously. The RAID array is none too shabby either, with most games loading in a fraction of the time they used to.

Sadly, with all games being optimized for consoles these days, the actual performance improvement in nearly every game I've thrown at it has been very minimal. Still, Crysis 2 with all the settings at maximum looks incredibly gorgeous and I can't seem to make it chug even on the biggest multiplayer maps. The new mouse I bought (Logitech G700) is quite an amazing bit of kit too and the TRON keyboard my wife got me for my birthday just adds to the feeling that I'm using a computer from the future. Overall I'm immensely satisfied with it and I'm sure it'll prove its worth once I throw a few more programs at it.

Speaking of which, I can’t wait to code on that beasty.

 

OCZ Vertex 3: Don’t Play With My Heart (Or The SSD Conundrum).

My main PC at home is starting to get a little long in the tooth, having been ordered back in the middle of 2008 and only having received a graphics card and hard drive upgrade since then. Like all PCs I've had, it suffered a myriad of problems that I usually just put up with until I stumbled across a workaround, but I think the vast majority of them can be traced to a faulty motherboard (it can't take more than 4GB of RAM or it won't POST) and a batch of faulty hard drives (which would randomly park their heads, causing the machine to freeze). At the time I had the wonderful idea of buying the absolute latest so I could upgrade cheaply for the next few years but, thanks to the consolization of games, I found that wasn't really necessary.

To be honest it's not even really necessary now either, with all the latest games still running at full resolution and most at high settings to boot. I am starting to lag on the technology front however, with my graphics card not supporting DirectX 11 and everything but the RAM being 2 generations behind (yes, I have a Core 2 Duo). So I took it upon myself to build a rig that combined the best performance available today rather than trying to focus on future compatibility. Luckily for me it looks like those two are coinciding.

Because, like any good geek, I love talking shop when it comes to building new PCs, here are the specs of the potential beast in the making:

  • Intel Core i7 2600K
  • Asrock P67 Motherboard
  • Corsair Vengeance 1600MHz DDR3 16GB
  • Radeon HD6950
  • 4 x 1TB Seagate HDD in RAID 10
  • OCZ Vertex 3 120GB

The first couple of choices I made for this rig were easy. Hands down the best performance out there is with the new Sandy Bridge i7 chips, with the 2600K being the top of the lot thanks to its unlocked multiplier and hyperthreading, which chips below the 2600 lack. The choice of graphics card was a little harder as, whilst the Radeon comes out leagues ahead on a price to performance ratio, the NVIDIA cards still had a slight performance lead overall, but hardly enough to justify the price. Knowing that I wanted to take advantage of the new SATA 6Gbps range of drives that were coming out, my motherboard choice was almost made for me as the Asrock P67 seems to be one of the few that has more than 4 of the ports available (it has 6, in fact).

The choice of SSD however, whilst extremely easy at the time, became more complicated recently.

You see, back in the initial pre-production review round the OCZ Vertex 3 came out shooting, blasting away all the competition in a seemingly unfair comparison to its predecessors. I was instantly sold, especially considering the price was looking to be quite reasonable, around the $300 mark for a 120GB drive. Sure I could opt for the bigger drive and dump my most frequently played games on it, but in reality a RAID10 array of SATA 6Gbps drives should be close enough without having to overspend on the SSD. As with any pre-production review I made sure to keep my ear to the ground in case something changed once they started churning them out.

Of course, something did.

The first production review that grabbed my attention was from AnandTech, renowned for their deep understanding of SSDs and for producing honest and accurate reviews. The results for my drive size of choice, the 120GB, were decidedly mixed on a few levels, with it falling down in several places where the 240GB version didn't suffer any such problems. Another review confirmed the figures were in the right ballpark, although it unfortunately lacked a comparison to the 240GB version. The reason behind the performance discrepancy is simple: whilst they're functionally the same drive, the 240GB version has double the number of NAND chips of the 120GB version, which allows for higher throughput and additionally grants the drive a larger scratch space that it can use to optimize its performance¹.
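A toy model of why the chip count matters: an SSD controller spreads writes across its flash dies in parallel, so with the same per-die speed more dies means more aggregate throughput, up to the limit of what the controller can keep busy. The numbers below are illustrative assumptions, not SandForce or OCZ specifications:

```python
def max_write_throughput_mb_s(n_dies, per_die_mb_s=40, controller_channels=8,
                              interleave_per_channel=2):
    """Aggregate write speed scales with the number of dies the controller
    can keep busy at once. All figures here are illustrative only."""
    busy_dies = min(n_dies, controller_channels * interleave_per_channel)
    return busy_dies * per_die_mb_s

print(max_write_throughput_mb_s(n_dies=8))   # fewer dies: 320 MB/s in this model
print(max_write_throughput_mb_s(n_dies=16))  # double the dies: 640 MB/s
```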

So of course I started to rethink my position. The main reason for getting a regular SSD over something like the PCIe bound RevoDrive was that I could use it down the line as a jumbo flash drive if I wanted to, and I wouldn't have to sacrifice one of my PCIe slots to use it. The obvious competitor to the OCZ Vertex 3 would be something like the Intel 510 SSD, but the reviews haven't been very kind to that device, putting it barely in competition with previous generation drives.

After considering all my options I think I'll still end up going with the OCZ Vertex 3 at the 120GB size. Whilst it might not offer the same kind of performance in every category it does provide tremendous value when compared to a lot of other SSDs, and it will be in another league compared to my current spinning rust hard drive. Once I get around to putting this new rig together you can rest assured I'll put the whole thing through its paces, if at the very least to see how the OCZ Vertex 3 stacks up against the numbers that have already been presented.

¹Ever wondered why some SSDs are odd sizes? They are in fact good old fashioned binary sizes (128GB and 256GB respectively) however the drive reserves a portion of that (8GB and 16GB) to use as scratch space to write and optimize data before committing it. Some drives also use it as a buffer for when flash cells become unwritable (flash cells don’t usually die, you just can’t write to them anymore) so that the drive’s capacity doesn’t degrade.
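Running the footnote's numbers, as a quick sketch:

```python
# The odd advertised sizes are just raw binary capacity minus the
# reserved scratch space, per the footnote above.
for raw_gb, reserved_gb in ((128, 8), (256, 16)):
    usable_gb = raw_gb - reserved_gb
    reserve_pct = reserved_gb / raw_gb * 100
    print(f"{raw_gb}GB raw - {reserved_gb}GB reserved = "
          f"{usable_gb}GB advertised (~{reserve_pct:.2f}% set aside)")
# 128GB raw - 8GB reserved = 120GB advertised (~6.25% set aside)
# 256GB raw - 16GB reserved = 240GB advertised (~6.25% set aside)
```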

Solid State Drives, Not Just All Talk.

Last year Intel made headlines by releasing the X25-E, an amazing piece of hardware that showed everyone it was possible to get a large amount of flash and use it as a main disk drive without having to spend thousands of dollars on custom hardware. Even though the price tag was outside most enthusiasts' price ranges it still came out as the piece of hardware that everyone wanted and dreamed about.

Fast forward a year and several other players have entered the SSD market space. Competition is always a good thing as it leads to companies fighting it out by offering products at varying price points in order to entice people into the market. However, although there appeared to be competition on the surface, a deeper look showed that most of the other drives, Samsung's and Intel's excepted, shared the same controller (JMicron's JMF602B, paired with MLC flash). Unfortunately these drives focused on sequential throughput (transferring big files and the like) at the cost of random write performance. This in turn made any operating system installed on them appear to freeze for seconds at a time, since an operating system is constantly writing small things to disk in the background.

However, thanks to a recent AnandTech review, one company has stepped up to the plate and addressed these issues, giving a low cost option (circa $400 for a 60GB drive, as opposed to Intel's $900 for 32GB) to people wanting to try SSDs without putting up with a freezing computer. One of my tech friends just informed me that a recent firmware update for the drive saw improvements of up to 3-4 times over the original, an amazing improvement by any metric.

So are these things worth the money? Pretty much everyone I've talked to believes they are. These things really aren't meant to be your main storage drive, and once the paradigm of slow disks shifts I believe you'll see many more systems built around a tiered storage arrangement: have your OS and favourite applications on the SSD and keep your giant lumbering magnetic disks trundling along in the background holding all your photos, music and the like. There's always been a strong disconnect between the blistering fast memory of your computer and the slow crawl of the hard disk, and it would seem that SSDs will bridge that gap, making the modern PC a much more usable device.

I am fortunate enough to be working with some of the latest gear from HP, which includes solid state drives (for work, of course! :)). For the hardware geeks out there: we've just taken delivery of 2 HP C7000 blade chassis, 4 BL495c FLEX10 blades with 32GB of memory and dual 32GB SSDs (Samsung SLC drives), and all the bits and bobs needed to hook this up as our new VMware environment. It is a pity that they won't let me put it all together myself (how dare they tempt a geek with a myriad of boxes of components!) but I can understand my boss's requirement to have someone else do it, just so we can blame them should anything go wrong.

So we’ve seen what SSDs can do for the consumer market, I’ll let you know how they go in the corporate world :)