The popular interpretation of Moore's Law is that computing power, namely of the CPU, doubles every two years or so. This is then extended to pretty much all aspects of computing such as storage, network transfer speeds and so on. Whilst this interpretation has held up reasonably well in the 40+ years since the law was coined it's not completely accurate, as Moore was actually referring to the number of components that could be integrated into a single package at a minimum cost. Thus the real driver behind Moore's Law isn't performance, per se, it's the cost at which we can provide said integrated package. Staying on track with this law hasn't been easy, but innovations like Intel's new 14nm process are what have kept us there.
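For a sense of what the popular "doubles every two years" reading implies, here's a quick sketch (the starting component count and time span are purely illustrative):

```python
# Toy model of the popular reading of Moore's Law: a quantity that
# doubles every two years. The starting count is made up for illustration.
def project_count(initial_count: int, years: float, doubling_period: float = 2.0) -> int:
    """Project a component count forward assuming a fixed doubling period."""
    return int(initial_count * 2 ** (years / doubling_period))

# Five doublings over a decade works out to a 32x increase.
print(project_count(1_000_000, 10))  # 32000000
```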
CPUs are created through a process called photolithography, whereby a substrate, typically a silicon wafer, has the transistors etched onto it through a process not unlike developing a photo. The defining characteristic of this process is the minimum size of a feature that it can etch on the wafer, usually expressed in nanometers. It was long thought that 22nm would be the limit for semiconductor manufacturing as the process was approaching the physical limitations of the substrates used. However Intel, and many other semiconductor manufacturers, have been developing processes that push past this and today Intel has released in-depth information regarding their new 14nm process.
The improvements in the process are pretty much what you'd come to expect from a node improvement of this nature. A reduction in node size typically means that a CPU can be made with more transistors that performs better and uses less power than a similar CPU built on a larger sized node. This is most certainly the case with Intel's new 14nm fabrication process and, interestingly enough, they appear to be ahead of the curve, so to speak, with the improvements in this process being slightly ahead of the trend. However the most important factor, at least with respect to Moore's Law, is that they've managed to keep reducing the cost per transistor.
One of the biggest cost drivers for CPUs is what's called the yield of the wafer. Each of these wafers costs a certain amount of money and, depending on how big and complex your CPU is, you can only fit a certain number of them on there. However not all of those CPUs will turn out to be viable and the percentage of usable CPUs is what's known as the wafer yield. Moving to a new node size typically means that your yield takes a dive, which drives up the cost of the CPU significantly. The recently released documents from Intel reveal however that the yield for the 14nm process is rapidly approaching that of the 22nm process, which is considered to be Intel's best yielding process to date. This, plus the increased transistor density that's possible with the new manufacturing process, is what has led to the price per transistor dropping, giving Moore's Law a little more breathing room for the next couple of years.
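As a back-of-the-envelope sketch of why yield matters so much, you can spread a fixed wafer cost over only the dies that actually work (all the figures below are illustrative, not Intel's actual numbers):

```python
# Illustrative sketch: the wafer costs the same regardless of yield,
# so every dud die inflates the cost of the ones that work.
def cost_per_good_die(wafer_cost: float, dies_per_wafer: int, yield_rate: float) -> float:
    """Spread the fixed wafer cost over only the usable dies."""
    good_dies = dies_per_wafer * yield_rate
    return wafer_cost / good_dies

# A mature process vs a freshly introduced node (hypothetical numbers):
print(round(cost_per_good_die(5000, 200, 0.90), 2))  # 27.78
print(round(cost_per_good_die(5000, 200, 0.50), 2))  # 50.0
```

Same wafer, same die count: dropping from 90% to 50% yield nearly doubles the cost of each sellable chip, which is why closing the yield gap to 22nm matters so much for the price per transistor.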
This 14nm process is what will be powering Intel's new Broadwell set of chips, the first of which is due out later this year. Migrating to this new manufacturing process hasn't been without its difficulties, which is why Intel is releasing only a subset of the Broadwell chips this year, with the rest to come in 2015. Until we get our hands on some of the actual chips there's no telling just how much of an improvement these will be over their Haswell predecessors, but the die shrink alone should see some significant improvements. With the yields fast approaching those of its predecessor they'll hopefully be quite reasonably priced too, for a new technology at least.
It just goes to show that Moore's Law is proving to be far more robust than anyone could have predicted. Exponential growth functions like that are notoriously unsustainable, yet it seems every time we come up against another wall that threatens to kill the law off, another innovative way to deal with it comes around. Intel has long been at the forefront of keeping Moore's Law alive and it seems like they'll continue to be its patron saint for a long time to come.
Rewind back a couple of years and the idea of wearable computing was something reserved for the realms of the ultra-geek and science fiction. Primarily this was a function of the amount of computing power and battery capacity we could stuff into a gadget that anyone would be willing to wear, as anything that could be deemed useful was far too bulky to be anything but a concept. Today the idea is far more mainstream with devices like Google Glass and innumerable smart watches flooding the market, but that seems to be as far as wearable technology goes for now. Should Intel have its way though this could be set to change rapidly with the announcement of the Intel Edison, an x86 processor that comes in a familiar (and very small) package.
It's an x86 processor the size of an SD card and included in that package is a 400MHz processor (for the sake of argument I'm assuming it's the same SoC that powers Intel's Galileo platform, just a 22nm version), WiFi and low power Bluetooth. It can run a standard version of Linux and, weirdly enough, even has its own little app store. Should it retain its Galileo roots it will also be Arduino compatible whilst also gaining the capability to run the new Wolfram programming language. Needless to say it's a pretty powerful little package and the standard form factor should make it easy to integrate into a lot of products.
By itself the Edison doesn't suddenly make all wearable computing ideas feasible, indeed the progress made in this sector in the last year is a testament to that; instead it's more of an evolutionary step that should help to jump start the next generation of wearable devices. We've been able to go far with devices that have a tenth of the computing power of the Edison, so it will be interesting to see what kinds of applications are made possible by the additional grunt it provides. Indeed Intel believes strongly in the idea that Edison will be the core of future wearable devices and has set up the Make It Wearable challenge, with over $1 million in prizes, in order to spur product designers on.
It will be interesting to see how the Edison stacks up against the current low power giant ARM, as they have a bevy of devices already available that would be comparable to the Edison. Indeed it seems that Edison is meant to be a shot across ARM's bow as it's one of the few devices that Intel will allow third parties to license, much in the same way as ARM does today. There's no question that Intel has been losing out hard in this space, so the idea of marketing the Edison towards the wearable computing sector is likely a canny play to carve out a good chunk of that market before ARM cements itself in it (like it did with smart phones).
One thing is for certain though: the amount of computing power available in such small packages is on the rise, enabling us to integrate technology into more and more places. These are the first tenuous steps towards creating an Internet of Things where seamless and unbounded communication is possible between almost any device. The results of Intel's Make It Wearable competition will be a good indication of where this market is heading and what we, the consumers, can expect to see in the coming years.
In the general computing game you'd be forgiven for thinking there are two rivals locked in a contest for dominance. Sure there are two major players, Intel and AMD, and whilst they are direct competitors there's no denying the fact that Intel is the Goliath to AMD's David, trouncing them in almost every way possible. Of course if you're looking to build a budget PC you really can't go past AMD's processors as they provide an incredible amount of value for the asking price, but there's no denying that Intel has been the reigning performance and market champion for the better part of a decade now. However the next generation of consoles has proved to be something of a coup for AMD and it could be the beginning of a new era for the beleaguered chip company.
Both of the next generation consoles, the PlayStation 4 and Xbox One, utilize an almost identical AMD Jaguar chip under the hood. The reasons for choosing it seem to align with Sony's previous architectural idea for Cell (i.e. having lots of cores working in parallel rather than fewer working faster), and AMD is the king of cramming more cores into a single consumer chip. The reasons for going with AMD over Intel likely also stem from the fact that Intel isn't too crazy about doing custom hardware, and the requirements that Sony and Microsoft had for their own versions of Jaguar simply could not be accommodated. Considering how big the console market is this would seem like something of a misstep by Intel, especially judging by the PlayStation 4's day one sales figures.
In case you hadn't heard, the PlayStation 4 managed to move an incredible 1 million consoles on its first day of launch, and that was limited to the USA. The Nintendo Wii by comparison took about a week to move 400,000 consoles, and it even had a global launch window to beef up the sales. Whether the trend will continue, considering that the Xbox One was only released yesterday, is something we'll have to wait and see, but regardless every one of those consoles contains an AMD CPU, and AMD is walking away with a healthy chunk of change from each one.
To put it in perspective, out of every PlayStation 4 sale (and by extension every Xbox One as well) AMD is taking away a healthy $100, which means that in that one day of sales AMD generated some $100 million for itself. For a company whose annual revenue is around the $1.5 billion mark this is a huge deal, and if the Xbox One launch is even half that AMD could have seen $150 million in the space of a week. If the previous console generation is anything to go by (roughly 160 million consoles between Sony and Microsoft) AMD is looking at a revenue stream of some $16 billion over the next 8 years, or roughly $2 billion a year. Whilst it's still a far cry from the kinds of revenue that Intel sees it's a huge win for AMD and something they will hopefully be able to use to leverage themselves in other markets.
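The back-of-the-envelope maths behind that projection looks something like this (the $100 per-console take and 160 million console count are estimates, not figures confirmed by AMD, Sony or Microsoft):

```python
# Rough revenue projection from the assumptions above; none of these
# numbers are confirmed, they're purely illustrative estimates.
take_per_console = 100               # estimated AMD take per console sold ($)
projected_consoles = 160_000_000     # roughly last generation's combined sales
generation_years = 8

total_revenue = take_per_console * projected_consoles  # $16 billion over the generation
per_year = total_revenue / generation_years            # roughly $2 billion a year
print(total_revenue, per_year)
```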
Whilst I may have handed in my AMD fanboy badge after many deliriously happy years with my watercooled XP1800+ I still think they’re a brilliant chip company and their inclusion in both next generation consoles shows that the industry giants think the same way. The console market might not be as big as the consumer desktop space nor as lucrative as the high end server market but getting their chips onto both sides of the war is a major coup for them. Hopefully this will give AMD the push they need to start muscling in on Intel’s turf again as whilst I love their chips I love robust competition between giants a lot more.
I've worked with a lot of different hardware in my life, from the old days of tinkering with my Intel 80286 through to esoteric Linux systems running on DEC tin, until I, like everyone else in the industry, settled on x86-64 as the de facto standard. Among the various platforms I was happy to avoid (including such lovely things as Sun SPARC) was Intel's Itanium range, as its architecture was so foreign from anything else that whatever you were trying to do, outside of building software specifically for that platform, was doomed to failure. The only time I ever came close to seeing it deployed was on the whim of a purchasing manager who needed guaranteed 100% uptime, until they realised the size of the cheque they'd need to sign to get it.
If Intel's original dream was to be believed then this post would be coming to you care of their processors. You see back when Itanium was first developed everything was still stuck in the world of 32bit and the path forward wasn't looking particularly bright. Itanium was meant to be the answer to this: with Intel's brand name and global presence behind it we would hopefully see all applications make their migration to the latest and greatest 64bit platform. However the complete lack of backwards compatibility with existing software meant adopting it was a troublesome exercise, which was a death knell for any kind of consumer adoption. Seeing this AMD swooped in with their backwards compatible x86-64 architecture, which proceeded to spread to all the places that Itanium couldn't, forcing Intel to adopt the standard in their consumer line of hardware.
Itanium refused to die however, finding a home in the niche high end market due to its redundancy features and solid performance for optimized applications. Even so, the number of vendors supporting the platform dwindled from already low numbers, eventually leaving HP as the only real supplier of Itanium hardware in the form of their NonStop server line. It wasn't a bad racket for them to keep up though: the total Itanium market was something on the order of $4 billion a year on only 55,000 servers shipped annually, so you can see how much of a premium they attract. Still all the IT workers of the world have long wondered when Itanium would finally bite the dust, and it seems that day is about to come.
HP has just announced that it will be transitioning its NonStop server range from Itanium to x86, effectively putting an end to the only sales channel that Intel had for the platform. What will replace it is still up in the air but it's safe to assume it will be another Intel chip, likely one from their Xeon line, which shares many of the features the Itanium had without the incompatible architecture. Current Itanium hardware is likely to stick around for an almost indefinite amount of time however, due to the places it has managed to find itself in, much to the dismay of system administrators everywhere.
In terms of accomplishing its original vision Itanium was an unabashed failure, never finding the consumer adoption it so desired and never becoming the herald of 64bit architecture. Commercially though it was something of a success thanks to the features that made it attractive to the high end market, but even then it accounted for only a small fraction of total worldwide server sales, barely enough to make it a viable platform for anything but wholly custom solutions. The writing was on the wall when Microsoft said that Windows Server 2008 would be the last version to support it, and now with HP bowing out the death clock for Itanium has begun ticking in earnest, even if the final death knell won't come for the better part of a decade.
One thing that not many people knew was that I was pretty keen on the whole Google TV idea when it was announced 2 years ago. I think that was partly due to the fact that it was a collaboration between several companies that I admire (Sony, Logitech and, one I didn't know about at the time, Intel) and also because of what it promised to deliver to end users. I was a fairly staunch supporter of it, to the point where I remember arguing with my friends that consumers simply weren't ready for something like it, rather than it being a failed product. In all honesty I can't really support that position any more and the idea of Google TV seems to be dead in the water for the foreseeable future.
What I didn't know was that whilst Google, Sony and Logitech might have put the idea to one side, Intel has been working on developing their own product along similar lines, albeit from a different angle than you'd expect. Whilst I can't imagine that they had invested that much in developing the hardware for the TVs (a quick Google search reveals that they were Intel Atoms, something they had been developing for 2 years prior to Google TV's release) it appears that they're still seeking some returns on that initial investment. At the same time however reports are coming in that Intel is dropping anywhere from $100 million to $1 billion on developing this new product, a serious amount of coin that industry analysts believe is an order of magnitude above anyone else playing around in this space currently.
The difference between this and other Internet set top boxes appears to be the content deals that Intel is looking to strike with current cable TV providers. Now anyone who's ever looked into getting any kind of pay TV package knows that whatever you sign up for you're going to get a whole bunch of channels you don't want bundled in alongside the ones you do, effectively diluting the value you derive from the service significantly. Pay TV providers have long fought against the idea of allowing people to pick and choose (and indeed anyone who attempted to provide such a service didn't appear to last long, à la SelecTV Australia) but with the success of on demand services like Netflix and Hulu it's quite possible that they might be coming around to the idea and see Intel as the vector of choice.
The feature list that's been thrown around the press prior to an anticipated announcement at CES next week (which may or may not happen, depending on who you believe) does sound rather impressive, essentially giving you the on demand access that everyone wants right alongside the traditional programming that we've come to expect from pay TV services. The "Cloud DVR" idea, being able to replay/rewind/fast-forward shows without having to record them yourself, is evidence of this, and the idea of providing the traditional channels as well seems to be a clever ploy to get the content onto their network. Of course traditional programming is still required for certain things like sports and other live events, something which the on demand services have yet to fully incorporate into their offerings.
Whilst I'm not entirely enthused with the idea of yet another set top box (I'm already running low on HDMI ports as it is) the information I've been able to dig up on Intel's offering does sound pretty compelling. Of course many of the features aren't exactly new, you can do many of these things now with the right piece of hardware and pay TV subscriptions, but the ability to pick and choose channels would be, and the Hulu-esque interface for watching previous episodes would be something that would interest me. If the price point is right, and it's available globally rather than just in the USA, I could see myself trying it out for the select few channels that I'd like to see (along with their giant back catalogues, of course).
In any case it will be very interesting to see if Intel says anything about their upcoming offering next week: if they do we'll have information direct from the source, and if they don't we'll have a good indication of which analysts really are talking to people involved in the project.
I've been using Windows 8 as my main system for the better part of 2 months now and, whilst I'll leave the in-depth impressions for the proper review, I have to say I'm pretty happy with it. Sure I wasn't particularly happy with the way things were laid out initially, but for the most part if you just blunder along like it's Windows 7 you're not going to struggle with it for very long. I might not use any of the modern styled applications, they don't feel particularly well suited to the mouse/keyboard scenario if I'm honest, but everything else about it works as expected. Of course whilst Microsoft has already sold 40 million licenses of Windows 8 most people are focusing on Windows RT, care of the Surface tablet.
For the technically inclined the differences between the two are pretty stark and we've known for a long time that the Surface is essentially Microsoft's answer to the iPad. The lines are a little more blurry between Surface/RT and the full version of Windows 8, thanks to the Modern Styled UI being shared between them, but the lack of a desktop made it pretty clear where the delineation lay. It seems however that there's a feeling among some of the bigger media outlets that Windows 8 is suffering from an identity crisis of sorts, which has been perplexing me all morning:
What we’re seeing, I think, is Microsoft dancing around an uncomfortable reality: Windows RT just doesn’t have much to offer, so it’s hard to explain how it’s different from Windows 8 without making it look inferior.
The only distinct advantage for Windows RT is its support for “connected standby,” a power-saving mode that lets the device keep an eye on e-mail and other apps while it’s not in use. It’s a nice feature to have, but on its own it’s a tough sell compared to Windows 8's wider software support. (UPDATE: As Eddie Yasi points out in the comments, the Atom-based chips that Windows 8 tablets are using, codenamed Clover Trail, support connected standby as well.)
The main thrust of the article, and another one it linked to, is that there's been no real information from Microsoft about the differences between the fully fledged version of Windows 8 and its RT cousin. I'll be fair to the article and not use anything past its publication date, but for anyone so inclined I wrote about the differences between the two platforms well over a year ago and I was kind of late to the party on it too. Indeed the vast majority of the tech press surrounding the Surface release understood these differences quite clearly and it appears that both Time and The Verge were being wilfully ignorant simply to get a story.
Granted, The Verge has something of a point that the retail representatives didn't know the product, but then again why were you asking in-depth technical questions of a low wage retail worker? Most people who are looking for a Surface/iPad like device aren't going to want to know if their legacy applications will run on it because, to them, they're not the same thing. You could argue that the customer might have seen the Modern UI at home and then assumed that the Surface was exactly the same, but I'd struggle to find someone who had installed Windows 8 this early in the piece and wasn't aware that the Surface was a completely different beast.
Indeed the quoted paragraphs above imply that Jared Newman (writer of the Time article) isn't aware that the WinRT framework, the thing that powers the Modern UI, is the glue that will join all of Microsoft's ecosystem together. Not only does it underpin Windows 8 but it's also the framework for Windows Phone 8 and (I am speculating here but the writing is on the wall) the upcoming Xbox. What Windows RT devices offer is the same experience you'll be able to get anywhere else in Microsoft's ecosystem, but on low power devices. Newman makes the point that they could very well run on Atom processors, however anyone who's actually used one can tell you that their performance is not up to scratch with the i3/5/7 line and is barely usable for desktop applications. They're comparable in the low power space, meaning they would have made a decent replacement for ARM, but considering that 95% of the world's portable devices run on ARM it makes much more sense to go with the dominant platform rather than using something that's guaranteed to give a sub-par experience.
I don't like doing these kinds of take down posts, it usually feels like I'm shouting at a brick wall, but when there's a fundamental lack of understanding or wilful ignorance of the facts I feel compelled to say something. The Windows 8/RT distinctions are clear and, should you do even a small amount of research, the motives behind them are also obvious. Thankfully most of the tech press was immune to this (although TechCrunch got swept up in it as well, tsk tsk) so there were only a few bad apples that needed cleaning up.
The ability to swap components around has been an expected feature for PC enthusiasts ever since I can remember. Indeed the use of integrated components was traditionally frowned upon as they were typically of lower quality and should they fail you were simply left without that functionality with no recourse but to buy a new motherboard. Over time however the quality of integrated components has increased significantly and many PC builders, myself included, now forego the cost of additional add-in cards in favour of their integrated brethren. There are still some notable exceptions to this rule however, like graphics cards for instance, and there were certain components that most of us never thought would end up as being an integrated component, like the CPU.
Turns out we could be dead wrong about that.
Now it's not like fully integrated computers are a new thing, in fact this blog post is coming to you via a PC that has essentially zero replaceable/upgradable parts, commonly referred to as a laptop. Apple has famously taken this level of integration to its logical extreme in order to create its relatively high powered line of laptops with slim form factors, and many other companies have since followed suit due to the success Apple's laptop line has had. Still laptops were a relatively small market compared to the other big CPU consumers of the world (namely desktops and servers), both of which have resisted the integrated approach mostly because it didn't provide any direct benefits like it did for laptops. That may change if the rumours about Intel's next generation chip, Haswell, turn out to be true.
Reports are emerging that Haswell won't be available in a Land Grid Array (LGA) package and will only be sold in the Ball Grid Array (BGA) form factor. For the uninitiated, the main difference between the two is that the former is the current standard, which allows processors to be swapped on a whim. BGA on the other hand is the package used when an integrated circuit is to be permanently attached to its circuit board, as the "ball grid" is in fact an array of solder blobs used to attach it. Not providing an LGA package essentially means the end of any kind of user-replaceable CPU, something which has been a staple of the enthusiast PC community ever since its inception. It also means a big shake up of the OEM industry, which now has to make decisions about what kinds of motherboards to make, as the current wide range of choice can't really be supported with the CPUs being integrated.
My initial reaction to this was one of confusion as this would signify a really big change from how the PC business has been running for the past 3 decades. This isn't to say that change isn't welcome, indeed the integration of rudimentary components like the sound card and NIC were very much welcome additions (after their quality improved), however making the CPU integrated essentially puts the kibosh on the high level of configurability that we PC builders have enjoyed for such a long time. This might not sound like a big deal, but for things like servers and fleet desktop PCs that customizability also means that the components are interchangeable, making maintenance far easier and cheaper. Upgradeability is another reason, however I don't believe that's as big a factor as some would make it out to be, especially with how often socket sizes have changed over the past 5 years or so.
What's got most enthusiasts worried about this move is the siloing of particular feature sets to certain CPU designations. To put it in perspective there are typically three product ranges for any CPU family: the budget range (typically lower power, less performance but dirt cheap), the mid range (aimed at budget conscious enthusiasts and fleet units) and the high end performance tier (almost exclusively for enthusiasts and high performance computing situations). If these CPUs are tied to the motherboard it's highly likely that some feature sets will be reserved for certain ranges of CPUs. Since there are many applications where a low power PC can take advantage of high end features (like oodles of SATA ports for instance) and vice versa this is a valid concern, and one that I haven't been able to find any good answers to. There is the possibility of OEMs producing CPU daughter boards like the slotkets of old, however without an agreed upon standard you'd be effectively locking yourself into that vendor, something which not everyone is comfortable doing.
Still, until I see more information it's hard for me to make up my mind on where I stand. There's a lot of potential for this to go very, very wrong, which could see Intel on the wrong side of a community that's been dedicated to it for the better part of 30 years. Enthusiasts are arguably in the minority however, and it's very possible that Intel is getting increasing numbers of orders that require BGA style chips, especially where their Atoms can't cut it. I'm not sure what they could do to win me over, but I get the feeling that, just like the other integrated components I used to despise, there may come a time when I become indifferent to it and those zero insertion force sockets of old will be a distant memory, a relic of PC computing's past.
My main PC at home is starting to get a little long in the tooth, having been ordered back in the middle of 2008 and only receiving upgrades of a graphics card and a hard drive since then. Like all PCs I've had it suffered a myriad of problems that I usually just put up with until I stumbled across a workaround, but I think the vast majority of them can be traced to a faulty motherboard (it won't POST with more than 4GB of RAM) and a batch of faulty hard drives (which would randomly park their heads, causing the system to freeze). At the time I had the wonderful idea of buying the absolute latest so I could upgrade cheaply for the next few years, but thanks to the consolization of games I found that wasn't really necessary.
To be honest it’s not even really necessary now either, with all the latest games still running at full resolution and most at high settings to boot. I am starting to lag on the technology front however with my graphics card not supporting DirectX 11 and everything but the RAM being 2 generations behind (yes, I have a Core 2 Duo). So I took it upon myself to build a rig that combined the best performance available of the day rather than trying to focus on future compatibility. Luckily for me it looks like those two are coinciding.
Like any good geek I love talking shop when it comes to building new PCs, so here's how I settled on the specs of the potential beast in the making.
The first couple of choices I made for this rig were easy. Hands down the best performance out there is with the new Sandy Bridge i7 chips, with the 2600K being the top of the lot thanks to its unlocked multiplier and hyperthreading, which chips below the 2600 lack. The choice of graphics card was a little harder as whilst the Radeon comes out leagues ahead on a price to performance ratio the NVIDIA cards still had a slight performance lead overall, but hardly enough to justify the price. Knowing that I wanted to take advantage of the new SATA 6Gbps range of drives that were coming out, my motherboard choice was almost made for me as the ASRock P67 seems to be one of the few that has more than 4 of those ports available (it has 6, in fact).
The choice of SSD however, whilst extremely easy at the time, became more complicated recently.
You see back in the initial pre-production review round the OCZ Vertex 3 came out shooting, blasting away all the competition in a seemingly unfair comparison to its predecessors. I was instantly sold especially considering the price was looking to be quite reasonable, around the $300 mark for a 120GB drive. Sure I could opt for the bigger drive and dump my most frequently played games on it but in reality a RAID10 array of SATA 6Gbps drives should be close enough without having to overspend on the SSD. Like any pre-production reviews I made sure to keep my ear to the ground just in case something changed once they started churning them out.
Of course, something did.
The first production review that grabbed my attention was from AnandTech, renowned for their deep understanding of SSDs and for producing honest and accurate reviews. The results for my drive size of choice, the 120GB, were decidedly mixed on a few levels, with it falling down in several places where the 240GB version didn't suffer any such problems. Another review confirmed the figures were in the right ballpark, although it unfortunately lacked a comparison to the 240GB version. The reason behind the performance discrepancy is simple: whilst the two are functionally the same drive, the difference comes from the number of NAND chips used to build them. The 240GB version has double the number of the 120GB version, which allows for higher throughput and additionally grants the drive a larger scratch space that it can use to optimize its performance¹.
So of course I started to rethink my position. The main reason for getting a real SSD over something like the PCIe bound RevoDrive was that I could use it down the line as a jumbo flash drive if I wanted to and I wouldn’t have to sacrifice one of my PCIe lanes to use it. The obvious competitor to the OCZ Vertex 3 would be something like the Intel 510 SSD but the reviews haven’t been very kind to this device, putting it barely in competition with previous generation devices.
After considering all my options I think I'll still end up going with the OCZ Vertex 3 at the 120GB size. Whilst it might not top the charts in every category it does provide tremendous value when compared to a lot of other SSDs, and it will be in another league compared to my current spinning rust hard drive. Once I get around to putting this new rig together you can rest assured I'll put the whole thing through its paces, if at the very least to see how the OCZ Vertex 3 stacks up against the numbers that have already been presented.
¹Ever wondered why some SSDs are odd sizes? They are in fact good old-fashioned binary sizes (128GB and 256GB respectively), however the drive reserves a portion of that (8GB and 16GB) to use as scratch space to write and optimize data before committing it. Some drives also use it as a buffer for when flash cells become unwritable (flash cells don't usually die, you just can't write to them anymore) so that the drive's capacity doesn't degrade.
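That reserved portion is commonly known as over-provisioning, and the ratio falls straight out of the sizes above:

```python
# Over-provisioning: the slice of raw NAND the drive's controller keeps
# back as scratch/spare area. Sizes mirror the footnote above.
def overprovisioned_fraction(raw_gb: int, advertised_gb: int) -> float:
    """Fraction of raw flash reserved by the controller."""
    return (raw_gb - advertised_gb) / raw_gb

print(overprovisioned_fraction(128, 120))  # 0.0625, about 6% held back
print(overprovisioned_fraction(256, 240))  # same ratio on the 240GB drive
```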