Posts Tagged 'intel'


The One Horse Race That is CPUs.

Roll back the clock a decade or so and the competition over which processor ended up in your PC was at fever pitch, with industry heavyweights Intel and AMD going blow for blow. The choice of CPU, at least for me and my enthusiast brethren, almost always came down to what was fastest, but the lines were often blurry enough that brand loyalty was worth more than a few FPS here or there. For the longest time I was an AMD fan, sticking stalwartly to their CPUs, which gave me the same amount of grunt as their Intel counterparts for a fraction of the cost. Over time, however, the gap between what an AMD CPU could provide and what Intel offered grew too wide to ignore, and it's only been getting wider since.


The rift shows up in adoption rates across all products that make use of modern CPUs, with Intel dominating nearly every sector you find them in. When Intel first retook the crown all those years ago the reasons were clear: Intel simply performed well enough to justify the cost. As time went on, however, it seemed like AMD was willing to let that gap continue to grow. Indeed, on a pure technology basis they're stuck about 2 generations behind where Intel is today, with the vast majority of their products being produced on a 28nm process whilst Intel's latest release comes out on 14nm. Whilst they pulled a major coup in winning over all 3 major consoles, that success hasn't had much flow-on to the rest of the business. Since they'll be producing the exact same chips for those consoles for the next 5+ years they can't really do much with them anyway, and I doubt they'd invest in a new foundry process unless Microsoft or Sony asked them nicely.

What this has translated into is a monopoly by default, one where Intel maintains its massive market share without having to worry about any upstarts rocking their boat. Thankfully the demands of the industry are pressure enough to keep them innovating at the rapid pace they set way back when AMD was still nipping at their heels, but there's a dangerously real chance they could end up doing the opposite. It's a little unfair to put the burden on AMD to keep Intel honest, however it's hard to think of another company with the required pedigree and experience to be the major competition to their platform.

The industry is looking towards ARM as the big competition for Intel's x86 platform although, honestly, they're really not in the same market. Sure, nearly every phone under the sun is now powered by some variant of the ARM architecture, however when it comes to consumer or enterprise compute you'd be struggling to find anything that runs on it. There's going to have to be an extremely compelling reason for everyone to transition to that platform and, as it stands right now, mobile and low power are the only places where it really fits. For ARM to really start eating Intel's lunch it'd need to make some serious inroads into the desktop and server spaces, something which I don't see happening for decades at least.

There is some light in the form of Kaveri, however its less-than-stellar performance compared to Intel's less tightly coupled solution leaves a lot to be desired. At a high level the architecture does feel like the future of all computing, excluding radical paradigm shifts like HP's The Machine (which is still vaporware at this point), but until it equals the performance of discrete components it's not going anywhere fast. I get the feeling that if AMD had kept up with Intel's die shrinks Kaveri would be looking a lot more attractive than it is currently, but who knows what it might have cost them to get to that stage.

In any other industry this kind of situation would be ripe for disruption, however the capital-intensive nature of chip making, plus an industry leader who isn't resting on their laurels, means there are few who can hold a candle to Intel. The net positive out of all of this is that we as consumers aren't suffering, however we've all seen what happens when a company remains at the top for too long. Hopefully the numerous sectors Intel is currently competing in will be enough to offset their monopolistic position in the CPU market, but that doesn't mean more competition in that space isn't welcome.


Intel Keeps Moore’s Law Alive With 14nm Fabrication.

The popular interpretation of Moore's Law is that computing power, namely that of the CPU, doubles every two years or so. This is then extended to pretty much all aspects of computing such as storage, network transfer speeds and so on. Whilst this interpretation has held up reasonably well in the 40+ years since the law was coined it's not completely accurate, as Moore was actually referring to the number of components that could be integrated into a single package for a minimum cost. The real driver behind Moore's Law isn't performance, per se, it's the cost at which we can provide said integrated package. Staying on track with this law hasn't been easy, but innovations like Intel's new 14nm process are what keep it alive.


CPUs are created through a process called photolithography, whereby a substrate, typically a silicon wafer, has the transistors etched onto it through a process not unlike developing a photo. The defining characteristic of this process is the minimum feature size it can etch on the wafer, usually expressed in nanometers. It was long thought that 22nm would be the limit for semiconductor manufacturing as the process was approaching the physical limitations of the substrates used. However Intel, and many other semiconductor manufacturers, have been developing processes that push past this, and today Intel has released in-depth information regarding their new 14nm process.

The improvements in the process are pretty much what you'd expect from a node change of this nature. A reduction in node size typically means a CPU can be made with more transistors, perform better and use less power than a similar CPU built on a larger node. This is most certainly the case with Intel's new 14nm fabrication process and, interestingly enough, they appear to be ahead of the curve, with the improvements in this process sitting slightly ahead of the trend. The most important factor, at least with respect to Moore's Law, is that they've managed to keep reducing the cost per transistor.
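To put a rough number on the density side of that: transistor density scales with the inverse square of the feature size, so on paper a move from 22nm to 14nm fits about 2.5 times as many transistors into the same area. A quick back-of-the-envelope sketch, illustrative only since real processes never scale quite this cleanly:

    # Back-of-the-envelope node scaling: density goes roughly with the
    # inverse square of the feature size. Real processes don't scale this
    # cleanly, so treat the result as illustrative only.
    def density_gain(old_nm, new_nm):
        return (old_nm / new_nm) ** 2

    print("22nm -> 14nm: ~{:.1f}x transistors per mm^2".format(density_gain(22, 14)))
    # prints: 22nm -> 14nm: ~2.5x transistors per mm^2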

One of the biggest cost drivers for CPUs is what's called the yield of the wafer. Each wafer costs a certain amount of money and, depending on how big and complex your CPU is, you can only fit a certain number of them on there. Not all of those CPUs will turn out to be viable, and the percentage of usable CPUs is what's known as the wafer yield. Moving to a new node size typically means that your yield takes a dive, which drives up the cost of each CPU significantly. The freshly un-embargoed documents from Intel reveal, however, that the yield for the 14nm process is rapidly approaching that of the 22nm process, which is considered to be Intel's best yielding process to date. This, plus the increased transistor density that's possible with the new manufacturing process, is what has led to the price per transistor dropping, giving Moore's Law a little more breathing room for the next couple of years.
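The arithmetic behind that is simple enough to sketch out: the cost of each working chip is just the wafer cost spread over the dies that survive. The figures below are invented purely for illustration, they're not from Intel's documents:

    # Cost per good die = wafer cost / (dies per wafer * yield).
    # All numbers here are made up for the sake of the example.
    def cost_per_good_die(wafer_cost, dies_per_wafer, yield_fraction):
        return wafer_cost / (dies_per_wafer * yield_fraction)

    # A hypothetical wafer at an immature yield versus a mature one:
    print(cost_per_good_die(5000, 400, 0.50))   # 25.0  -> $25 per working die
    print(cost_per_good_die(5000, 400, 0.85))   # ~14.7 -> under $15 per working die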

This 14nm process is what will power Intel's new Broadwell line of chips, the first of which is due out later this year. Migrating to the new manufacturing process hasn't been without its difficulties, which is why Intel is releasing only a subset of the Broadwell chips this year, with the rest to come in 2015. Until we get our hands on the actual chips there's no telling just how much of an improvement they'll be over their Haswell predecessors, but the die shrink alone should see some significant gains. With yields fast approaching those of the 22nm process they'll hopefully be quite reasonably priced too, for a new technology at least.

It just goes to show that Moore's Law is proving far more robust than anyone could have predicted. Exponential growth functions like that are notoriously unsustainable, however it seems that every time we come up against a wall that threatens to kill the law off, another innovative way to deal with it comes along. Intel has long been at the forefront of keeping Moore's Law alive and it seems like they'll continue to be its patron saint for a long time to come.


Intel’s Edison: The Kick in the Pants Wearable Computing Needs.

Rewind back a couple of years and the idea of wearable computing was something reserved for the realms of the ultra-geek and science fiction. Primarily this was a function of how much computing power and battery capacity we could stuff into a gadget anyone would be willing to wear; anything that could be deemed useful was far too bulky to be more than a concept. Today the idea is far more mainstream, with devices like Google Glass and innumerable smart watches flooding the market, but that seems to be as far as wearable technology goes for now. Should Intel have its way though, this could be set to change rapidly with the announcement of Intel Edison, an x86 processor that comes in a familiar (and very small) package.


It's an x86 processor the size of an SD card and included in that package is a 400MHz processor (for the sake of argument I'm assuming it's the same SoC that powers Intel's Galileo platform, just a 22nm version), WiFi and low power Bluetooth. It can run a standard version of Linux and, weirdly enough, even has its own little app store. Should it retain its Galileo roots it will also be Arduino compatible whilst gaining the capability to run the new Wolfram programming language. Needless to say it's a pretty powerful little package, and the standard form factor should make it easy to integrate into a lot of products.
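Since it runs a standard Linux the usual kernel interfaces should be fair game, meaning you can poke at hardware from pretty much any language. As a purely speculative example, something like this sysfs-based Python sketch ought to be enough to blink an LED; the GPIO number is entirely made up since the board's pin mapping hasn't been detailed yet:

    # Speculative sketch: toggling a GPIO through Linux's sysfs interface,
    # which a stock Linux build on Edison should expose. The pin number is
    # a placeholder, not the board's real mapping.
    import time

    PIN = "48"                       # hypothetical GPIO number
    BASE = "/sys/class/gpio"

    def write(path, value):
        with open(path, "w") as f:
            f.write(value)

    write(BASE + "/export", PIN)                      # expose the pin to userspace
    write(BASE + "/gpio" + PIN + "/direction", "out")

    for _ in range(10):                               # blink whatever is attached
        write(BASE + "/gpio" + PIN + "/value", "1")
        time.sleep(0.5)
        write(BASE + "/gpio" + PIN + "/value", "0")
        time.sleep(0.5)

    write(BASE + "/unexport", PIN)                    # clean up after ourselves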

By itself the Edison doesn't suddenly make all wearable computing ideas feasible, indeed the progress made in this sector in the last year is a testament to that; instead it's more of an evolutionary jump that should help kick-start the next generation of wearable devices. We've been able to go far with devices that have a tenth of the computing power of the Edison, so it will be interesting to see what kinds of applications the additional grunt makes possible. Intel clearly believes Edison will be the core of future wearable devices and has set up the Make It Wearable challenge, with over $1 million in prizes, to spur product designers on.

It will be interesting to see how the Edison stacks up against the current low power giant, ARM, as there's already a bevy of ARM devices available that would be comparable to the Edison. Indeed it seems Edison is meant to be a shot across ARM's bow, as it's one of the few devices that Intel will allow third parties to license, much in the same way ARM does today. There's no question that Intel has been losing out hard in this space, so marketing the Edison towards the wearable computing sector is likely a canny play to carve out a good chunk of that market before ARM cements themselves in it (like they did with smartphones).

One thing is for certain though: the amount of computing power available in such small packages is on the rise, enabling us to integrate technology into more and more places. These are the first tenuous steps towards creating an Internet of Things where seamless and unbounded communication is possible between almost any device. The results of Intel's Make It Wearable competition will be a good indication of where this market is heading and what we, the consumers, can expect to see in the coming years.


The Real Winner of the Console Wars: AMD.

In the general computing game you'd be forgiven for thinking there are two rivals locked in a genuine contest for dominance. There are certainly two major players, Intel and AMD, and whilst they're direct competitors there's no denying that Intel is the Goliath to AMD's David, trouncing them in almost every way possible. Of course if you're looking to build a budget PC you really can't go past AMD's processors, as they provide an incredible amount of value for the asking price, but there's no denying that Intel has been the reigning performance and market champion for the better part of a decade now. However the next generation of consoles has proved to be something of a coup for AMD, and it could be the beginning of a new era for the beleaguered chip company.

Both of the next generation consoles, the PlayStation 4 and Xbox One, utilize an almost identical AMD Jaguar chip under the hood. The reasons for choosing it seem to align with Sony's previous architectural idea for Cell (i.e. having lots of cores working in parallel rather than fewer working faster) and AMD is the king of cramming more cores into a single consumer chip. The reasons for going with AMD over Intel likely also stem from the fact that Intel isn't too crazy about doing custom hardware, and the requirements Sony and Microsoft had for their own versions of Jaguar simply couldn't be accommodated. Considering how big the console market is this would seem like something of a misstep by Intel, especially judging by the PlayStation 4's day one sales figures.

If you hadn't heard, the PlayStation 4 managed to move an incredible 1 million consoles on its first day of launch, and that was limited to the USA. The Nintendo Wii by comparison took about a week to move 400,000 consoles, and it even had a global launch window to beef up the sales. Whether the trend will continue now that the Xbox One was released yesterday is something we'll have to wait and see, but regardless, every one of those consoles being purchased contains an AMD CPU and AMD is walking away with a healthy chunk of change from each one.

To put it in perspective, out of every PlayStation 4 sale (and by extension every Xbox One as well) AMD takes away a healthy $100, which means that in that one day of sales AMD generated some $100 million for itself. For a company whose annual revenue is around the $1.5 billion mark this is a huge deal, and if the Xbox One launch is even half that AMD could have seen $150 million in the space of a week. If the previous console generations are anything to go by (roughly 160 million consoles between Sony and Microsoft) AMD is looking at a revenue stream of some $1.6 billion over the next 8 years, a 13% increase to their bottom line. Whilst it's still a far cry from the kind of revenue Intel sees on a monthly basis it's a huge win for AMD, and something they will hopefully be able to leverage in other markets.
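The napkin maths for the launch week is straightforward, taking the assumed $100 per console at face value:

    # Launch-week napkin maths, assuming AMD takes roughly $100 per console.
    amd_take_per_console = 100            # assumed figure, as per the estimates above
    ps4_day_one = 1_000_000               # day-one PS4 sales (USA only)

    ps4_revenue = ps4_day_one * amd_take_per_console
    xbone_revenue = ps4_revenue // 2      # the "even half that" scenario for the Xbox One

    print(ps4_revenue)                    # 100000000 -> ~$100 million
    print(ps4_revenue + xbone_revenue)    # 150000000 -> ~$150 million for the week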

Whilst I may have handed in my AMD fanboy badge after many deliriously happy years with my watercooled XP1800+ I still think they’re a brilliant chip company and their inclusion in both next generation consoles shows that the industry giants think the same way. The console market might not be as big as the consumer desktop space nor as lucrative as the high end server market but getting their chips onto both sides of the war is a major coup for them. Hopefully this will give AMD the push they need to start muscling in on Intel’s turf again as whilst I love their chips I love robust competition between giants a lot more.

 


So Long Itanium, You Will Not Be Missed.

I've worked with a lot of different hardware in my life, from the old days of tinkering with my Intel 80286 through to esoteric Linux systems running on DEC tin, until I, like everyone else in the industry, settled on x86-64 as the de facto standard. Among the various platforms I was happy to avoid (including such lovely things as Sun SPARC) was Intel's Itanium range, as its architecture was so foreign from anything else that whatever you were trying to do, outside of building software specifically for that platform, was doomed to failure. The only time I ever came close to seeing it deployed was on the whim of a purchasing manager who needed guaranteed 100% uptime, right up until they realised the size of the cheque they'd need to sign to get it.

If Intel's original dream was to be believed then this post would be coming to you care of their Itanium processors. You see, back when it was first developed everything was still stuck in the world of 32bit and the path forward wasn't looking particularly bright. Itanium was meant to be the answer to this; with Intel's brand name and global presence behind it we would hopefully see all applications migrate to the latest and greatest 64bit platform. However the complete lack of backwards compatibility with currently developed software meant adopting it was a troublesome exercise, and that was a death knell for any kind of consumer adoption. Seeing this, AMD swooped in with their x86-64 architecture, compatible with both 32bit and 64bit code, which proceeded to spread to all the places Itanium couldn't, eventually forcing Intel to adopt the standard in their consumer line of hardware.

Itanium refused to die, however, finding a home in the niche high end market thanks to its redundancy features and solid performance for optimized applications. The number of vendors supporting the platform dwindled from their already low numbers though, with it eventually falling to HP to be the only real supplier of Itanium hardware in the form of their NonStop server line. It wasn't a bad racket for them to keep up, considering the total Itanium market was something on the order of $4 billion a year and, with only 55,000 servers shipped per year, you can see how much of a premium they attract. Still, IT workers the world over have long wondered when Itanium would finally bite the dust and it seems that day is about to come.

HP has just announced that it will be transitioning its NonStop server range from Itanium to x86, effectively putting an end to the only sales channel Intel had left for the platform. What will replace it is still up in the air but it's safe to assume it will be another Intel chip, likely one from their Xeon line, which shares many of the features Itanium had without the incompatible architecture. Current Itanium hardware is likely to stick around for an almost indefinite amount of time due to the places it has managed to find itself in, much to the dismay of system administrators everywhere.

In terms of accomplishing its original vision Itanium was an unabashed failure, never finding the consumer adoption it so desired and never becoming the herald of the 64bit architecture. Commercially though it was something of a success, thanks to the features that made it attractive to the high end market, but even then it made up only a small fraction of total worldwide server sales, barely enough to keep it viable for anything but wholly custom solutions. The writing was on the wall when Microsoft said that Windows Server 2008 R2 would be the last version to support it and now, with HP bowing out, the death clock for Itanium has begun ticking in earnest, even if the final death knell won't come for the better part of a decade.

 


Intel Could Be Your Next Pay TV Provider.

One thing not many people knew was that I was pretty keen on the whole Google TV idea when it was announced 2 years ago. I think that was partly because it was a collaboration between several companies I admire (Sony, Logitech and, one I didn't know about at the time, Intel) and also because of what it promised to deliver to end users. I was a fairly staunch supporter of it, to the point where I remember arguing with my friends that consumers simply weren't ready for something like it, rather than it being a failed product. In all honesty I can't really support that position any more and the idea of Google TV seems to be dead in the water for the foreseeable future.


What I didn't know was that whilst Google, Sony and Logitech might have put the idea to one side, Intel has been working on their own product along similar lines, albeit from a different angle than you'd expect. Whilst I can't imagine they had invested that much in developing the hardware for the TVs (a quick Google search reveals they were Intel Atoms, something Intel had been developing for 2 years prior to Google TV's release) it appears they're still seeking some return on that initial investment. At the same time reports are coming in that Intel is dropping anywhere from $100 million to $1 billion on developing this new product, a serious amount of coin that industry analysts believe is an order of magnitude above anyone else playing around in this space currently.

The difference between this and other Internet set top boxes appears to be the content deals Intel is looking to strike with current cable TV providers. Anyone who's ever looked into getting any kind of pay TV package knows that whatever you sign up for you're going to get a whole bunch of channels you don't want bundled in alongside the ones you do, diluting the value you derive from the service significantly. Pay TV providers have long fought against the idea of letting people pick and choose (and indeed anyone who attempted to provide such a service didn't appear to last long, à la SelecTV in Australia) but with the success of on demand services like Netflix and Hulu it's quite possible they might be coming around to the idea and see Intel as the vector of choice.

The feature list that's been thrown around the press prior to an anticipated announcement at CES next week (which may or may not happen, depending on who you believe) does sound rather impressive, essentially giving you the on demand access everyone wants right alongside the traditional programming we've come to expect from pay TV services. The "Cloud DVR" idea, being able to replay/rewind/fast-forward shows without having to record them yourself, is evidence of this, and providing the traditional channels as well seems to be a clever ploy to get that content onto their network. Of course traditional programming is still required for things like sports and other live events, something which the on demand services have yet to fully incorporate into their offerings.

Whilst I'm not entirely enthused by the idea of yet another set top box (I'm already running low on HDMI ports as it is) the information I've been able to dig up on Intel's offering does sound pretty compelling. Of course many of the features aren't exactly new, you can do a lot of this now with the right piece of hardware and a pay TV subscription, but the ability to pick and choose channels would be, and a Hulu-esque interface for watching previous episodes would be something that interests me. If the price point is right, and it's available globally rather than just in the USA, I could see myself trying it out for the select few channels I'd like to see (along with their giant back catalogues, of course).

In any case it will be very interesting to see if Intel says anything about their upcoming offering next week: if they do we'll have information direct from the source, and if they don't we'll have a good indication of which analysts really are talking to people involved in the project.


The Windows 8/RT Distinction is Clear, Should You Not be an Idiot.

I've been using Windows 8 as my main system for the better part of 2 months now and, whilst I'll leave the in-depth impressions for the proper review, I have to say I'm pretty happy with it. Sure I wasn't particularly happy with the way things were laid out initially, but for the most part if you just blunder along like it's Windows 7 you're not going to struggle with it for very long. I might not use any of the Modern styled applications, they don't feel particularly well suited to the mouse/keyboard scenario if I'm honest, but everything else works as expected. Of course whilst Microsoft has already sold 40 million licenses of Windows 8 most people are focusing on Windows RT, care of the Surface tablet.

For the technically inclined the differences between the two are pretty stark and we've known for a long time that the Surface is essentially Microsoft's answer to the iPad. The lines are a little more blurry between Surface/RT and the full version of Windows 8, thanks to the Modern styled UI being shared between them, but the lack of a desktop makes it pretty clear where the delineation lies. It seems however that there's a feeling among some of the bigger media outlets that Windows 8 is suffering from an identity crisis of sorts, which has been perplexing me all morning:

What we’re seeing, I think, is Microsoft dancing around an uncomfortable reality: Windows RT just doesn’t have much to offer, so it’s hard to explain how it’s different from Windows 8 without making it look inferior.

The only distinct advantage for Windows RT is its support for “connected standby,” a power-saving mode that lets the device keep an eye on e-mail and other apps while it’s not in use. It’s a nice feature to have, but on its own it’s a tough sell compared to Windows 8′s wider software support. (UPDATE: As Eddie Yasi points out in the comments, the Atom-based chips that Windows 8 tablets are using, codenamed Clover Trail, support connected standby as well.)

The main thrust of the article, and another one it linked to, is that there's been no real information from Microsoft about the differences between the fully fledged version of Windows 8 and its RT cousin. I'll be fair to the article and not use anything past its publication date, but for anyone so inclined I wrote about the differences between the two platforms well over a year ago, and I was kind of late to the party even then. Indeed the vast majority of the tech press surrounding the Surface release understood these differences quite clearly, and it appears that both Time and The Verge were being wilfully ignorant simply to get a story.

Granted, The Verge has something of a point that the retail representatives didn't know the product, but then again why were you asking in-depth technical questions of a low wage retail worker? Most people looking for a Surface/iPad like device aren't going to want to know if their legacy applications will run on it because, to them, they're not the same thing. You could argue that a customer might have seen the Modern UI at home and then assumed the Surface was exactly the same, but I'd struggle to find someone who had installed Windows 8 this early in the piece and wasn't aware that the Surface was a completely different beast.

Indeed the quoted paragraphs above imply that Jared Newman (writer of the Time article) isn't aware that the RT framework, the thing that powers the Modern UI, is the glue that will join all of Microsoft's ecosystem together. Not only does it underpin Windows 8 but it's also the framework for Windows Phone 8 and (I'm speculating here, but the writing is on the wall) the upcoming Xbox. What Windows RT devices offer is the same experience you'll be able to get anywhere else in the Microsoft ecosystem, but on low power devices. Newman makes the point that they could very well have run them on Atom processors, however anyone who's actually used one can tell you that Atom's performance isn't up to scratch with the i3/5/7 line and is barely usable for desktop applications. They're comparable in the low power space, meaning they would have made a decent replacement for ARM, but considering that some 95% of the world's portable devices run on ARM it makes much more sense to go with the dominant platform rather than using something that's guaranteed to give a sub-par experience.

I don't like doing these kinds of take down posts, it usually feels like I'm shouting at a brick wall, but when there's a fundamental lack of understanding or wilful ignorance of the facts I feel compelled to say something. The Windows 8/RT distinctions are clear and, should you do even a small amount of research, the motives behind them are also obvious. Thankfully most of the tech press was immune to this (although TechCrunch got swept up in it as well, tsk tsk) so there's only a few bad apples that needed cleaning up.


Intel's Next Generation CPU To Be Non-Removable, Drawing Enthusiasts' Ire.

The ability to swap components around has been an expected feature for PC enthusiasts ever since I can remember. Indeed the use of integrated components was traditionally frowned upon as they were typically of lower quality and should they fail you were simply left without that functionality with no recourse but to buy a new motherboard. Over time however the quality of integrated components has increased significantly and many PC builders, myself included, now forego the cost of additional add-in cards in favour of their integrated brethren. There are still some notable exceptions to this rule however, like graphics cards for instance, and there were certain components that most of us never thought would end up as being an integrated component, like the CPU.

Turns out we could be dead wrong about that.

Now it's not like fully integrated computers are a new thing; in fact this blog post is coming to you via a PC that has essentially zero replaceable or upgradable parts, commonly referred to as a laptop. Apple has famously taken this level of integration to its logical extreme in order to create its relatively high powered line of laptops with slim form factors, and many other companies have since followed suit due to the success Apple's laptop line has had. Still, laptops were a relatively small market compared to the other big CPU consumers of the world (namely desktops and servers), both of which have resisted the integrated approach, mostly because it didn't provide the direct benefits it did for laptops. That may change if the rumours about Intel's next generation chip, Haswell, turn out to be true.

Reports are emerging that Haswell won't be available in a Land Grid Array (LGA) package and will only be sold in the Ball Grid Array (BGA) form factor. For the uninitiated, the main difference between the two is that the former is the current standard, which allows processors to be swapped on a whim, whilst BGA is the package used when an integrated circuit is to be permanently attached to its circuit board, the "ball grid" being the blobs of solder used to attach it. Not providing an LGA package essentially means the end of the user-replaceable CPU, something which has been a staple of the enthusiast PC community since its inception. It also means a big shake up for the OEM industry, who now have to make decisions about what kinds of motherboards they're going to make, as the current wide range of choice can't really be supported with the CPUs being integrated.

My initial reaction to this was one of confusion, as it would signify a really big change to how the PC business has been running for the past 3 decades. This isn't to say that change isn't welcome, indeed the integration of rudimentary components like the sound card and NIC were very much welcome additions (after their quality improved), however making the CPU integrated essentially puts the kibosh on the high level of configurability that we PC builders have enjoyed for such a long time. This might not sound like a big deal but for things like servers and fleet desktop PCs that configurability also means the components are interchangeable, making maintenance far easier and cheaper. Upgradeability is another reason, however I don't believe that's as big a factor as some make it out to be, especially with how often socket sizes have changed over the past 5 years or so.

What's got most enthusiasts worried about this move is the siloing of particular feature sets to certain CPU designations. To put it in perspective there are typically 3 product ranges for any CPU family: the budget range (typically lower power, less performance, but dirt cheap), the mid range (aimed at budget conscious enthusiasts and fleet units) and the high end performance tier (almost exclusively for enthusiasts and high performance computing). If these CPUs are tied to the motherboard it's highly likely that some feature sets will be reserved for certain ranges of CPUs. Since there are many applications where a low power PC can take advantage of high end features (like oodles of SATA ports, for instance) and vice versa, this is a valid concern and one I haven't been able to find any good answers to. There is the possibility of OEMs producing CPU daughter boards like the slotkets of old, however without an agreed-upon standard you'd be effectively locking yourself into that vendor, something which not everyone is comfortable doing.

Still, until I see more information it's hard for me to make up my mind on where I stand. There's a lot of potential for this to go very, very wrong, which could put Intel on the wrong side of a community that's been dedicated to it for the better part of 30 years. Enthusiasts are arguably in the minority however, and it's very possible that Intel is getting increasing numbers of orders that require BGA style chips, especially where their Atoms can't cut it. I'm not sure what they could do to win me over on this but I get the feeling that, just like the other integrated components I used to despise, there may come a time when I become indifferent to it and those zero insertion force sockets of old will be a distant memory, a relic of PC computing's past.

OCZ Vertex 3: Don’t Play With My Heart (Or The SSD Conundrum).

My main PC at home is starting to get a little long in the tooth, having been ordered back in the middle of 2008 and only receiving a graphics card and hard drive upgrade since then. Like all PCs I've had it's suffered a myriad of problems that I usually just put up with until I stumble across a workaround, but I think the vast majority of them can be traced to a faulty motherboard (it can't take more than 4GB of RAM or it won't POST) and a batch of faulty hard drives (which would randomly park their heads, causing the system to freeze). At the time I had the wonderful idea of buying the absolute latest so I could upgrade cheaply for the next few years but, thanks to the consolization of games, I found that wasn't really necessary.

To be honest it's not even really necessary now either, with all the latest games still running at full resolution and most at high settings to boot. I am starting to lag on the technology front however, with my graphics card not supporting DirectX 11 and everything but the RAM being 2 generations behind (yes, I have a Core 2 Duo). So I took it upon myself to spec out a rig built around the best performance available today rather than trying to focus on future compatibility. Luckily for me it looks like those two are coinciding.

Because, like any good geek, I love talking shop when it comes to building new PCs, here are the specs of the potential beast in the making:

  • Intel Core i7 2600K
  • Asrock P67 Motherboard
  • Corsair Vengeance 1600MHz DDR3 16GB
  • Radeon HD6950
  • 4 x 1TB Seagate HDD in RAID 10 (see the quick sketch after this list)
  • OCZ Vertex 3 120GB
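A quick aside on that RAID10 array, since the maths is worth spelling out: mirroring costs you half the raw capacity, while sequential reads can, in the best case, be striped across all four spindles. The per-drive throughput below is a ballpark figure I've assumed for 7200rpm drives, not something I've measured:

    # Rough RAID 10 maths for the 4 x 1TB array: capacity is halved by
    # mirroring, best-case sequential reads stripe across every drive.
    # The per-drive throughput is an assumed ballpark, not a benchmark.
    drives = 4
    capacity_tb = 1.0
    seq_read_mb_s = 150              # assumed per-drive sequential read

    usable_tb = drives * capacity_tb / 2
    best_case_read = drives * seq_read_mb_s

    print("Usable capacity: {:.0f}TB".format(usable_tb))          # 2TB
    print("Best-case reads: ~{}MB/s".format(best_case_read))      # ~600MB/s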

The first couple of choices for this rig were easy. Hands down the best performance out there is with the new Sandy Bridge i7 chips, with the 2600K being the top of the lot thanks to its unlocked multiplier and hyperthreading, which chips below the 2600 lack. The choice of graphics card was a little harder as, whilst the Radeon comes out leagues ahead on a price to performance ratio, the NVIDIA cards still hold a slight performance lead overall, though hardly enough to justify the price. Knowing that I wanted to take advantage of the new SATA 6Gbps range of drives that were coming out, my motherboard choice was almost made for me as the Asrock P67 seems to be one of the few that has more than 4 of those ports available (it has 6, in fact).

The choice of SSD however, whilst extremely easy at the time, became more complicated recently.

You see, back in the initial pre-production review round the OCZ Vertex 3 came out shooting, blasting away the competition in a seemingly unfair comparison to its predecessors. I was instantly sold, especially considering the price was looking quite reasonable, around the $300 mark for a 120GB drive. Sure I could opt for the bigger drive and dump my most frequently played games on it, but in reality a RAID10 array of SATA 6Gbps drives should be close enough without having to overspend on the SSD. As with any pre-production review I made sure to keep my ear to the ground just in case something changed once they started churning them out.

Of course, something did.

The first production review that grabbed my attention was from AnandTech, renowned for their deep understanding of SSDs and for producing honest, accurate reviews. The results for my drive size of choice, the 120GB, were decidedly mixed on a few levels, with it falling down in several places where the 240GB version didn't suffer any such problems. Another review confirmed the figures were in the right ballpark, although it unfortunately lacked a comparison to the 240GB version. The reason behind the performance discrepancy is simple: whilst the two capacities are functionally the same drive, they differ in the number of NAND chips used to build them. The 240GB version has double the number of the 120GB version, which allows for higher throughput and additionally grants the drive a larger scratch space that it can use to optimize its performance¹.
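A toy model makes the parallelism point clear: the controller can keep each NAND die busy at once, so throughput scales roughly with die count until the SATA interface becomes the ceiling. The per-die figure here is a round number I've picked for illustration, not anything from OCZ's spec sheets:

    # Toy model: drive throughput scales with the number of NAND dies the
    # controller can work in parallel, capped by the host interface.
    # The per-die throughput is an assumption, purely for illustration.
    def drive_throughput(dies, per_die_mb_s=35, interface_cap_mb_s=550):
        return min(dies * per_die_mb_s, interface_cap_mb_s)

    print(drive_throughput(8))    # 280 -> fewer dies leave the controller starved
    print(drive_throughput(16))   # 550 -> enough dies to saturate SATA 6Gbps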

So of course I started to rethink my position. The main reason for getting a real SSD over something like the PCIe bound RevoDrive was that I could use it down the line as a jumbo flash drive if I wanted to and I wouldn’t have to sacrifice one of my PCIe lanes to use it. The obvious competitor to the OCZ Vertex 3 would be something like the Intel 510 SSD but the reviews haven’t been very kind to this device, putting it barely in competition with previous generation devices.

After considering all my options I think I'll still end up going with the 120GB OCZ Vertex 3. Whilst it might not deliver class-leading performance in every category it does provide tremendous value when compared to a lot of other SSDs, and it will be in another league compared to my current spinning rust hard drive. Once I get around to putting this new rig together you can rest assured I'll put the whole thing through its paces, if at the very least to see how the OCZ Vertex 3 stacks up against the numbers that have already been presented.

¹Ever wondered why some SSDs come in odd sizes? They are in fact good old fashioned binary sizes (128GB and 256GB respectively), however the drive reserves a portion of that (8GB and 16GB) to use as scratch space for writing and optimizing data before committing it. Some drives also use it as a buffer for when flash cells become unwritable (flash cells don't usually die, you just can't write to them anymore) so that the drive's capacity doesn't degrade.
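Taking the footnote's figures at face value, the reserved fraction works out the same for both capacities:

    # Over-provisioning for the two capacities mentioned above: raw NAND
    # minus advertised capacity is held back as scratch/spare space.
    for raw_gb, advertised_gb in [(128, 120), (256, 240)]:
        reserved = raw_gb - advertised_gb
        print("{}GB raw -> {}GB usable, {}GB ({:.2f}%) reserved".format(
            raw_gb, advertised_gb, reserved, 100.0 * reserved / raw_gb))
    # 128GB raw -> 120GB usable, 8GB (6.25%) reserved
    # 256GB raw -> 240GB usable, 16GB (6.25%) reserved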