Posts Tagged ‘cpu’

Carbon Nanotubes Break Barriers for Moore’s Law.

In the last decade there’s been a move away from raw CPU speed as an indicator of performance. Back when single cores were the norm, clock speed was an easy way to judge which CPU would be faster in a general sense, however the switch to multiple cores threw this into question. Partly this comes down to architecture decisions and software’s ability to make use of multiple cores, but it also came hand in hand with stalling CPU clock speeds. This is mostly a limitation of current technology, as faster switching means more heat, something most processors simply can’t handle any more of. That could be set to change however, as research out of IBM’s Thomas J. Watson Research Center proposes a new way of constructing transistors that overcomes that limitation.
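To put some rough numbers on why clock speeds stalled, the textbook approximation for a CMOS chip’s dynamic power is P ≈ α·C·V²·f: heat scales linearly with frequency and with the square of the voltage, which usually has to rise to sustain higher clocks. The sketch below just plugs illustrative, made-up capacitance, voltage and activity figures into that formula; none of them describe a real processor.

```python
# Dynamic (switching) power of a CMOS chip: P ~ activity * C * V^2 * f.
# All figures below are illustrative assumptions, not real chip data.

def dynamic_power(switched_capacitance_f, voltage_v, frequency_hz, activity=0.2):
    """Approximate switching power in watts."""
    return activity * switched_capacitance_f * voltage_v ** 2 * frequency_hz

base   = dynamic_power(1e-7, 1.0, 3.0e9)   # a notional 3 GHz part at 1.0 V
faster = dynamic_power(1e-7, 1.1, 4.5e9)   # 50% higher clock plus a voltage bump

print(f"baseline: ~{base:.0f} W of switching power")
print(f"faster:   ~{faster:.0f} W ({faster / base:.1f}x the heat to dissipate)")
```

Even in this toy example a 50% clock bump nearly doubles the heat that has to be removed, which is exactly the wall current processors have been running into.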

Carbon Nanotube Transistors

Current day processors, whether they be the monsters powering servers or the small ones ticking away in your smartwatch, are all constructed through a process called photolithography. In this process a silicon wafer is covered in a photosensitive chemical and then exposed to light through a mask. This imprints the pattern onto the blank silicon substrate, creating all the circuitry of a CPU, and it’s what allows us to pack billions upon billions of transistors into a space little bigger than your thumbnail. However it has its limitations, related to things like the wavelength of light used (shorter wavelengths are needed for smaller features) and the purity of the substrate. IBM’s research takes a very different approach, instead using carbon nanotubes as the transistor material and creating features by aligning and placing them rather than etching them in.
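To get a feel for how the wavelength limits feature size, the standard Rayleigh criterion for photolithography gives the smallest printable feature as CD ≈ k1·λ/NA, a function of the light’s wavelength and the optics’ numerical aperture. The k1 factor and NA values below are typical textbook numbers, assumed for illustration rather than taken from any particular fab.

```python
# Rayleigh resolution criterion: minimum printable feature = k1 * wavelength / NA.
# k1 and NA values are typical textbook assumptions, not any specific process.

def min_feature_nm(wavelength_nm, numerical_aperture, k1=0.4):
    """Smallest printable feature (critical dimension) in nanometres."""
    return k1 * wavelength_nm / numerical_aperture

print(f"193nm immersion litho: ~{min_feature_nm(193, 1.35):.0f} nm features")
print(f"13.5nm EUV litho:      ~{min_feature_nm(13.5, 0.33):.0f} nm features")
```

Ever-smaller features keep demanding ever-shorter wavelengths, which is part of why an approach that sidesteps lithography entirely, like IBM’s, is so interesting.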

Essentially what IBM does is take a heap of carbon nanotubes, which in their native form are a large unordered mess, and then align them on top of a silicon wafer. When the nanotubes are placed correctly, like they are in the picture shown above, they form a transistor. Additionally the researchers have devised a method of attaching electrical connectors onto these newly formed transistors in such a way that their electrical resistance is independent of their width. What this means is that the connectors can be shrunk dramatically without their resistance, and thus their heat output, climbing in step, decoupling the traditional link between increased frequency and increased heat and potentially allowing for a boost in CPU frequency.

The main issue such technology faces is that it’s radically different from the way we manufacture CPUs today. There’s a lot of investment in current lithography-based fabs and this method likely can’t make use of that investment. So the challenge these researchers face is creating a scalable method with which they can produce chips based on this technology, hopefully in a way that can be adapted for use in current fabs. This is why you’re not likely to see processors based on this technology for some time, at least another 5 years according to the researchers.

What it does show though is that there is potential for Moore’s Law to continue for a long time into the future. It seems whenever we brush up against a fundamental limitation, one that has plagued us for decades, new research rears its head to show that it can be tackled. There’s every chance that carbon nanotubes won’t become the new transistor material of choice but insights like these are what will keep Moore’s Law trucking along.

Microsoft Rumoured to be Looking to Acquire AMD.

The last decade has not been kind to AMD. It used to be a company that was readily comparable to Intel in almost every way, having much the same infrastructure (including its own chip fabs) whilst producing directly competitive products. Today however they’re really only competitive in the low end space, surviving mostly on revenues from sales of both current generation games consoles. Now with their market cap hovering at the $1.5 billion mark, rumours are beginning to swirl about a potential takeover bid, something numerous companies could afford at such a cheap price. The latest rumours point towards Microsoft and, in my humble opinion, an acquisition by them would be a mixed bag for both parties involved.


The rumour surfaced from an article on Fudzilla citing “industry sources” on the matter, so there’s potential that this will amount to nothing more than just a rumour. Still, talk of an AMD acquisition by another company has been swirling for some time now, so the idea isn’t exactly new. Indeed AMD’s steadily declining stock price, one that has failed to recover ever since its peak shortly after it spun off Global Foundries, has made this a possibility for quite a while. A buyer hasn’t been forthcoming however, so let’s entertain the idea that Microsoft is interested and see where it leads us.

As Microsoft begins to expand further into the devices market there’s some potential in owning the chip design process. They’re already using an AMD chip for the current generation console and, with total control over the design process, there’s every chance they’d use one for a future device. There’s similar potential for the Surface, however AMD has never been the greatest player in the low power space, so there’d likely need to be some innovation on their part to make that happen. Additionally there’s no real solid offering from AMD in the mobile space, ruling out their use in the Lumia line of devices. On the chips alone I don’t think Microsoft would go for it, especially with the x86 licensing deal that the previous article I linked to mentions.

Always of interest to any party though will be AMD’s war chest of patents, some 10,000 of them. Whilst the revenue from said patents isn’t substantial (at least I can’t find any solid figures on it, which means it isn’t much) they always have value when the lawsuits start coming down. For a company that has billions sitting in reserve those patents might well be worth AMD’s market cap, even with a hefty premium on top of it. If that’s the only value an acquisition offers, however, I can’t imagine AMD sticking around as a company for long afterwards, unfortunately.

Of course neither company has commented on the rumour and, as of yet, there aren’t any other sources confirming it. Considering the rather murky value proposition that such an acquisition offers both companies I honestly have trouble believing it myself. Still, the idea of AMD getting taken over seems to come up more often than it used to, so I wouldn’t put it past them to court offers from anyone and everyone who will hear them. Suffice to say AMD has been in need of a saviour for some time now; it just might not end up being Microsoft.

The Ultrabook Upgrade Conundrum.

I’ve had my ASUS Zenbook UX32V for almost three years now and, if I’m quite honest, the fact that it’s managed to last this long has surprised me. Not so much from an “it’s still working” perspective, more that it still seems just as capable today as it did back then. Still, it has begun to show its age in some regards, like the small 28GB SSD (which for some reason doesn’t show up as a unified device) leaving too little space for any in-place upgrades. Plus I figured this far down the line there was bound to be something better, sleeker and, possibly, far cheaper, and so I began the search for my ultrabook’s replacement. The resulting search has shown that, whilst there are dozens of options available, compromise on one or more aspects is the name of the game.

Two Dell Alienware 13 Non-Touch notebook computers.

Essentially what I was looking for was a modern replacement for the UX32V which, in my mind, meant the following: small, light, discrete graphics and a moderately powerful CPU. Of course I’d be looking to improve on most other aspects as much as I could, such as a better screen, longer battery life (it’ll get at most a couple of hours when gaming now) and a larger SSD so I don’t run into the same issues that I have been. In general terms pretty much every ultrabook out there ticks most of those boxes, however once I start adding in certain must-have features things start to get a little sticky.

For starters a discrete graphics card isn’t exactly standard fare for an ultrabook, even though I figured that since they crammed a pretty powerful unit into the UX32V they’d likely be everywhere the next time I went to look. No, for most ultrabooks, which seem to be defined as slim and light laptops now, the graphics of choice is the integrated Intel chipset, one that isn’t particularly stellar for anything graphically intensive. Larger ultrabooks, especially those with very high res screens, tend to come with a lower end discrete card in them but, unfortunately, they also bring with them the added bulk of their size.

Indeed it seems anything that brings with it a modicum of power, whether it be from a discrete graphics chip or, say, a beefier processor, also comes with an increase in heft. After poking around for a while I found out that many of the smaller models came with a dual core chip, something which can leave them CPU bound for certain tasks. However adding in a quad core chip usually means the laptop swells in thickness to accommodate the additional heat output of the larger chip, typically pushing it out of ultrabook territory.

In the end the conclusion I’ve come to is that a sacrifice needs to be made so that I can get the majority of my requirements met. Out of all the ultrabooks I looked at the Alienware 13 (full disclosure: I work for Dell, their parent company) meets most of the specifications whilst unfortunately falling short on the CPU side and also being noticeably thicker than my current Zenbook. However those are two tradeoffs I’m more than willing to make given that it meets every other requirement I have and the reviews of it seem to be good. I haven’t taken the plunge yet, still wondering if there’s another option out there that I haven’t seen, but I’m quickly finding out that having all the choice in the world may mean you really have no choice at all.

The One Horse Race That is CPUs.

Roll back the clock a decade or so and the competition for what kind of processor ended up in your PC was at a fever pitch, with industry heavyweights Intel and AMD going blow for blow. The choice of CPU, at least for me and my enthusiast brethren, almost always came down to what was fastest, but the lines were often blurry enough that brand loyalty was worth more than a few FPS here or there. For the longest time I was an AMD fan, sticking stalwartly to their CPUs which provided me with the same amount of grunt as their Intel counterparts for a fraction of the cost. However over time the gap between what an AMD CPU could provide and what Intel offered grew too wide to ignore, and it’s only been getting wider since then.


The rift is seen in adoption rates across all products that make use of modern CPUs, with Intel dominating nearly any sector you find them in. When Intel first retook the crown all those years ago the reasons were clear, Intel just performed well enough to justify the cost, however as time went on it seemed like AMD was willing to let that gap continue to grow. Indeed if you look at them on a pure technology basis they’re stuck about 2 generations behind where Intel is today, with the vast majority of their products being produced on a 28nm process whilst Intel’s latest releases come out on 14nm. Whilst they pulled a major coup in winning over all three major consoles, that success hasn’t had much onflow to the rest of the business. Indeed since they’ll be producing the exact same chips for those consoles for the next 5+ years they can’t really do much with them anyway, and I doubt they’d invest in a new process unless Microsoft or Sony asked them nicely.

What this has translated into is a monopoly by default, one where Intel maintains its massive market share without having to worry about any upstarts rocking the boat. Thankfully the demands of the industry are pressure enough to keep them innovating at the rapid pace they set way back when AMD was still biting at their heels, but there’s a dangerously real chance they could end up doing the opposite. It’s a little unfair to put the burden on AMD to keep Intel honest, however it’s hard to think of another company with the pedigree and experience required to be serious competition to their platform.

The industry is looking towards ARM as the big competition for Intel’s x86 platform although, honestly, they’re really not in the same market. Sure, nearly every phone under the sun is now powered by some variant of the ARM architecture, however when it comes to consumer or enterprise compute you’d struggle to find anything that runs on it. There’s going to have to be an extremely compelling reason for everyone to want to transition to that platform and, as it stands right now, mobile and low power are the only places where it really fits. For ARM to really start eating Intel’s lunch it’d need to make some serious inroads into the desktop and server spaces, something which I don’t see happening for decades at least.

There is some light in the form of Kaveri, however its less than stellar performance when compared to Intel’s less tightly coupled solution does leave a lot to be desired. At a high level the architecture does feel like the future of all computing, excluding radical paradigm shifts like HP’s The Machine (which is still vaporware at this point), but until it equals the performance of discrete components it’s not going anywhere fast. I get the feeling that if AMD had kept up with Intel’s die shrinks Kaveri would be looking a lot more attractive than it currently is, but who knows what it might have cost them to get to that stage.

In any other industry you’d see this kind of situation as ripe for disruption, however the capital intensive nature of the business, plus an industry leader who isn’t resting on their laurels, means there are few who can hold a candle to Intel. The net positive out of all of this is that we as consumers aren’t suffering, however we’ve all seen what happens when a company remains at the top for far too long. Hopefully the numerous other sectors Intel is currently competing in will be enough to offset their monopolistic nature in the CPU market, but that doesn’t mean more competition in that space wouldn’t be welcome.

Intel Keeps Moore’s Law Alive With 14nm Fabrication.

The popular interpretation of Moore’s Law is that computing power, namely of the CPU, doubles every two years or so. This is then extended to pretty much all aspects of computing such as storage, network transfer speeds and so on. Whilst this interpretation has held up reasonably well in the 40+ years since the law was coined it’s not completely accurate, as Moore was actually referring to the number of components that could be integrated into a single package for a minimum cost. Thus the real driver behind Moore’s Law isn’t performance, per se, it’s the cost at which we can provide said integrated package. Staying on track with this law hasn’t been easy but innovations like Intel’s new 14nm process are what have kept us there.


CPUs are created through a process called photolithography, whereby a substrate, typically a silicon wafer, has the transistors etched onto it in a manner not unlike developing a photo. The defining characteristic of this process is the minimum size of a feature it can etch onto the wafer, usually expressed in nanometers. It was long thought that 22nm would be the limit for semiconductor manufacturing as the process was approaching the physical limitations of the substrates used. However Intel, and many other semiconductor manufacturers, have been developing processes that push past this and today Intel has released in-depth information regarding their new 14nm process.

The improvements are pretty much what you’d come to expect from a node transition of this nature. A reduction in node size typically means a CPU can be made with more transistors, perform better and use less power than a similar CPU built on a larger node. This is most certainly the case with Intel’s new 14nm fabrication process and, interestingly enough, they appear to be ahead of the curve so to speak, with the improvements in this process coming in slightly ahead of the trend. However the most important factor, at least with respect to Moore’s Law, is that they’ve managed to keep reducing the cost per transistor.
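As a rough illustration of what a full node shrink can buy in density terms, the naive assumption is that transistors per unit area scale with the square of the node name. Real density gains depend heavily on the actual cell libraries and design rules, so treat the numbers below purely as a sketch of the trend.

```python
# Naive density scaling: transistors per unit area ~ (old_node / new_node)^2.
# Ideal-case illustration only; real gains depend on cell libraries and design rules.

def density_gain(old_node_nm, new_node_nm):
    """Ideal increase in transistor density when shrinking between nodes."""
    return (old_node_nm / new_node_nm) ** 2

print(f"22nm -> 14nm: ~{density_gain(22, 14):.1f}x the transistors per mm^2")
print(f"28nm -> 14nm: ~{density_gain(28, 14):.1f}x")
```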

One of the biggest cost drivers for CPUs is what’s called the yield of the wafer. Each of these wafers costs a certain amount of money and, depending on how big and complex your CPU is, you can only fit a certain number of dies on there. However not all of those dies will turn out to be viable and the percentage of usable ones is what’s known as the wafer yield. Moving to a new node typically means that your yield takes a dive, which drives up the cost of each CPU significantly. The recently released documents from Intel reveal, however, that the yield of the 14nm process is rapidly approaching that of the 22nm process, which is considered to be Intel’s best yielding process to date. This, plus the increased transistor density that’s possible with the new manufacturing process, is what has led to the price per transistor dropping, giving Moore’s Law a little more breathing room for the next couple of years.
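To make the yield arithmetic concrete, here’s a minimal sketch of how the cost of a good die falls out of wafer cost, dies per wafer and yield. The wafer price, die counts and yield percentages are made-up round numbers for illustration, not Intel’s actual figures.

```python
# Wafer cost spread across only the dies that actually work.
# All dollar figures, die counts and yields below are illustrative assumptions.

def cost_per_good_die(wafer_cost, dies_per_wafer, yield_fraction):
    """Effective cost of each usable die in dollars."""
    return wafer_cost / (dies_per_wafer * yield_fraction)

mature_node = cost_per_good_die(5000, 500, 0.90)   # a mature, well-yielding process
early_14nm  = cost_per_good_die(5000, 600, 0.60)   # denser node, early yields
later_14nm  = cost_per_good_die(5000, 600, 0.85)   # same node once yields catch up

print(f"mature node:                ${mature_node:.2f} per good die")
print(f"new node, early yields:     ${early_14nm:.2f} per good die")
print(f"new node, caught-up yields: ${later_14nm:.2f} per good die")
```

The denser node only becomes the cheaper option once its yield climbs back towards that of its predecessor, which is exactly why the yield figures Intel is reporting matter so much for the cost-per-transistor story.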

This 14nm process is what will be powering Intel’s new Broadwell chips, the first of which are due out later this year. Migrating to the new manufacturing process hasn’t been without its difficulties, which is why Intel is releasing only a subset of the Broadwell line this year, with the rest to come in 2015. Until we get our hands on some of the actual chips there’s no telling just how much better these will be than their Haswell predecessors, but the die shrink alone should see some significant improvements. With yields fast approaching those of its predecessor they’ll hopefully be quite reasonably priced too, for a new technology at least.

It just goes to show that Moore’s law is proving to be far more robust than anyone could have predicted. Exponential growth functions like that are notoriously unsustainable however it seems every time we come up against another wall that threatens to kill the law off another innovative way to deal with it comes around. Intel has long been at the forefront of keeping Moore’s law alive and it seems like they’ll continue to be its patron saint for a long time to come.

AMD’s Kaveri Could Be This Generation’s x86-64.

The story of AMD’s rise to glory on the back of Intel’s failures is well known. Intel, filled with the hubris that can only come from maintaining a dominant market position for as long as they had, thought that the world could be brought into 64bit computing on the back of their brand new platform: Itanium. The cost of adopting this platform was high however, as it made no attempt to be backwards compatible, forcing you to revamp your entire software stack to take advantage of it (the benefits of which were highly questionable). AMD, seeing the writing on the wall, instead developed their x86-64 architecture which not only promised 64bit compatibility but even went as far as to outclass the then current generation of Intel processors in 32bit performance. It was then an uphill battle for Intel to play catch-up with AMD, but the past few years have seen Intel dominate AMD in almost every metric with the one exception of performance per dollar at the low end.

That could be set to change however with AMD announcing their new processors, dubbed Kaveri:

AMD Kaveri CPU-GPU Overview

On the surface Kaveri doesn’t seem too different from the regular processors you’ll see on the market today, sporting an on-die graphics card alongside the core compute units. As the above picture shows, however, the amount of on-die space dedicated to said GPU is far more than any other chip currently on the market, and the transistor count, a cool 2.1 billion, is a testament to this. After that it starts to look more and more like a traditional quad core CPU with an integrated graphics chip, something few would get excited about, but the real power of AMD’s new Kaveri chips comes from the architectural changes that underpin this insanely complex piece of silicon.

The integration of GPUs onto CPUs has been standard for some years now, with around 90% of chips shipping with an on-die graphics processor. For all intents and purposes the distinction between them and discrete units is their location within the computer, as they’re essentially identical at the functional level. There are some advantages gained from being so close to the CPU (usually to do with the latency that’s eliminated by not having to communicate over the PCIe bus) but they’re still typically inferior due to the limited amount of die space that can be dedicated to them. This was especially true of generations previous to the current one, which weren’t much better than the integrated graphics chips that shipped with many motherboards.

Kaveri, however, brings with it something that no other CPU has managed before: a unified memory architecture.

Under the hood of every computer is a whole cornucopia of different styles of memory, each with its own specific purpose. Traditionally the GPU and CPU each have their own discrete pools of memory, the CPU with its own RAM (which is typically what people are referring to when they say “memory”) and the GPU with similar. Integrated graphics would typically take advantage of the system RAM, reserving a section of it for its own use. In Kaveri the distinction between the CPU’s and GPU’s memory is gone, replaced by a unified view where either processing unit is able to access the other’s. This might not sound particularly impressive but it’s by far one of the biggest changes to come to computing in recent memory and AMD is undoubtedly the pioneer in this realm.

A GPU’s power comes from its ability to rapidly process highly parallelizable tasks, examples being things like rendering or number crunching. Traditionally however they’re constrained by how fast they can talk to the more general purpose CPU, which is responsible for giving them tasks and interpreting the results. Such activities usually involve costly copy operations that flow through slow interconnects in your PC, drastically reducing the effectiveness of a GPU’s power. Kaveri chips on the other hand suffer from no such limitation, allowing for seamless communication between the GPU and the CPU and enabling both to perform tasks and share results without the traditional overhead.
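Here’s a back-of-envelope sketch of the copy overhead in question: before a discrete GPU can touch a buffer it traditionally has to be pushed across the PCIe link (roughly 16 GB/s for a PCIe 3.0 x16 slot in theory), and the results copied back afterwards. The bandwidth and buffer sizes below are assumed round numbers, not measurements of Kaveri or any other part.

```python
# Time to shuttle a buffer across the CPU<->GPU link before any work happens.
# Bandwidth and buffer sizes are assumed round numbers, purely for illustration.

def copy_time_ms(buffer_bytes, link_bytes_per_s=16e9):
    """Milliseconds to move a buffer one way over a ~16 GB/s link."""
    return buffer_bytes / link_bytes_per_s * 1000

for size_mb in (64, 256, 1024):
    t = copy_time_ms(size_mb * 1024 ** 2)
    print(f"{size_mb:>5} MB buffer: ~{t:.1f} ms each way over the bus, "
          "versus no copy at all when CPU and GPU share one pool of memory")
```

For workloads that bounce data back and forth constantly those milliseconds add up quickly, which is exactly the overhead a unified memory architecture is designed to remove.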

The one caveat at this point, however, is that software needs to be explicitly coded to take advantage of this unified architecture. AMD is working extremely hard to get low level tools to support it, meaning programs should eventually be able to benefit without much hassle, however it does mean that the Kaveri hardware is arriving long before the software that can exploit it. It’s sounding a lot like an Itanium moment, for sure, but as long as AMD makes good on its promise of working with tools developers (whilst retaining the required backwards compatibility) this has the potential to be another coup for them.

If the results from the commercial units are anything to go by then Kaveri looks very promising. Sure it’s not a performance powerhouse but it certainly holds its own against the competition and I’m sure once the tools catch up you’ll start to see benchmarks demonstrating the power of a unified memory architecture. That may be a year or two out from now but rest assured this is likely the future for computing and every other chip manufacturer in the world will be rushing to replicate what AMD has created here.

 

The Real Winner of the Console Wars: AMD.

In the general computing game you’d be forgiven for thinking there are two rivals locked in a contest for dominance. Sure there are two major players, Intel and AMD, and whilst they are direct competitors there’s no denying that Intel is the Goliath to AMD’s David, trouncing them in almost every way possible. Of course if you’re looking to build a budget PC you really can’t go past AMD’s processors, as they provide an incredible amount of value for the asking price, but Intel has been the reigning performance and market champion for the better part of a decade now. However the next generation of consoles has proved to be something of a coup for AMD and it could be the beginning of a new era for the beleaguered chip company.

Both of the next generation consoles, the PlayStation 4 and XboxOne, utilize an almost identical AMD Jaguar chip under the hood. The reasons for choosing it seem to align with Sony’s previous architectural idea for Cell (i.e. having lots of cores working in parallel rather than fewer working faster) and AMD is the king of cramming more cores into a single consumer chip. The reasons for going with AMD over Intel, though, likely stem from the fact that Intel isn’t too crazy about doing custom hardware, and the requirements that Sony and Microsoft had for their own versions of Jaguar simply could not be accommodated. Considering how big the console market is this would seem like something of a misstep by Intel, especially judging by the PlayStation 4’s day one sales figures.

If you hadn’t heard, the PlayStation 4 managed to move an incredible 1 million consoles on its first day of launch, and that was limited to the USA. The Nintendo Wii by comparison took about a week to move 400,000 consoles and it even had a global launch window to beef up the sales. Whether the trend will continue, considering the XboxOne was only released yesterday, is something we’ll have to wait and see, but regardless every one of those consoles being purchased contains an AMD CPU and AMD is walking away with a healthy chunk of change from each one.

To put it in perspective, out of every PlayStation 4 sale (and by extension every XboxOne as well) AMD takes away a healthy $100, which means that in that one day of sales AMD generated some $100 million for itself. For a company whose annual revenue is around the $1.5 billion mark this is a huge deal and, if the XboxOne launch is even half that, AMD could have seen $150 million in the space of a week. If the previous console generations are anything to go by (roughly 160 million consoles between Sony and Microsoft) AMD is looking at a revenue stream of some $1.6 billion over the next 8 years, a 13% increase to their bottom line. Whilst it’s still a far cry from the kind of revenue Intel sees on a monthly basis it’s a huge win for AMD and something they will hopefully be able to use to leverage themselves into other markets.
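As a quick sanity check on those launch figures, here’s the arithmetic with the ~$100-per-console figure cited above treated as the working assumption.

```python
# AMD's cut from console launch sales, using the ~$100-per-console figure
# quoted in the post as an assumption rather than a confirmed number.

AMD_TAKE_PER_CONSOLE = 100  # USD, per the figure cited above

def amd_revenue(consoles_sold, take_per_console=AMD_TAKE_PER_CONSOLE):
    """AMD's share of a batch of console sales, in USD."""
    return consoles_sold * take_per_console

ps4_day_one = amd_revenue(1_000_000)   # PS4's reported day-one sales
xbone_half  = amd_revenue(500_000)     # an XboxOne launch at half that rate

print(f"PS4 day one:          ${ps4_day_one / 1e6:.0f} million to AMD")
print(f"Launch week combined: ${(ps4_day_one + xbone_half) / 1e6:.0f} million")
```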

Whilst I may have handed in my AMD fanboy badge after many deliriously happy years with my watercooled XP1800+ I still think they’re a brilliant chip company and their inclusion in both next generation consoles shows that the industry giants think the same way. The console market might not be as big as the consumer desktop space nor as lucrative as the high end server market but getting their chips onto both sides of the war is a major coup for them. Hopefully this will give AMD the push they need to start muscling in on Intel’s turf again as whilst I love their chips I love robust competition between giants a lot more.

 

All Your Consoles Are Belong To x86.

Ever since the first console was released they have been at arm’s length from the greater world of computing. Initially this was just a difference in inputs, as consoles were primarily games machines and thus did not require a fully fledged keyboard, but over time they grew into purpose built systems. This is something of a double edged sword: whilst a tightly controlled hardware platform allows developers to code against a fixed set of specifications, it also usually means that every platform is unique, with a learning curve for developers every time a new system comes out. Sony was particularly guilty of this as the PlayStation 2 and 3 were both notoriously difficult to code for; the latter especially, given its unique combination of linear coprocessors and giant non-linear unit.


There was no real indication that this trend was going to stop either, as all of the current generation of consoles use a non-standard variant of some comparably esoteric processor. Indeed the only console in recent memory to attempt to use a more standard processor, the original Xbox, was succeeded by a PowerPC driven Xbox360, which would make you think that the current industry standard of x86 processors just wasn’t suited to the console environment. Taking into account that the WiiU came out with a PowerPC CPU it seemed logical that the next generation would continue this trend, but there’s a sea change on the horizon.

Early last year rumours started circulating that the next generation PlayStation, codenamed Orbis, was going to be sporting an x86 based processor but that the next generation Xbox, Durango, was most likely going to continue with a PowerPC CPU. As it turns out this isn’t the case and Durango will in fact be sporting an x86 chip as well (well, if you want to be pedantic, x86-64, or x64). This means it’s highly likely that code built on the Windows platform will be portable to Durango, making the Xbox the launchpad for the final screen in Microsoft’s Three Screens idea. It also means that nearly all major gaming platforms will share the same underlying architecture, which should make cross platform releases far easier than they have been.

News just in also reveals the specifications of the PlayStation 4, confirming the x86 rumours. It brings with it something rather interesting too: AMD is looking to be the CPU/GPU manufacturer of choice for the next generation of consoles.

There’s no denying that AMD has had a rough couple of years, with their most recent quarter posting a net loss of $473 million. It’s not unique to them either, as Intel has been dealing with sliding revenue figures as the mobile sector heats up and demand for ARM based processors, which neither of the two big chip manufacturers provides, skyrockets. Indeed Intel has stated several times that they’re shifting their strategy to try and capture that sector of the market, with their most recent announcement being that they won’t be building motherboards any more. AMD seems to have lucked out in securing the CPU for the Orbis (and whilst I can’t find a definitive source it looks like their processor will be in Durango too) and the GPU for both of them, which will guarantee them a steady stream of income for quite a while to come. Whether or not this will be enough to reinvigorate the chip maker remains to be seen but there’s no denying that it’s a big win for them.

The end result, I believe, will be an extremely fast maturation of the development frameworks available for the next generation of consoles thanks to their x86 base. What this means is that we’re likely to see titles making the most of the hardware much sooner than we have on other platforms, thanks to the ubiquity of their underlying architecture. This will be both a blessing and a curse: whilst the first couple of years will see some really impressive titles, past that point there might not be a whole lot of room left for optimization. This is ignoring the GPU of course, where there always seem to be better ways of doing things, but it will be quickly outpaced by its newer brethren regardless. Combine this with the availability of the SteamBox and we could see PCs making a comeback as the gaming platform of choice once the consoles start showing their age.

 

Intel’s Next Generation CPU To Be Non-Removable, Drawing Enthusiast’s Ire.

The ability to swap components around has been an expected feature for PC enthusiasts for as long as I can remember. Indeed the use of integrated components was traditionally frowned upon, as they were typically of lower quality and, should they fail, you were simply left without that functionality with no recourse but to buy a new motherboard. Over time however the quality of integrated components has increased significantly and many PC builders, myself included, now forego the cost of additional add-in cards in favour of their integrated brethren. There are still some notable exceptions to this rule, like graphics cards for instance, and there were certain components that most of us never thought would end up being integrated, like the CPU.

Turns out we could be dead wrong about that.

Now it’s not like fully integrated computers are a new thing; in fact this blog post is coming to you via a PC that has essentially zero replaceable or upgradeable parts, commonly referred to as a laptop. Apple has famously taken this level of integration to its logical extreme in order to create its relatively high powered line of laptops with slim form factors, and many other companies have since followed suit due to the success Apple’s laptop line has had. Still, laptops are a relatively small market compared to the other big CPU consumers of the world (namely desktops and servers), both of which have resisted the integrated approach, mostly because it didn’t provide any direct benefits like it did for laptops. That may change if the rumours about Intel’s next generation chip, Haswell, turn out to be true.

Reports are emerging that Haswell won’t be available in a Land Grid Array (LGA) package and will only be sold in the Ball Grid Array (BGA) form factor. For the uninitiated, the main difference between the two is that the former is the current standard which allows processors to be replaced on a whim. BGA on the other hand is the package used when an integrated circuit is to be permanently attached to its circuit board, as the “ball grid” is in fact an array of solder balls used to attach it. Not providing an LGA package essentially means the end of any kind of user-replaceable CPU, something which has been a staple of the enthusiast PC community ever since its inception. It also means a big shake up of the OEM industry, which would now have to make decisions about what kinds of motherboards to produce, as the current wide range of choice can’t really be supported once the CPU is integrated.

My initial reaction to this was one of confusion, as this would signify a really big change away from how the PC business has been running for the past 3 decades. This isn’t to say that change isn’t welcome, indeed the integration of rudimentary components like the sound card and NIC was very much welcome (after their quality improved), however integrating the CPU essentially puts the kibosh on the high level of configurability that we PC builders have enjoyed for such a long time. This might not sound like a big deal but for things like servers and fleet desktop PCs that customizability also means that the components are interchangeable, making maintenance far easier and cheaper. Upgradeability is another reason, however I don’t believe that’s as big a factor as some would make it out to be, especially with how often socket sizes have changed over the past 5 years or so.

What’s got most enthusiasts worried about this move is the siloing of particular feature sets to certain CPU designations. To put it in perspective there are typically 3 product ranges for any CPU family: the budget range (typically lower power, less performance but dirt cheap), the mid range (aimed at budget conscious enthusiasts and fleet units) and the high end performance tier (almost exclusively for enthusiasts and high performance computing). If the CPUs are tied to the motherboard it’s highly likely that some feature sets will be reserved for certain ranges of CPUs. Since there are many applications where a low power PC can take advantage of high end features (like oodles of SATA ports for instance) and vice versa, this is a valid concern and one that I haven’t been able to find any good answers to. There is the possibility of OEMs producing CPU daughter boards like the slotkets of old, however without an agreed upon standard you’d be effectively locking yourself into that vendor, something which not everyone is comfortable doing.

Still, until I see more information it’s hard for me to make up my mind on where I stand on this. There’s a lot of potential for it to go very, very wrong, which could see Intel on the wrong side of a community that’s been dedicated to it for the better part of 30 years. They’re arguably in the minority however and it’s very possible that Intel is getting increasing numbers of orders that require BGA style chips, especially where their Atoms can’t cut it. I’m not sure what they could do in this regard to win me over but I get the feeling that, just like the other integrated components I used to despise, there may come a time when I become indifferent to it and those zero insertion force sockets of old will be a distant memory, a relic of PC computing’s past.

Let’s Get Moore’s Law Straight, Ok?

Anyone who’s had a passing interest in computers has likely run up against the notion of Moore’s Law, even if they don’t know the exact name for it. Moore’s Law is a simple idea: approximately every 2 years the amount of computing power that can be bought cheaply doubles. This often takes the more common form of “computer power doubles every 18 months” (thanks to Intel executive David House) or, for those uninitiated with the law, the observation that computers get obsoleted faster than any other product in the world. Since Gordon E. Moore first stated the idea back in 1965 it’s held up extremely well and for the most part we’ve beaten the predictions pretty handily.
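For reference, the popular form of the law is just an exponential: count(t) = count(0) · 2^(t/T), with T around two years (or 18 months in House’s version). The sketch below projects transistor counts forward from an assumed ~2.3 billion, a rough figure for a circa-2012 high end CPU rather than a precise count.

```python
# Moore's Law as an exponential: count doubles every `doubling_period_years`.
# The 2.3 billion starting point is an assumed round figure, not a precise count.

def projected_transistors(start_count, years, doubling_period_years=2.0):
    """Projected transistor count after a given number of years."""
    return start_count * 2 ** (years / doubling_period_years)

start = 2.3e9
for years in (2, 4, 10):
    projected = projected_transistors(start, years)
    print(f"in {years:>2} years: ~{projected / 1e9:.1f} billion transistors")
```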

Of course there’s been a lot of research into the upper limits of Moore’s Law as, with anything exponential, it seems impossible for it to continue for an extended period of time. Indeed current generation processors built on the standard 22nm lithography process were originally thought to be up against one such barrier, because the gate leakage at that scale was thought to be impossible to overcome. Of course new technologies enabled the process to be used and indeed we’ve still got another 2 generations of lithography ahead of us before current technology suggests another barrier.

More recently however researchers believe they’ve found the real upper limit after creating a transistor that consists of only a single atom:

Transistors — the basic building block of the complex electronic devices around you. Literally billions of them make up that Core i7 in your gaming rig and Moore’s law says that number will double every 18 months as they get smaller and smaller. Researchers at the University of New South Wales may have found the limit of this basic computational rule however, by creating the world’s first single atom transistor. A single phosphorus atom was placed into a silicon lattice and read with a pair of extremely tiny silicon leads that allowed them to observe both its transistor behavior and its quantum state. Presumably this spells the end of the road for Moore’s Law, as it would seem all but impossible to shrink transistors any farther. But, it could also point to a future featuring miniaturized solid-state quantum computers.

It’s true that this seems to suggest an upper limit to Moore’s Law; I mean if the transistors can’t get any smaller then how can the law be upheld? The answer is simple: the size of transistors isn’t actually the limitation in Moore’s Law, the cost of their production is.

You see most people are only familiar with the basic “computing power doubles every 18 months” version of Moore’s Law and many draw a link between that idea and the size of transistors. Indeed the size is definitely a factor, as shrinking it means we can squeeze more transistors into the same space, but what this overlooks is the fact that modern CPU dies haven’t really increased in size at all in the past decade. Additionally new techniques like 3D CPUs (currently all the transistors on a CPU are in a single plane) have the potential to dramatically grow the number of transistors without needing the die shrinks that we currently rely on.

So whilst the fundamental limit of how small a transistor can be might be a factor that affects Moore’s Law, it by no means determines the upper limit; the cost of adding in those extra transistors does. Indeed every time we believe we’ve discovered yet another limit, another technology gets developed or improved to the point where Moore’s Law becomes applicable again. This doesn’t negate work like that in the linked article above, as discovering potential limitations better equips us to deal with them. For the next decade or so I’m very confident that Moore’s Law will hold up, and I see no reason why it won’t continue on for decades afterward.