The never-ending quest to satisfy Moore’s Law means that we’re always looking for ways to make computers faster and cheaper. Primarily this focuses on the brain of the computer, the Central Processing Unit (CPU), which in most modern computers is now home to transistors numbering in the billions. All the other components haven’t been resting on their laurels either, as shown by the radical improvements in speed from things like Solid State Drives (SSDs), high-speed interconnects and graphics cards that are just as jam-packed with transistors as any CPU. One aspect that’s been relatively stagnant however has been RAM which, whilst increasing in speed and density, has only seen iterative improvements since the introduction of the first Double Data Rate (DDR) standard. Today Intel and Micron have announced 3D XPoint, a new memory technology that sits somewhere between DRAM and NAND in terms of speed.
Details on the underlying technology are a little scant at the moment, however what we do know is that instead of storing information by trapping electrons, as all current memory does, 3D XPoint (pronounced cross point) stores bits via a change in resistance of the memory material. If you’re like me you’d probably assume this was some kind of phase change memory, however Intel has stated that it’s not. What they have told us is that the technology uses a lattice structure which doesn’t require transistors to read and write cells, allowing them to dramatically increase the density, up to 128GB per die. This also has the benefit of being much faster than the NAND that powers current SSDs, although slightly slower than current DRAM, with the added advantage of being non-volatile.
Unlike most new memory technologies, which often purport to be replacements for one type of memory or another, Intel and Micron are positioning 3D XPoint as an addition to the current architecture. Essentially your computer has several types of memory, each used for a specific purpose. There’s memory directly on the CPU which is incredibly fast but very expensive, so there’s only a small amount of it. The second type is RAM, which is still fast but can be had in greater amounts. The last is your long term storage, either in the form of spinning rust hard drives or an SSD. 3D XPoint would sit between the last two, providing a kind of high speed cache that could hold often-used data that’s then persisted to disk. Funnily enough the idea isn’t that novel, things like the Xbox One use a similar caching architecture, so there’s every chance it might end up happening.
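To make that tiering idea concrete, here’s a toy sketch of the read path such a system might use. Everything here, the class, the dictionary tiers, the promotion logic, is my own illustrative assumption rather than anything Intel has described:

```python
# Toy model of a tiered read path: RAM -> persistent cache (3D XPoint-like) -> disk.
# All names and behaviours are illustrative assumptions, not real specifications.

class TieredStore:
    def __init__(self, disk):
        self.disk = disk          # slowest tier: authoritative copy of all data
        self.xpoint = {}          # middle tier: non-volatile, survives a "reboot"
        self.ram = {}             # fastest tier: volatile, cleared on reboot

    def read(self, key):
        if key in self.ram:       # fastest path: already in RAM
            return self.ram[key]
        if key in self.xpoint:    # warm path: promote from the persistent tier
            self.ram[key] = self.xpoint[key]
            return self.ram[key]
        value = self.disk[key]    # cold path: pull from disk, populate both tiers
        self.xpoint[key] = value
        self.ram[key] = value
        return value

    def reboot(self):
        self.ram.clear()          # RAM is volatile; the XPoint tier persists

store = TieredStore({"config": b"settings"})
store.read("config")   # cold read: hits disk, fills both cache tiers
store.reboot()
store.read("config")   # warm read: served from the persistent tier, no disk access
```

The point of the middle tier is the `reboot` case: after power loss the RAM cache is gone but the often-used data is still sitting in the non-volatile tier, so it never has to come off the slow disk again.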
The reason why this is exciting is because Intel and Micron are already going into production with these new chips, opening up the possibility of a commercial product hitting our shelves in the very near future. Whilst integrating it in the way they’ve described in the press release would take much longer, due to the change in architecture, there’s a lot of potential for a new breed of SSDs based on this technology. They might be an order of magnitude more expensive than current SSDs, however there are applications where you can’t have too much speed, and for those 3D XPoint could be a welcome addition to the storage stack.
Considering the numerous technological announcements we’ve seen from other large vendors that haven’t amounted to much, it’s refreshing to see something that could hit the market in short order. Whilst Intel and Micron are still keeping mum on the details I’m sure the next few months will see more information make its way to us, hopefully closely followed by demonstration products. I’m very interested to see what kind of tech is powering the underlying cells, as a non-phase-change, resistance-based memory would be truly novel and, once production reaches scale, could fuel another revolution akin to the one we saw with SSDs all those years ago. Needless to say I’m definitely excited to see where this is heading and I hope Intel and Micron keep us in the loop on new developments.
There are few computer interconnects that have been as pervasive as USB. Its limitations are numerous, however the ease with which it could be integrated into electronic devices ensured it became the de facto standard for nearly everything that needed to talk to a PC. Few other connectors have dared to battle it for the connectivity crown, FireWire being the only one that comes to mind, but the upstart Thunderbolt has the potential to usurp it. Right now it’s mostly reserved for the few who’ve splashed out on a new MacBook, but the amount of connectivity, bandwidth and versatility that Intel’s Thunderbolt 3 specification brings is, quite frankly, astounding.
Thunderbolt, in its current incarnation, uses its own proprietary connector. There’s nothing wrong with that per se, especially when you consider that a single Thunderbolt connection can break out into all manner of signals, however its size and shape don’t lend themselves well to portable or slimline devices. The latest revision of the Thunderbolt specification, announced recently by Intel at Computex in Taiwan, ditches the current connector in favour of the USB Type-C connector which, along with the space savings, brings other benefits like reversibility and, hopefully, much cheaper production costs. Of course the connector is really just one tiny aspect of what Thunderbolt 3 will bring.
The new Thunderbolt 3 interface doubles the available bandwidth from 20Gb/s to 40Gb/s, enough to drive two 4K displays at 60Hz off a single cable. To put that in perspective, the current standard for high resolution screen interconnects, DisplayPort, currently delivers only 17Gb/s, with the future 1.3 version slated to deliver 34Gb/s. On its own that might not be groundbreaking news for consumers, who really cares what the raw numbers are as long as the pictures get displayed, but combine it with the fact that Thunderbolt 3 can deliver 100W of power and suddenly things look a lot different. That means you could run your monitor off the one cable, even large monitors like my AOC G2460PGs, which only draw 65W under load.
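The two-displays claim is easy to sanity check with a back-of-the-envelope calculation. Assuming uncompressed 24-bit colour and ignoring blanking intervals and link encoding overhead (so real figures will be somewhat higher than this lower bound):

```python
# Back-of-the-envelope bandwidth for a 4K display at 60Hz, 24 bits per pixel.
# Ignores blanking intervals and link encoding overhead, so this is a lower bound.
width, height, refresh_hz, bits_per_pixel = 3840, 2160, 60, 24

gbps_per_display = width * height * refresh_hz * bits_per_pixel / 1e9
print(round(gbps_per_display, 1))      # ~11.9 Gb/s per display
print(round(2 * gbps_per_display, 1))  # ~23.9 Gb/s for two, inside the 40 Gb/s budget
```

Even with real-world overheads on top, two 4K@60Hz streams fit comfortably inside 40Gb/s with room left over for other traffic.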
Like its predecessors, Thunderbolt 3 will be able to carry all sorts of signals along its wires, including up to 4 lanes of PCIe. Whilst many seem to be getting excited about the possibility of external graphics cards, despite their obvious limitations, I’m more excited about the more general purpose things that can be done with external PCIe lanes. The solutions available for doing that right now aren’t great, but with 100W of power and 4 PCIe lanes over a single cable there’s potential for them to become a whole lot more palatable.
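For a sense of how much bandwidth 4 lanes buys you, here’s the arithmetic, assuming those lanes are PCIe 3.0 (8 GT/s per lane with 128b/130b encoding; which PCIe generation Thunderbolt 3 actually carries is my assumption here):

```python
# Usable bandwidth of a 4-lane PCIe 3.0 link: 8 GT/s per lane, 128b/130b encoding.
lanes = 4
raw_gtps_per_lane = 8.0            # giga-transfers per second, per lane
encoding_efficiency = 128 / 130    # 128b/130b line code: 2 overhead bits per 130

usable_gbps = lanes * raw_gtps_per_lane * encoding_efficiency
print(round(usable_gbps, 1))       # ~31.5 Gb/s, or just under 4 GB/s of payload
```

That’s a serious amount of general purpose I/O to be hanging off a single consumer cable, which is why external PCIe enclosures suddenly look a lot more interesting.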
Of course we’ll be waiting quite some time before Thunderbolt 3 becomes commonplace as manufacturers of both PCs and peripherals ramp up to support it. The adoption of a more common connector, along with the numerous benefits of the Thunderbolt interface, has the potential to accelerate this, however they still have a mountain to climb before they can knock USB down. Still, I’m excited by the possibilities, even if it will mean a new PC to support them.
Who am I kidding, I’ll take any excuse to get a new PC.
It seems the semiconductor industry can’t go a year without someone raising that tired old flag, the impending doom of Moore’s Law. Nearly every year there’s a group of people out to see it finally meet its end, although to what purpose I could not tell you. As any industry observer will tell you though, these predictions have, for the past 5 decades, proved to be incorrect, as any seemingly insurmountable barrier is usually overcome once the requisite billions are thrown at the problem. However we are coming to the point where the reigning champion behind Moore’s Law, the planar transistor built on silicon, is starting to reach the end of its life, and so we have been searching for its ultimate replacement. Whilst it seems inevitable that a new material will become the basis upon which we build our next computing empire, the question of how that material will be shaped is still unanswered, though there are rumblings of what may come.
For the vast majority of computing devices out there the transistors under the hood are created in a planar fashion, i.e. they essentially exist in a 2 dimensional space. In terms of manufacturing this has many advantages, and the advances we’ve made in planar technology over the years have seen us break through many barriers that threatened to stop Moore’s Law in its tracks. Adding in that additional dimension however is no trivial task and, whilst it’s not beyond our capability (indeed my computer is powered by a component that makes use of a 3D manufacturing process), applying it to something as complicated as a CPU requires an incredible amount of effort. The benefits of doing so are proving to be many though, and the transistor pictured above, called a Quantum Well Field Effect Transistor (QWFET), could be the battering ram with which we break through the next barrier to sustaining Moore’s Law.
The main driver of progress in the CPU market is making transistors ever smaller, which allows us to pack more of them into the same space whilst also giving us benefits like reduced power consumption. However as transistors shrink, issues that could previously be ignored, like the gate leakage that emerged back at the 45nm stage, start to become fundamental blockers to progress. Right now, as we approach sizes below 10nm, that same problem is starting to rear its head again and we need innovative solutions to tackle it. The QWFET is one such solution as it has the potential to eliminate the leakage problem whilst allowing us to continue our die-shrinking ways.
QWFETs are essentially an extension of Intel’s current FinFET technology. In current FinFETs electrons are bounded on 3 sides, which is what helped Intel make their current die shrink workable (although it has taken them much longer than expected to get the yields right). In QWFETs the electrons are bounded on an additional side, which forms a quantum well inside the transistor. This drastically reduces the leakage that would otherwise plague a transistor of sub-10nm size and, as a bonus, significantly reduces power draw as the static power usage drops considerably.
This does sound good in principle and would be easy to write off as hot air had Intel not been working on it since at least 2010. Some of their latest research points to these kinds of transistors being the way forward all the way down to 5nm, which would keep Moore’s Law trucking along for quite some time considering we’re just on the cusp of 14nm products hitting our shelves. Of course this is all speculative at this time, however there’s a lot of writing on the wall pointing to this as the way forward. If this turns out not to be the case then I’d be very interested to see what Intel has up their sleeve, as it’d have to be something even more revolutionary than this.
Either way it’ll be great for us supporters of Moore’s Law and, of course, users of computers in general.
For as long as we’ve been using semiconductors there’s been one material that’s held the crown: silicon. As one of the most abundant elements on Earth, its semiconductor properties made it perfectly suited to mass manufacture, and nearly all of the world’s electronics contain a silicon brain within them. Silicon isn’t the only material capable of performing this function, indeed there’s a whole smorgasbord of other semiconductors used for specific applications, however the amount of research poured into silicon means few of them are as mature as it is. With our manufacturing processes shrinking though, we’re fast approaching the limit of what silicon, in its current form, is capable of, and that may pave the way for a new contender to the semiconductor crown.
The road to the current 14nm manufacturing process has been a bumpy one, as the heavily delayed release of Intel’s Broadwell can attest. Mostly this was due to the low yields Intel was getting with the process, which is typical for die shrinks, however solving the issue proved more difficult than they had originally thought. This is likely due to the challenges Intel faced in making their FinFET technology work at the smaller scale, having only just introduced it in the previous 22nm generation of CPUs. This process will likely still work down at the 10nm level (as Samsung has just proven today), but beyond that there’s going to need to be a fundamental shift in order for the die shrinks to continue.
For this Intel has alluded to new materials which, keen observers have pointed out, won’t be silicon.
The most likely candidate to replace silicon is a material called Indium Gallium Arsenide (InGaAs). It has long been used in photodetectors and high frequency applications like microwave and millimeter wave electronics. Transistors made on this substrate are called High Electron Mobility Transistors which, in simpler terms, means they can be made smaller, switch faster and be packed more densely into a given area. Whilst the foundries might not yet be able to create these kinds of transistors at scale, the fact that they’ve been manufactured at some scale for decades makes InGaAs a more viable alternative than some of the other, more exotic materials.
There is potential for silicon to hang around for another die shrink or two if Extreme Ultraviolet (EUV) lithography takes off, however that method has been plagued with developmental issues for some time now. The change from UV lithography to EUV isn’t trivial, as EUV light has to be directed with mirrors rather than lenses, since most materials will simply absorb it. Couple that with the rather large difficulty of generating EUV light in the first place (the process is rather inefficient) and looking at new substrates becomes much more appealing. Still, if TSMC, Intel or Samsung can figure it out there’d be a bit more headroom for silicon, although maybe not enough to offset the investment cost.
Whatever direction the semiconductor industry takes one thing is very clear: they all have plans that extend far beyond the current short term, ensuring we can keep up the rapid pace of technological development we’ve enjoyed for the past half century. I can’t tell you how many times I’ve heard others scream that the next die shrink would be our last, only to see some incredibly innovative solutions come out soon after. The work on InGaAs and EUV shows that we’re prepared for at least the next decade, and I’m sure before we hit the limit of that tech we’ll be seeing the next novel innovation that will continue to power us forward.
Roll the clock back a decade or so and the competition for which processor ended up in your PC was at fever pitch, with industry heavyweights Intel and AMD going blow for blow. The choice of CPU, at least for me and my enthusiast brethren, almost always came down to what was fastest, but the lines were often blurry enough that brand loyalty was worth more than a few FPS here or there. For the longest time I was an AMD fan, sticking stalwartly to their CPUs, which provided me with the same amount of grunt as their Intel counterparts for a fraction of the cost. However over time the gap between what an AMD CPU could provide and what Intel offered became too wide to ignore, and it’s only been getting wider since.
The rift is visible in adoption rates across all products that make use of modern CPUs, with Intel dominating nearly every sector you find them in. When Intel first retook the crown all those years ago the reasons were clear, Intel simply performed well enough to justify the cost, however as time went on it seemed like AMD was willing to let that gap continue to grow. Indeed if you look at them on a pure technology basis they’re stuck about 2 generations behind where Intel is today, with the vast majority of their products produced on a 28nm process whilst Intel’s latest release is out on 14nm. Whilst they pulled off a major coup in winning all 3 of the major consoles, that success hasn’t had much flow-on to the rest of the business. Indeed, since they’ll be producing essentially the same chips for those consoles for the next 5+ years, they can’t really do much with them anyway, and I doubt they’d invest in a new foundry process unless Microsoft or Sony asked them nicely.
What this has translated into is a monopoly by default, one where Intel maintains its massive market share without having to worry about any upstarts rocking the boat. Thankfully the demands of the industry are pressure enough to keep them innovating at the rapid pace they set way back when AMD was still nipping at their heels, but there’s a dangerously real chance they could end up doing the opposite. It’s a little unfair to put the burden of keeping Intel honest on AMD, however it’s hard to think of another company with the pedigree and experience required to be the major competition to their platform.
The industry is looking towards ARM as the big competition for Intel’s x86 platform although, honestly, they’re really not in the same market. Sure, nearly every phone under the sun is now powered by some variant of the ARM architecture, however when it comes to consumer or enterprise compute you’d struggle to find anything that runs on it. There would have to be an extremely compelling reason for everyone to transition to that platform and, as it stands right now, mobile and low power are the only places where it really fits. For ARM to really start eating Intel’s lunch it’d need to make some serious inroads into the consumer and enterprise spaces, something I don’t see happening for decades at least.
There is some light in the form of Kaveri, however its less than stellar performance when compared to Intel’s less tightly coupled solution leaves a lot to be desired. At a high level the architecture does feel like the future of all computing, excluding radical paradigm shifts like HP’s The Machine (which is still vaporware at this point), but until it equals the performance of discrete components it’s not going anywhere fast. I get the feeling that if AMD had kept up with Intel’s die shrinks Kaveri would be looking a lot more attractive than it currently does, but who knows what it might have cost them to get to that stage.
In any other industry you’d see this kind of situation as one ripe for disruption, however the capital-intensive nature of the business, plus an industry leader who isn’t resting on their laurels, means there are few who can hold a candle to Intel. The net positive out of all of this is that we as consumers aren’t suffering, however we’ve all seen what happens when a company remains at the top for far too long. Hopefully the numerous different sectors Intel is currently competing in will be enough to offset their monopolistic nature in the CPU market, but that doesn’t mean more competition in that space isn’t welcome.
The popular interpretation of Moore’s Law is that computing power, namely that of the CPU, doubles every two years or so. This is then extended to pretty much all aspects of computing, such as storage, network transfer speeds and so on. Whilst this interpretation has held up reasonably well in the 40+ years since the law was coined, it’s not completely accurate: Moore was actually referring to the number of components that could be integrated into a single package for a minimum cost. Thus the real driver behind Moore’s Law isn’t performance per se, it’s the cost at which we can provide said integrated package. Keeping on track with this law hasn’t been easy, but innovations like Intel’s new 14nm process are what have kept us there.
CPUs are created through a process called photolithography, whereby a substrate, typically a silicon wafer, has transistors etched onto it through a process not unlike developing a photo. The defining characteristic of this process is the minimum feature size it can etch on the wafer, usually expressed in nanometers. It was long thought that 22nm would be the limit for semiconductor manufacturing as the process was approaching the physical limitations of the substrates used. However Intel, and many other semiconductor manufacturers, have been developing processes that push past this, and today Intel has released in-depth information on their new 14nm process.
The improvements in the process are pretty much what you’d expect from a node improvement of this nature. A reduction in node size typically means a CPU can be made with more transistors, perform better and use less power than a similar CPU built on a larger node. This is most certainly the case with Intel’s new 14nm fabrication process and, interestingly enough, they appear to be ahead of the curve, with the improvements in this process slightly ahead of the trend. However the most important factor, at least in respect to Moore’s Law, is that they’ve managed to keep reducing the cost per transistor.
One of the biggest cost drivers for CPUs is what’s called the yield of the wafer. Each wafer costs a certain amount of money and, depending on how big and complex your CPU is, you can only fit a certain number of them on there. Not all of those CPUs will turn out to be viable however, and the percentage of usable CPUs is what’s known as the wafer yield. Moving to a new node size typically means your yield takes a dive, which drives up the cost of each CPU significantly. The recently embargoed documents from Intel reveal, however, that the yield of the 14nm process is rapidly approaching that of the 22nm process, which is considered Intel’s best yielding process to date. This, plus the increased transistor density possible with the new manufacturing process, is what has led to the price per transistor dropping, giving Moore’s Law a little more breathing room for the next couple of years.
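To see why density and yield pull the cost per chip in opposite directions, a back-of-the-envelope model helps. All the figures below, wafer cost, die counts and yield percentages, are invented for illustration and are not Intel’s actual numbers:

```python
# Illustrative cost-per-good-die model. Wafer cost, die counts and yields
# are made-up example figures, not Intel's real numbers.
def cost_per_good_die(wafer_cost, dies_per_wafer, yield_fraction):
    good_dies = dies_per_wafer * yield_fraction
    return wafer_cost / good_dies

# Same wafer cost: the smaller node fits more dies but starts at a lower yield.
old_node = cost_per_good_die(wafer_cost=5000, dies_per_wafer=200, yield_fraction=0.85)
new_node = cost_per_good_die(wafer_cost=5000, dies_per_wafer=300, yield_fraction=0.60)

print(round(old_node, 2))  # ~29.41 per good die on the mature node
print(round(new_node, 2))  # ~27.78 per good die even at the lower early yield
```

Even at a noticeably worse early yield the extra dies per wafer can already make the smaller node cheaper per chip, and as the new node’s yield climbs towards parity the cost per transistor keeps falling, which is exactly the dynamic keeping Moore’s Law on track.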
This 14nm process is what will power Intel’s new Broadwell set of chips, the first of which is due out later this year. Migrating to the new manufacturing process hasn’t been without its difficulties, which is why Intel is releasing only a subset of the Broadwell chips this year, with the rest to come in 2015. Until we get our hands on the actual chips there’s no telling just how much of an improvement they’ll be over their Haswell predecessors, but the die shrink alone should bring significant gains. With the yields fast approaching those of its predecessor they’ll hopefully be quite reasonably priced too, for a new technology at least.
It just goes to show that Moore’s Law is proving to be far more robust than anyone could have predicted. Exponential growth functions like that are notoriously unsustainable, however it seems every time we come up against another wall that threatens to kill the law off, another innovative way to deal with it comes along. Intel has long been at the forefront of keeping Moore’s Law alive and it seems they’ll continue to be its patron saint for a long time to come.
Rewind back a couple of years and the idea of wearable computing was something reserved for the realms of the ultra-geek and science fiction. Primarily this was a function of how much computing power and battery capacity we could stuff into a gadget that anyone would be willing to wear, as anything that could be deemed useful was far too bulky to be anything but a concept. Today the idea is far more mainstream, with devices like Google Glass and innumerable smart watches flooding the market, but that seems to be as far as wearable technology currently goes. Should Intel have its way though this could be set to change rapidly with the announcement of Intel Edison, an x86 processor that comes in a familiar (and very small) package.
It’s an x86 processor the size of an SD card and included in that package is a 400MHz processor (for the sake of argument I’m assuming it’s the same SoC that powers Intel’s Galileo platform, just a 22nm version), WiFi and low power Bluetooth. It can run a standard version of Linux and, weirdly enough, even has its own little app store. Should it retain its Galileo roots it will also be Arduino compatible whilst gaining the capability to run the new Wolfram programming language. Needless to say it’s a pretty powerful little package and the standard form factor should make it easy to integrate into a lot of products.
By itself the Edison doesn’t suddenly make all wearable computing ideas feasible, indeed the progress made in this sector in the last year is testament to that; instead it’s more of an evolutionary jump that should help kick start the next generation of wearable devices. We’ve been able to go far with devices that have a tenth of the computing power of the Edison, so it will be interesting to see what kinds of applications are made possible by the additional grunt it provides. Indeed Intel believes strongly in the idea that Edison will be the core of future wearable devices and has set up the Make It Wearable challenge, with over $1 million in prizes, to spur product designers on.
It will be interesting to see how the Edison stacks up against the current low power giant, ARM, who already have a bevy of comparable devices available. Indeed it seems Edison is meant as a shot across ARM’s bow, as it’s one of the few designs Intel will allow third parties to license, much in the same way ARM does today. There’s no question Intel has been losing out hard in this space, so marketing the Edison towards the wearable computing sector is likely a canny play to carve out a good chunk of that market before ARM cements itself in it (like they did with smartphones).
One thing is for certain though, the amount of computing power available in such small packages is on the rise enabling us to integrate technology into more and more places. It’s the first tenuous steps towards creating an Internet of Things where seamless and unbounded communication is possible between almost any device. The results of Intel’s Make It Wearable competition will be a good indication of where this market is heading and what we, the consumers, can expect to see in the coming years.
In the general computing game you’d be forgiven for thinking there are 2 rivals locked in a contest for dominance. Sure there are 2 major players, Intel and AMD, and whilst they are direct competitors there’s no denying that Intel is the Goliath to AMD’s David, trouncing them in almost every way possible. Of course if you’re looking to build a budget PC you really can’t go past AMD’s processors, which provide an incredible amount of value for the asking price, but there’s no denying Intel has been the reigning performance and market champion for the better part of a decade now. However the next generation of consoles has proved to be something of a coup for AMD and it could be the beginning of a new era for the beleaguered chip company.
The next generation consoles, the PlayStation 4 and Xbox One, both utilize an almost identical AMD Jaguar chip under the hood. The reasons for choosing it seem to align with Sony’s previous architectural idea for Cell (i.e. having lots of cores working in parallel rather than fewer working faster) and AMD is the king of cramming more cores into a single consumer chip. The reasons for going with AMD over Intel likely also stem from the fact that Intel isn’t too keen on doing custom hardware, and the requirements Sony and Microsoft had for their own versions of Jaguar simply could not be accommodated. Considering how big the console market is this would seem like something of a misstep by Intel, especially judging by the PlayStation 4’s day one sales figures.
If you hadn’t heard, the PlayStation 4 managed to move an incredible 1 million consoles on its first day of launch, and that was limited to the USA. The Nintendo Wii by comparison took about a week to move 400,000 consoles, and it had a global launch window to beef up the sales. Whether the trend will continue now that the Xbox One was released yesterday is something we’ll have to wait and see, but regardless, every one of those consoles being purchased contains an AMD CPU, and AMD walks away with a healthy chunk of change from each one.
To put it in perspective, out of every PlayStation 4 sale (and by extension every Xbox One as well) AMD is taking a healthy $100, which means that in that one day of sales AMD generated some $100 million for itself. For a company whose annual revenue is around the $1.5 billion mark this is a huge deal, and if the Xbox One launch is even half that, AMD could have seen $150 million in the space of a week. If the previous console generations are anything to go by (roughly 160 million consoles between Sony and Microsoft) AMD is looking at a revenue stream of some $16 billion over the next 8 years, around $2 billion a year, which would more than double their current revenues. Whilst it’s still a far cry from the kinds of revenue Intel sees on a monthly basis, it’s a huge win for AMD and something they will hopefully be able to use to leverage themselves in other markets.
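The arithmetic is simple enough to lay out (the $100-per-console figure is the widely reported estimate of AMD’s take, and 160 million is the rough combined total of the previous Sony and Microsoft generation; both are assumptions rather than official numbers):

```python
# Back-of-the-envelope AMD console revenue, using the reported ~$100 per console.
# Both input figures are estimates, not official AMD numbers.
amd_take_per_console = 100

day_one_ps4_sales = 1_000_000
print(amd_take_per_console * day_one_ps4_sales)        # $100,000,000 on day one

lifetime_consoles = 160_000_000   # rough combined total of the previous generation
lifetime_revenue = amd_take_per_console * lifetime_consoles
print(lifetime_revenue / 1e9)     # ~$16 billion over the generation
print(lifetime_revenue / 8 / 1e9) # ~$2 billion per year across 8 years
```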
Whilst I may have handed in my AMD fanboy badge after many deliriously happy years with my watercooled XP1800+ I still think they’re a brilliant chip company and their inclusion in both next generation consoles shows that the industry giants think the same way. The console market might not be as big as the consumer desktop space nor as lucrative as the high end server market but getting their chips onto both sides of the war is a major coup for them. Hopefully this will give AMD the push they need to start muscling in on Intel’s turf again as whilst I love their chips I love robust competition between giants a lot more.
I’ve worked with a lot of different hardware in my life, from the old days of tinkering with my Intel 80286 through to esoteric Linux systems running on DEC tin, until I, like everyone else in the industry, settled on x86-64 as the de facto standard. Among the various platforms I was happy to avoid (including such lovely things as Sun SPARC) was Intel’s Itanium range, as its architecture was so foreign to anything else that whatever you were trying to do, outside of building software specifically for that platform, was doomed to failure. The only time I ever came close to seeing it deployed was on the whim of a purchasing manager who needed guaranteed 100% uptime, at least until they realised the size of the cheque they’d need to sign to get it.
If Intel’s original dream was to be believed then this post would be coming to you care of their Itanium processors. You see, back when it was first developed everything was still stuck in the world of 32bit and the path forward wasn’t looking particularly bright. Itanium was meant to be the answer: with Intel’s brand name and global presence behind it we would hopefully see all applications migrate to the latest and greatest 64bit platform. However the complete lack of backwards compatibility with currently developed software meant adopting it was a troublesome exercise, and that was a death knell for any kind of consumer adoption. Seeing this, AMD swooped in with their backwards compatible x86-64 architecture, which proceeded to spread to all the places Itanium couldn’t, forcing Intel to adopt the standard in their consumer line of hardware.
Itanium refused to die however, finding a home in the niche high end market thanks to its redundancy features and solid performance for optimized applications. The number of vendors supporting the platform dwindled from their already low numbers though, eventually leaving HP as the only real supplier of Itanium hardware in the form of their NonStop server line. It wasn’t a bad racket for them to keep up, considering the total Itanium market was something on the order of $4 billion a year and, with only around 55,000 servers shipped per year, you can see just how much of a premium they attract. Still, IT workers the world over have long wondered when Itanium would finally bite the dust, and it seems that day is about to come.
HP has just announced that it will be transitioning its NonStop server range from Itanium to x86, effectively putting an end to the only sales channel that Intel had left for the platform. What will replace it is still up in the air, but it’s safe to assume it will be another Intel chip, likely one from their Xeon line, which shares many of the features that Itanium had without the incompatible architecture. Current Itanium hardware is likely to stick around for an almost indefinite amount of time, however, due to the places it has managed to find itself in, much to the dismay of system administrators everywhere.
In terms of accomplishing its original vision Itanium was an unabashed failure, never finding the consumer adoption it so desired and never becoming the herald of 64-bit architecture. Commercially though it was something of a success, thanks to the features that made it attractive to the high end market, but even then it accounted for only a small fraction of total worldwide server sales, barely enough to make it a viable platform for anything but wholly custom solutions. The writing was on the wall when Microsoft said that Windows Server 2008 would be the last version to support it, and now with HP bowing out the death clock for Itanium has begun ticking in earnest, even if the final toll won’t come for the better part of a decade.
One thing not many people knew is that I was pretty keen on the whole Google TV idea when it was announced 2 years ago. I think that was partly because it was a collaboration between several companies I admire (Sony, Logitech and, one I didn’t know about at the time, Intel) and partly because of what it promised to deliver to end users. I was a fairly staunch supporter of it, to the point where I remember arguing with my friends that consumers simply weren’t ready for something like it, rather than it being a failed product. In all honesty I can’t really support that position any more, and the idea of Google TV seems to be dead in the water for the foreseeable future.
What I didn’t know was that whilst Google, Sony and Logitech might have put the idea to one side, Intel has been working on their own product along similar lines, albeit from a different angle than you’d expect. Whilst I can’t imagine they invested that much in developing the hardware for the TVs (a quick Google search reveals they were Intel Atoms, something Intel had been developing for 2 years prior to Google TV’s release), it appears they’re still seeking some return on that initial investment. At the same time, however, reports are coming in that Intel is dropping anywhere from $100 million to $1 billion on developing this new product, a serious amount of coin that industry analysts believe is an order of magnitude above anyone else currently playing in this space.
The difference between this and other Internet set top boxes appears to be the content deals that Intel is looking to strike with current cable TV providers. Anyone who’s ever looked into getting any kind of pay TV package knows that whatever you sign up for, you’re going to get a whole bunch of channels you don’t want bundled in alongside the ones you do, significantly diluting the value you derive from the service. Pay TV providers have long fought against the idea of letting people pick and choose (and indeed anyone who attempted to provide such a service didn’t appear to last long, à la SelecTV Australia), but with the success of on demand services like Netflix and Hulu it’s quite possible they’re coming around to the idea and see Intel as the vector of choice.
The feature list that’s been thrown around the press prior to an anticipated announcement at CES next week (which may or may not happen, according to who you believe) does sound rather impressive, essentially giving you the on demand access that everyone wants right alongside the traditional programming we’ve come to expect from pay TV services. The “Cloud DVR” idea, being able to replay/rewind/fast-forward shows without having to record them yourself, is evidence of this, and providing the traditional channels as well seems to be a clever ploy to get that content onto their network. Of course traditional programming is still required for certain things like sports and other live events, something the on demand services have yet to fully incorporate into their offerings.
Whilst I’m not entirely enthused by the idea of yet another set top box (I’m already running low on HDMI ports as it is), the information I’ve been able to dig up on Intel’s offering does sound pretty compelling. Many of the features aren’t exactly new, of course; you can do much of this now with the right piece of hardware and a pay TV subscription. But the ability to pick and choose channels would be new, and a Hulu-esque interface for watching previous episodes would certainly interest me. If the price point is right, and it’s available globally rather than just in the USA, I could see myself trying it out for the select few channels I’d like to see (along with their giant back catalogues, of course).
In any case it will be very interesting to see whether Intel says anything about their upcoming offering next week: if they do, we’ll have information direct from the source, and if they don’t, we’ll have a good indication of which analysts really are talking to people involved in the project.