Posts Tagged ‘computing’

Extreme Ultraviolet Lithography May Have a Chance to Shine After All.

It seems I can’t go a month without seeing at least one article decrying the end of Moore’s Law and another showing that it’s still on track. Ultimately this dichotomy comes from the fact that we’re on the bleeding edge of materials science, with new research being published all the time. At the same time, however, I’m always sceptical of those saying that Moore’s Law is coming to an end, as we’ve heard it several times before and, every single time, those limitations have been overcome. Indeed it seems that one technology even I had written off, Extreme Ultraviolet Lithography (EUV), may soon be viable.

[Image: ASML NXE-3100 EUV lithography system]

Our current process for creating computing chips relies on photolithography, essentially using light to project the transistor pattern onto the silicon. In order to create smaller and smaller transistors we’ve had to use increasingly shorter wavelengths of light. Right now we use deep ultraviolet light at the 193nm wavelength, which has been sufficient for etching features all the way down to the 10nm level. As I wrote last year, with current technology this is about the limit, as even workarounds like double-patterning only get us so far due to their expense. EUV, on the other hand, works with light at 13.5nm, allowing for much finer details to be etched, although there have been some significant drawbacks which have prevented its use in at-scale manufacturing.
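
To give a rough sense of why wavelength matters so much, the classic Rayleigh criterion ties the smallest printable feature to the wavelength and the numerical aperture of the optics. The sketch below is purely illustrative; the k1 and NA values are assumed ballpark figures, not vendor specifications.

```python
# Rough illustration of why shorter wavelengths allow smaller features.
# Rayleigh criterion: CD ~= k1 * wavelength / NA
# The k1 and NA values below are assumed ballpark figures, not vendor specs.

def critical_dimension(wavelength_nm: float, na: float, k1: float) -> float:
    """Approximate minimum printable feature size in nanometres."""
    return k1 * wavelength_nm / na

# Deep ultraviolet (193 nm) immersion lithography with an aggressive k1
print(critical_dimension(193, na=1.35, k1=0.28))   # ~40 nm

# Extreme ultraviolet (13.5 nm): smaller NA, but a far shorter wavelength
print(critical_dimension(13.5, na=0.33, k1=0.4))   # ~16 nm
```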

For starters, producing the required wattage of light at that wavelength is incredibly difficult. The power needed to etch features onto silicon with EUV is around 250W, a low figure to be sure; however, because nearly everything (including air) absorbs EUV, the power that must be generated at the source is far beyond that. Indeed, even in the most advanced machines only around 2% of the total power generated actually ends up on the chip. This is what has led ASML to develop the exotic machine you see above, in which both the silicon substrate and the EUV light source operate in a total vacuum. This setup is capable of delivering 200W, which is getting really close to the required threshold but still requires some additional engineering before it can be utilized for manufacturing.
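
To put that 2% figure in perspective, here’s a quick back-of-the-envelope sketch using the numbers quoted above; the helper function itself is purely illustrative.

```python
# Back-of-the-envelope sketch of the EUV power budget described above.
# If only ~2% of the generated light survives the trip to the wafer, every
# watt delivered needs roughly 50 watts produced at the source.
# Figures are the ones quoted in the paragraph; the helper is illustrative.

def source_power_needed(power_at_chip_w: float, transmission: float = 0.02) -> float:
    """Watts the source must generate to land the given power on the wafer."""
    return power_at_chip_w / transmission

print(source_power_needed(250))  # 12500.0 -> ~12.5 kW generated for 250 W delivered
```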

However, progress like this significantly changes the view many had of EUV and its potential for extending silicon’s life. Even last year, when I was doing my research into it, there weren’t many who were confident EUV would be able to deliver, given its limitations. With ASML projecting that they’ll be able to deliver manufacturing capability in 2018, though, it’s suddenly looking a lot more feasible. Of course this doesn’t negate the other pressing issues, like interconnect widths bumping up against physical limitations, but that’s not a problem specific to EUV.

The race is on to determine what the next generation of computing chips will look like and there are many viable contenders. In all honesty it surprised me to learn that EUV was becoming such a viable candidate as, given its numerous issues, I felt that no one would bother investing in the idea. It seems I was dead wrong as ASML has shown that it’s not only viable but could be used in anger in a very short time. The next few node steps are going to be very interesting as they’ll set the tempo for technological progress for decades to come.

Light-Based Memory Paves the Way for Optical Computing.

Computing as we know it today is all thanks to one plucky little component: the transistor. This simple piece of technology, which is essentially an on/off switch that can be electronically controlled, is what has enabled the computing revolution of the last half century. However it has many well known limitations, most of which stem from the fact that it’s an electrical device and is thus constrained by the speed, losses and heat that come with pushing electrical signals around, limitations that light largely sidesteps. So there’s been a lot of research into building a computer that uses light instead of electricity. One of the main challenges an optical computer has faced is storage, as light is a rather tricky thing to pin down and converting it to electricity (so it can be stored in traditional memory structures) would negate many of the benefits. This might be set to change, as researchers have developed a non-volatile storage platform based on phase-change materials.


The research comes out of the Karlsruhe Institute of Technology, with collaborations from the universities of Münster, Oxford, and Exeter. The memory cell they’ve developed can be written at speeds of up to 1GHz, impressive considering most current memory devices are limited to somewhere around a fifth of that. The cell itself is made of the phase-change material Ge2Sb2Te5, or GST for short, a material that can shift between crystalline and amorphous states. When this material is exposed to a high-intensity light beam its state shifts. That state can then be read later on using less intense light, allowing a data cell to be changed and erased.

One novel property the researchers have discovered is that their cell is capable of storing data in more than just a binary format. The switch between amorphous and crystalline states isn’t a clean on/off like it is with a transistor, which essentially means that a single optical cell could store more data than a single electrical cell. Of course, to use such cells with current binary architectures they would need a proper controller to do the translation, but that’s not exactly a new idea in computing. For a completely optical computer that might not be required, but such an idea is still a long way from seeing a real world implementation.
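
To make that translation idea a little more concrete, here’s a toy sketch of how a controller might map a multi-level cell back to plain bits. The four-level cell, the thresholds and the helper names are all invented for illustration; they aren’t taken from the paper.

```python
# Toy illustration of the "controller" idea above: a cell that can hold one
# of four distinguishable states stores two bits, and the controller's job is
# to translate between the analogue read-out and plain binary. The level
# count, thresholds and names are invented for illustration, not from the paper.

LEVELS = 4                        # 4 levels -> log2(4) = 2 bits per cell
THRESHOLDS = [0.25, 0.5, 0.75]    # boundaries on a normalised 0..1 read-out

def decode_cell(readout: float) -> int:
    """Map a normalised optical read-out to a 2-bit value (0-3)."""
    return sum(readout >= t for t in THRESHOLDS)

def cells_to_bits(readouts) -> str:
    """Translate a sequence of cell read-outs into a flat bit string."""
    return "".join(f"{decode_cell(r):02b}" for r in readouts)

print(cells_to_bits([0.1, 0.6, 0.9, 0.3]))  # "00101101": 8 bits from 4 cells
```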

The only thing that concerns me about this is the fact that it’s based on phase-change materials. There have been numerous devices based on them, most often in the realm of storage, which have purported to revolutionize the world of computing. To date, however, not one of them has managed to escape the lab, and the technology has always been a couple of years away. It’s not that they don’t work, they almost always do; it’s more that they either can’t scale or prove prohibitively expensive to produce at volume. This light cell faces the unique challenge that a computing platform built for it doesn’t exist yet, and I don’t think it can compete with traditional memory devices without one.

It is, however, a great step forward for the realm of light-based computing. With quantum computing likely decades or centuries away from becoming a reality and traditional computing facing more challenges than it ever has, we must begin investigating alternatives. Light-based computing is one of the most promising fields in my mind and it’s great to see progress when it’s been so hard to come by in the past.

HP’s “The Machine”: You’d Better Deliver on This, HP.

Whilst computing has evolved exponentially in terms of capabilities and raw performance, the underlying architecture that drives it has remained largely the same for the past 30 years. The vast majority of platforms are either x86 or some other CISC variant running on a silicon wafer that’s been lithographically etched with millions (and sometimes billions) of transistors. This is then connected to various other components and storage through a range of bus standards, most of which have changed dramatically in the face of new requirements. There’s nothing particularly wrong with this model; it’s served us well and has kept pace with Moore’s Law for quite some time. However there’s always the nagging question of whether there’s another way to do things, perhaps one that will be much better than anything we’ve done before.

According to HP, their new concept, The Machine, is the answer to that question.

[Image: HP’s The Machine, high-level architecture]

For those who haven’t yet read about it (or watched the introductory video on the technology), HP’s The Machine is set to be the next step in computing, taking the most recent advances in computer technology and using them to completely rethink what constitutes a computer. In short there are three main components that make it up, two of which are based on technology that has yet to see a commercial application. The first appears to be a Sony Cell-like approach to computing cores, essentially combining numerous smaller cores into one big computing pool which can then be activated at will, technology which currently powers their Moonshot range of servers. The second piece is optical interconnects, something which has long been discussed as the next stage in computing but which has yet to make inroads at the level HP is talking about. Finally there’s the idea of “universal memory”, essentially the memristor storage which HP Labs has been teasing for some time but has failed to bring to market as any kind of product.

As an idea The Machine is pretty incredible, taking best-of-breed technology for every subsystem of the traditional computer and putting it all together in the one place. HP is taking the right approach with it too, as whilst The Machine might share some common ancestry with regular computers (I’m sure the “special purpose cores” are likely to be x86), current operating systems make a whole bunch of assumptions that won’t be compatible with its architecture. Thankfully they’ll be open-sourcing Machine OS, which means it won’t be long before other vendors are able to support it. It would be all too easy for them to create another HP-UX, a great piece of software in its own right that no one wants to touch because it’s just too damn niche to bother with. That being said, the journey between this concept and reality is a long one, fraught with the very real possibility of it never happening.

You see, whilst all of the technologies that make up The Machine might be real in one sense or another, two of them have yet to see a commercial release. The memristor-based storage was “a couple of years away” after the original announcement by HP, yet here we are, some 6 years later, and not even a prototype device has managed to rear its head. Indeed HP said last year that we might see memristor drives in 2018 if we’re lucky, while the roadmap shown in the concept video has the first DIMMs appearing sometime in 2016. Similar things can be said for optical interconnects: whilst they’ve existed at the large scale for some time (fibre interconnects for storage are fairly common), they have yet to be built at the low-level, component-to-component scale that The Machine would require. HP’s roadmap for getting this technology to market is much less clear, something they will need to get right if they don’t want the whole concept to fall apart at the seams.

Honestly my scepticism comes from a history of being disappointed by concepts like this, many of which promised the world in terms of computing and almost always failed to deliver. Even some of the technology contained within The Machine has already managed to disappoint me, with memristor storage remaining vaporware despite numerous publications saying it was mere years away from commercial release. This is one of those times I’d love to be proven wrong, though, as nothing would make me happier than to see a true revolution in the way we do computing, one that would hopefully enable us to do so much more. Until I see real pieces of hardware from HP, however, I’ll remain sceptical, lest I get my feelings hurt once again.

The Post-PC Era, or More Accurately the Post-Desktop Era.

There’s no doubt that we’re at a crossroads when it comes to personal computing. For decades we have lived with the norm that computers conformed to a strict set of requirements, such as having a mouse, keyboard and monitor as their primary interface devices. The paradigm seemed unbreakable: whilst touchscreens and motion controllers were a reality for the longest time, they just failed to catch on, with the tried and true peripherals dominating our user experience. In this time, however, the amount of computing power we’ve been able to make mobile has changed the way many people do their computing, and speculation began to run wild about a future that had evolved past the personal computer.

Taking a step back for a second to look at the term “Post-PC era”, I tried to find where it originated. Many point to Steve Jobs as being the source of the term, but I’ve found people referencing it for well over a decade, long before Jobs started mentioning it in reference to the iPad and how it was changing the PC game. The definition of the term also seems somewhat loose, with some defining it as a future where each niche has its own device, whereas others see it as the outright abolition of desktop computers in favour of general purpose portable devices. The lack of a formal definition means that everyone has their own idea of what a Post-PC era will entail, but all of them seem to be missing the crux of the matter.

What actually constitutes a Personal Computer?

In the most general terms a PC is a general purpose computing device that’s usable by an end user. The term stems from a time when most computers were massive machines, well out of the reach of any individual (both practically and financially). Personal computers then were the first computing devices designed for mass consumption rather than scientific or business purposes. The term “Post PC era” then suggests that we’ve moved past the PC onto something else for our computing needs, meaning our current definition of PC is no longer suitable for the technology that we’re using.

However, whilst the Post PC era might be somewhat loosely defined, many envision a future where something like a tablet PC is the basis of everyone’s computing. For all intents and purposes that is a personal computer as it’s a general purpose computing device that’s designed for mass consumption by an end user. Post-PC era extremists might take the definition further and say that the Post PC era will see a multitude of devices with specific purposes in mind but I can’t imagine someone wanting to buy a new device for each of the applications they want to access. Indeed the trend is very much the opposite with smartphones becoming quite capable of outright replacing a PC for many people, especially if it’s something like the Motorola Atrix that’s specifically designed with that purpose in mind.

Realistically people are seeing the Post-PC era as a Post Desktop Computer Era.

Now this is a term I’m much more comfortable with as it more aptly explains the upcoming trends in personal computing. Many people are finding that tablet PCs do all the things that their desktop PCs do with the added benefit of being portable and easy to use. Of course there are some tasks that tablets and other Post PC era devices aren’t quite capable of doing and these use cases could be easily covered off with docking stations that provide additional functionality. These could even go as far as providing additional features like more processing power, additional storage and better input peripherals. Up until recently such improvements were in the realms of fantasy, but with interconnects like Thunderbolt it’s entirely possible to provide capabilities that used to be reserved for internal components like PCIe devices.

The world of personal computing is changing and we’ve undergone several paradigm shifts in the last couple of years that have altered the computing landscape dramatically. The notion that we’ll never touch a desktop again in the near future is an easy extrapolation to make (especially if you’re selling tablet computers) but it ignores current trends in favour of an idealized future. Rather, I feel we’ll be moving to a ubiquitous computing environment, one where our experience isn’t so dependent on the platform and where those platforms will be far more flexible than they currently are. Whether the Post-PC era vision or my ubiquitous computing idea comes to fruition remains to be seen, but I’d bet good money that we’re heading towards the latter rather than the former.

There’s No One Device To Change The World.

I consider myself pretty lucky to be living in a time when technical advancements are happening so rapidly that the world as we knew it 10 years ago seems so distant as to almost be a dream. Today I carry in my pocket as much computing power as what used to be held in high end desktops, and if I so desire I can tap into untold pools of resources from cloud based companies for a fraction of what the same ability would’ve cost me even a couple of years ago. With technology moving forward at such a feverish pace it’s not surprising that we manage to come up with an almost infinite number of ways in which to utilize it. Within this continuum of possibilities there are trends towards certain aspects which resonate with a need or want that certain audiences have, thereby driving demand for products centered around them. As such we’ve seen the development of many devices touted as the next revolution in technology, even as the future of technology itself.

Two such ideas spring to mind when I consider recent advances in computing technology and both of them, on the surface, appear to be at odds with each other.

The first is the netbook. I can remember clearly the day they first started making the rounds in the tech news circles I frequent, with community sentiment clearly divided over this new form of computing. In essence a netbook is a rethink of traditional computing ideals: the latest and greatest computer is no longer required to do the vast majority of tasks that users want done. It took me back to my years as a retail salesman, as I can remember even back then telling over 90% of my customers that any computer they bought from us would satisfy their needs, since all they were doing was web browsing, email and documents. The netbook then was the embodiment of the majority of users’ requirements, with the added benefit of being portable and, most importantly, cheap. The market exploded as the low barrier to entry brought portable computing to the masses who, before netbooks, never saw a use for a portable computer.

The second is tablets. These kinds of devices aren’t particularly new, although I’ll forgive you if your first ever experience with one was the iPad. I remember when I was starting out at university I looked into getting a tablet as an alternative to carrying around notepads everywhere and was unfortunately disappointed by the offerings. Back then the tablet idea was more of a laptop with a swivel touchscreen added to it. Couple that with the fact that, in order to keep costs down, they were woefully underpowered and you had devices that, whilst they had their niche, never saw widespread adoption. The introduction of a more appliance-focused device in the form of the iPad arguably got the other manufacturers developing devices for consumption rather than general computing. Now the tablet market has exploded with a flurry of competing devices, all looking to capture this next computing revolution.

Both of these types of devices have been touted as the future of computing at one point or another and both have been pushed as being in direct competition with each other. In fact the latest industry numbers and predictions would have you believe that the tablet market has caused a crash in netbook sales. The danger in drawing such conclusions is that you’re comparing what amounts to an emerging market to an established, maturing industry. Slowing growth might sound like a death knell for an industry, but it has more to do with the fact that, as a market matures, more people stop buying the devices because they already have one, i.e. the market is reaching saturation point. Additionally the percentages give the wrong idea since they ignore the market size: in 2010 alone there have already been 20 million netbooks sold, over 6 times that of the iPad and similar devices. Realistically these devices aren’t even in competition with each other.

So why did I choose the rather grandiose title for this post rather than say “Tablets vs Netbooks, Facts and Figures”? The answer, strangely enough, lies within spaghetti sauce:

(I wholeheartedly encourage you to watch that entire video, it’s quite fantastic)

The talk focuses on the work of Howard Moskowitz, who is famous for reinventing the canned spaghetti sauce industry. Companies approached him to find out what the perfect product would be for their target markets. After following traditional scientific methods he found that his data bore no correlation to the variables he had to play with, until he realised that there could be no perfect product; there had to be perfect products. The paradigm shift he brought on in the food industry can be seen in almost all products they produce today, with specific sets of offerings that cater to the various clumps of consumers who desire their products.

How the heck does this relate to tablets and netbooks? Simple: neither of these types of products is the perfect solution to end user computing, and neither were any of the products that came before them. Over time we’ve discovered trends that seem to work well in worldwide markets and we’ve latched onto those. Companies then attempt to find the perfect solution to their users’ needs by trying to aggregate all possible options. However no one product can satisfy everyone, and thus we have a diverse range of devices that fit our various needs. To borrow the three-sauces analogy, there are those who like their computing focused on consumption (tablets, MIDs, consoles), creation (desktops, laptops, netbooks) and integration (smartphones). These are of course wholly unresearched categories, but they seem to ring true from my anecdotal experience with friends and their varying approaches to computing.

So whilst we may have revolutions and paradigm shifts in the computing world, no one of them will end up being the perfect solution to all our needs. As time goes by we will notice the trends and clumps of users that share certain requirements and develop solutions for them, so the offerings from companies will become increasingly focused on these key areas. For the companies it means more work as they play catch-up with each of these revolutions; for us it means a greater computing experience than we’ve ever had before, and that’s something that never fails to excite me.