Posts Tagged ‘x86’


HP’s “The Machine”: You’d Better Deliver on This, HP.

Whilst computing has evolved exponentially in terms of capabilities and raw performance, the underlying architecture that drives it has remained largely the same for the past 30 years. The vast majority of platforms are either x86 or some other CISC variant running on a silicon wafer that has been lithographed to etch millions (and sometimes billions) of transistors into it. This is then connected to various other components and storage through a collection of bus definitions, many of which have changed dramatically in the face of new requirements. There’s nothing particularly wrong with this model; it has served us well and has kept pace with Moore’s Law for quite some time. However, there’s always the nagging question of whether there’s another way to do things, perhaps one that will be much better than anything we’ve done before.

According to HP, their new concept, The Machine, is the answer to that question.

HP The Machine High Level Architecture

For those who haven’t yet read about it (or watched the introductory video on the technology), HP’s The Machine is set to be the next step in computing, taking the most recent advances in computer technology and using them to completely rethink what constitutes a computer. In short there are 3 main components that make it up, 2 of which are based on technology that has yet to see a commercial application. The first appears to be a Sony Cell-like approach to computing cores, essentially combining numerous smaller cores into one big computing pool which can then be activated at will, the technology which currently powers their Moonshot range of servers. The second piece is optical interconnects, long discussed as the next stage in computing but yet to make inroads at the level HP is talking about. Finally there’s the idea of “universal memory”, which is essentially memristor storage, something HP Labs has been teasing for some time without bringing any product to light.
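To make the “universal memory” idea concrete: a single pool of byte-addressable, persistent storage would collapse the RAM/disk divide, letting programs treat durable data as ordinary memory rather than going through block I/O. The closest analogy on today’s hardware is a memory-mapped file, sketched below in Python (the file name and sizes here are purely illustrative, not anything HP has specified):

```python
import mmap
import os

PATH = "universal_memory.bin"  # hypothetical stand-in for a persistent memory region

# Create a small backing file; a real memristor DIMM would need no file at all,
# since the storage itself would sit directly in the address space.
with open(PATH, "wb") as f:
    f.write(b"\x00" * 4096)

with open(PATH, "r+b") as f:
    mem = mmap.mmap(f.fileno(), 4096)
    mem[0:5] = b"hello"          # a plain byte write, no write()/read() API in sight
    assert mem[0:5] == b"hello"  # reads come straight back from the mapping
    mem.close()

# The data survives the mapping being torn down, the way universal memory
# would survive a power cycle.
with open(PATH, "rb") as f:
    assert f.read(5) == b"hello"

os.remove(PATH)
```

The point of the analogy is that once storage is byte-addressable, the operating system’s whole caching and paging apparatus becomes largely redundant, which is why The Machine needs its own OS rather than a port of an existing one.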

As an idea The Machine is pretty incredible, taking the best-of-breed technology for every subsystem of the traditional computer and putting it all together in one place. HP is taking the right approach with it too, as whilst The Machine might share some common ancestry with regular computers (I’m sure the “special purpose cores” are likely to be x86), current operating systems make a whole bunch of assumptions that won’t be compatible with its architecture. Thankfully they’ll be open-sourcing Machine OS, which means it won’t be long before other vendors are able to support it. It would be all too easy for them to create another HP-UX: a great piece of software in its own right that no one wants to touch because it’s just too damn niche to bother with. That being said, the journey between this concept and reality is a long one, fraught with the very real possibility of it never happening.

You see, whilst all of the technologies that make up The Machine might be real in one sense or another, 2 of them have yet to see a commercial release. The memristor-based storage was “a couple of years away” after the original announcement by HP, yet here we are, some 6 years later, and not even a prototype device has managed to rear its head. Indeed HP said last year that we might see memristor drives in 2018 if we’re lucky, whilst the roadmap shown in the concept video has the first DIMMs appearing sometime in 2016. Similar things can be said for optical interconnects: whilst they’ve existed at the large scale for some time (fibre interconnects for storage are fairly common), they have yet to be created for the low level interconnects that The Machine would require. HP’s roadmap for getting this technology to market is much less clear, something HP will need to get right if they don’t want the whole concept to fall apart at the seams.

Honestly my scepticism comes from a history of being disappointed by concepts like this, with many of them promising the world in terms of computing and almost always failing to deliver. Even some of the technology contained within The Machine has already managed to disappoint me, with memristor storage remaining vaporware despite numerous publications saying it was mere years away from commercial release. This is one of those times I’d love to be proven wrong, though, as nothing would make me happier than to see a true revolution in the way we do computing, one that would hopefully enable us to do so much more. Until I see real pieces of hardware from HP, however, I’ll remain sceptical, lest I get my feelings hurt once again.

Dual Shock 4

Next Gen Consoles Will Peak Early, Peak Hard.

The current norms for games consoles are going to be flipped on their head when the next generation comes online. There are some changes we could argue are expected, like the lack of backwards compatibility, but the amount of change coming our way really doesn’t have any comparison in previous console generations. In nearly all respects I believe this is a good thing, as many of the past decisions seemed to be born of a mindset that worked 2 decades ago but was becoming rapidly outdated in today’s market. However one significant change could have a detrimental impact on consoles at large and could open up an opportunity for the PC (and by extension the SteamBox) to make a comeback.

The next generation of games consoles are shaping up to be some of the most developer-friendly platforms ever created. Not only are they x86 under the hood, allowing many frameworks developed for regular PC games to be ported across with relative ease, but many of their features are a direct response to requests from developers. This means developers will be able to make use of the full power of these consoles from much earlier on, and whilst this will make for some great launch titles that are leaps and bounds above their previous generation predecessors, it does mean the consoles will reach their peak early, and that might not be a good thing.

It was always expected that the best games of a console generation would come out towards the end of its lifecycle. This was due to games developers becoming far more familiar with the platform and the tools reaching a maturity level that made creating those games possible. The current generation, with its record-breaking longevity, is a great example of this, with demos of the same titles running on current and next gen hardware looking very comparable. With the next generation being so developer friendly, however, I can’t imagine it taking developers long to exploit the system to its fullest extent. Couple this with the next gen expected to have a similar lifespan to the current gen and you’ve got a recipe for console games being stagnant (from a technology point of view) for a very long time.

Granted there will always be improvements that can be made, and I’d still expect the best titles to come towards the end of the lifecycle. However the difference between first year and last year titles will be a lot smaller, and I doubt many end users will notice the difference. With the shared x86 base, however, there’s real potential for the PC versions of games to start outpacing their console counterparts much earlier on, as some of the optimizations will translate readily across, something which just wasn’t possible with previous platforms.

Indeed, due to the current gen’s limitations, we’ve already begun to see something of a resurgence in PC gaming. It’s likely that this could be dampened when the next gen of consoles gets released, however for the reasons I’ve outlined I’d expect the cycle to begin again not too long afterwards. I doubt this will see PCs return to the glory days of being the king of gaming, but there’s a definite opportunity for them to grab some significant market share, possibly enough to be elevated past their current also-ran status.

Of course this is wild speculation on my part, but I do believe that the next generation of consoles will peak much earlier in its lifecycle which, as history has shown us, will usher people back towards the PC as a platform. With the SteamBox readying itself for release around the same time there’s ample opportunity for current gen console customers to be swayed over to the PC platform, even if it has camouflaged itself as one of the enemy. In the end though the next gen consoles will still represent good value for money for several years to come, even if they’re quickly outpaced.



All Your Consoles Are Belong To x86.

Ever since the first console was released, consoles have been kept at arm’s length from the greater world of computing. Initially this was just a difference in inputs, as consoles were primarily games machines and thus did not require a fully fledged keyboard, but over time they grew into purpose built systems. This is something of a double edged sword: whilst a tightly controlled hardware platform allows developers to code against a fixed set of specifications, it also usually meant that every platform was unique, presenting developers with a learning curve every time a new system came out. Sony was particularly guilty of this, as the PlayStation 2 and 3 were both notoriously difficult to code for; the latter especially, given the Cell’s unique combination of specialised stream coprocessors and a single general purpose core.

Playstation 4 Xbox360 Orbis Durango

There was no real indication that this trend was going to stop either, as all of the current generation of consoles use some non-standard variant of a comparably esoteric processor. Indeed the only console in recent memory to attempt to use a more standard processor, the original Xbox, was succeeded by the PowerPC-driven Xbox360, which would make you think that the current industry standard of x86 processors just wasn’t suited to the console environment. Taking into account that the WiiU came out with a PowerPC CPU, it seemed logical that the next generation would continue this trend, but it seems there’s a sea change on the horizon.

Early last year rumours started circulating that the next generation PlayStation, codenamed Orbis, was going to be sporting an x86 based processor but that the next generation Xbox, Durango, was most likely going to continue with a PowerPC CPU. As it turns out this isn’t the case and Durango will in fact be sporting an x86 CPU (well, if you want to be pedantic it’s x86-64, or x64). This means it’s highly likely that code built on the Windows platform will be portable to Durango, making the Xbox the launchpad for the final screen in Microsoft’s Three Screens idea. It also means that nearly all major gaming platforms will share the same coding base, which should make cross platform releases far easier than they have been.
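To illustrate one concrete porting hazard that a shared x86 base removes, consider byte order: the PowerPC chips in the Xbox360 and PlayStation 3 are big-endian whilst x86/x64 is little-endian, so binary assets, save files and network messages lay out multi-byte values differently between the two. A minimal Python sketch (the value is chosen purely for illustration):

```python
import struct

value = 0x12345678

big = struct.pack(">I", value)     # big-endian layout, PowerPC-style
little = struct.pack("<I", value)  # little-endian layout, x86/x64-style

# The same 32-bit integer produces two different byte sequences.
assert big == b"\x12\x34\x56\x78"
assert little == b"\x78\x56\x34\x12"

# Reading big-endian data on a little-endian machine without converting
# yields a scrambled value -- a classic cross-platform porting bug that
# simply vanishes once every target shares the same architecture.
misread = struct.unpack("<I", big)[0]
assert misread == 0x78563412
```

Endianness is only one of many such differences (SIMD intrinsics and memory models are others), but it gives a flavour of why a common x86 base makes cross platform releases so much cheaper.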

News just in also reveals the specifications of the PlayStation 4 confirming the x86 rumours. It also brings with it some rather interesting news: AMD is looking to be the CPU/GPU manufacturer of choice for the next generation of consoles.

There’s no denying that AMD has had a rough couple of years, with their most recent quarter posting a net loss of $473 million. It’s not unique to them either, as Intel has been dealing with sliding revenue figures as the mobile sector heats up and demand for ARM based processors, which neither of the 2 big chip manufacturers provide, skyrockets. Indeed Intel has stated several times that they’re shifting their strategy to try and capture that sector of the market, with their most recent announcement being that they won’t be building motherboards any more. AMD seems to have lucked out in securing the CPU for the Orbis (and whilst I can’t find a definitive source it looks like their processor will be in Durango too) and the GPU for both of them, which will guarantee them a steady stream of income for quite a while to come. Whether or not this will be enough to reinvigorate the chip giant remains to be seen, but there’s no denying that it’s a big win for them.

The end result, I believe, will be an extremely fast maturation of the development frameworks available for the next generation of consoles thanks to their x86 base. What this means is that we’re likely to see titles making the most of the hardware much sooner than we have on other platforms, thanks to the ubiquity of their underlying architecture. This will be both a blessing and a curse, as whilst the first couple of years will see some really impressive titles, past that point there might not be a whole lot of room for optimizations. This ignores the GPU of course, where there always seems to be a better way of doing things, but even it will be quickly outpaced by its newer brethren. Combine this with the availability of the SteamBox and we could see PCs making a comeback as the gaming platform of choice once the consoles start showing their age.

Apple A6X chip

Apple Eyeing ARM For Their Desktop Line.

The name of the game for all large technology companies is platform unification and domination, with each of them vying to become the platform that consumers choose. Microsoft has been on a long and winding road to accomplishing this since they first talked about it 3 years ago, and Apple has been flirting with the idea ever since it started developing its iOS line of products, with features like the App Store making their way back into OSX. Neither of them is really there yet: Windows 8/WinRT are still nascent, requiring a lot more application development before the platform can be considered unified, and there is still a wide schism between iOS and OSX that Apple hasn’t really tried to bridge.

Predominantly that’s because Apple understands that they’re two different markets and their current product strategy doesn’t really support bridging them. The iOS space is pretty much a consumer playground, as whilst you can do some of the things that Apple’s previous OS was known for on there, it’s far from being the creative platform that OSX was (and still is, to some extent). Indeed attempts to bridge their previous products with more consumer orientated versions have been met with heavy criticism from their long time fans, and their failure to provide meaningful product updates to their creative powerhouse, the Mac Pro, has also drawn the ire of many creative professionals.

If I’m honest I didn’t really think that Apple would turn their backs on the creative niche that is arguably responsible for making them what they are today. It’s understandable from the company’s point of view to focus your attention on the most profitable sectors, much like games developers do with the whole console priority thing, but it feels much like the time when Apple still considered itself a player in the enterprise space, only to quietly withdraw from it over the course of a couple of years. Whilst there isn’t much evidence to support this idea, the latest rumours circulating that they may be considering a switch to ARM for their desktop line don’t help to dispel it.

ARM, for the uninitiated, is a processor design company based out of Cambridge whose designs are behind approximately 95% of the processors that power today’s smartphones. They are unquestionably the kings of the low power space, with many of their designs able to achieve incredible efficiencies, which is what enables your phone to run for hours instead of minutes. Whilst they may no longer design the cores in the chips that power Apple’s current line of iOS products, their technology is still the basis for them. Suffice to say, if you’ve got any piece of mobile technology it’s likely that there’s some kind of ARM processor in there, and it’s the reason why Microsoft chose ARM as their second platform for the WinRT framework.

Apple switching platforms is nothing new, as they made the switch to x86/Intel back in 2006. The reason back then was that PowerPC, made by IBM, was not able to keep pace with the rapid improvements in performance that Intel was making, but it was also about the performance-per-watt of Intel’s processors, which is arguably why AMD wasn’t considered. Apple’s direction has changed considerably since then and their focus is now much more squarely aimed at portable experiences, which are far better served by the low power processors that ARM can deliver. For things like the MacBook and the Air, lower power means longer battery life, probably the key metric by which these portable computers are judged.

There’s no doubt that Apple will be able to make the transition, however I’m not sure that the cost to them, both in real and intangible terms, would be worth it. Forgetting all the technical challenges in getting all your third parties to re-architect their applications, the unfortunate fact is that ARM doesn’t have a processor capable of performing at the same level as Intel’s current line. This means that for creative applications like photo/video editing, graphic design and the like, the current software suites will simply not be viable on the ARM platform. Since the transition is a ways off it’s possible that ARM might be able to design some kind of high power variant to satisfy this part of the market, but traditionally that’s not their focus, and since the desktop sector is one of Apple’s smallest revenue generators I can’t see them wanting to bother.

This is not to say that this would be a bad move for Apple at large, however. Having a consistent architecture across their entire line of products is something no other company would be able to achieve, and it would be an absolute boon to those seeking a ubiquitous experience across all their devices. It would also be a developer’s wet dream, as you could make cross-platform applications far more easily than you can on other platforms. Considering that Apple makes the majority of its money from ARM based platforms it doesn’t come as much surprise that they might be considering a move to it, even if that comes at the cost of the creative sector that brought them back from the graveyard all those years ago.

I don’t usually comment on Apple rumours, mostly because they’re usually just a repeat of the same thing over and over again, but this one caught my attention because, if it turns out to be true, it will mark Apple’s final step away from its roots. Whilst creative professionals may lament the disappearance of a platform they’ve been using for over 2 decades, the saving grace is that the Windows equivalents of all their programs have reached feature parity. Apple will then continue down the consumer electronics road it has been on for the past 10+ years, and where it goes from there is anyone’s guess.

Durango, Orbis and What’s Influencing the Next Generation of Consoles.

The current generation of consoles is the longest lived of any generation of the past 2 decades. There are many reasons for this, but primarily it came from the fact that the consoles of this generation, bar the Nintendo Wii, were light years ahead of their time at release. In a theoretical sense both the Xbox360 and the PlayStation 3 had 10 times the computing power of their PC contemporaries at release, and PCs took several years to catch up. Of course the amount of computing power now available, especially in graphics cards, far surpasses what is available in console form, and the gaming community is starting to look towards the next generation of consoles.

The last couple of weeks have seen quite a lot of rumour and speculation going around as to what the next generation of consoles might bring us. Just last week some very detailed specifications on the PlayStation 4, codenamed Orbis, were made public, and the month before revealed that the new Xbox is codenamed Durango. As far as solid information goes, however, there’s been little to come by, and neither Sony nor Microsoft have been keen to comment on any of the speculation. Humour me then as I dive into some of the rumours and try to make sense of everything that’s flying around.

I’ll focus on Durango for the moment as I believe it will play a critical part in Microsoft’s current platform unification crusade. Long time readers will know how much I’ve harped on about Microsoft’s Three Screens idea in the past and how Windows 8 is poised to make it a reality. What I haven’t mentioned until now is that Microsoft didn’t appear to have a solution for the TV screen, as the Xbox didn’t appear to be compatible with the WindowsRT framework that would underpin their platform unification. Rumours then began swirling that the next Xbox could be sporting an x86-compatible CPU, something which would make Metro apps possible. However SemiAccurate reports that it’s highly unlikely that the Durango CPU will be anything other than another PowerPC chip, effectively putting the kibosh on a Three Screens idea that involves the Xbox.

Now I don’t believe Microsoft is completely unaware of the foothold they have in the living room when it comes to the Xbox, so it follows that either Durango will have an x86 or ARM architecture (the 2 currently confirmed WinRT-compatible architectures) or WinRT will in fact work on the new Xbox. The latter is the interesting point to consider and there’s definitely some meat in that idea. Recall in the middle of last year that there was strong evidence to suggest that Windows 8 would be able to play Xbox360 games, suggesting that there was some level of interoperability between the two platforms (and by extension the Windows Phone 7 platform as well). Funnily enough, if this is the case then it’s possible that Metro apps could run on the Wii U, but I doubt we’ll ever see that happen.

Coincidentally Orbis, the PlayStation 3 successor, is said to be sporting an x64 CPU, in essence eliminating most of the differences between it and conventional PCs. Whilst the advantages of doing this are obvious (cross platform releases with only slight UI and controller modifications, for starters), the interesting point is that it almost guarantees there will be no backwards compatibility for PlayStation 3 games. Whilst the original PlayStation 3s contained actual PS2 hardware inside them, the vast majority simply emulated the PS2 in software, something the PlayStation 3 was quite capable of doing thanks to the immense power under its hood. Using a more traditional x64 CPU makes this kind of software emulation nigh on impossible, so backwards compatibility could only be achieved with either high end components or an actual Cell processor. As Ars Technica points out, it’s very likely that the next generation of consoles will be more in line with current hardware than the computational beasts their predecessors were, mostly because neither Microsoft nor Sony wants to sell consoles at a loss again.
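To see why software emulation is so costly, consider that an emulator must run a fetch-decode-dispatch loop for every single guest instruction, turning one guest operation into many host operations. The toy interpreter below is a heavily simplified sketch of that loop in Python (the instruction format and opcodes are invented for illustration, not the Cell’s actual ISA):

```python
def run(program, registers):
    """Interpret a list of (op, dest, a, b) guest 'instructions'."""
    pc = 0
    while pc < len(program):
        op, dest, a, b = program[pc]  # fetch + decode: pure overhead on the host
        if op == "add":               # dispatch: more overhead before any real work
            registers[dest] = registers[a] + registers[b]
        elif op == "mul":
            registers[dest] = registers[a] * registers[b]
        elif op == "mov":
            registers[dest] = a       # 'a' holds an immediate value here
        else:
            raise ValueError(f"unknown opcode {op}")
        pc += 1
    return registers

# Guest program computing r2 = (3 + 4) * 10: five guest instructions,
# each costing the host several operations of bookkeeping.
regs = run(
    [("mov", "r0", 3, None),
     ("mov", "r1", 4, None),
     ("add", "r2", "r0", "r1"),
     ("mov", "r3", 10, None),
     ("mul", "r2", "r2", "r3")],
    {},
)
assert regs["r2"] == 70
```

Multiply that per-instruction overhead across the Cell’s main core and its eight coprocessors running at 3.2GHz, and it becomes clear why a modestly specced x64 CPU has no hope of emulating it in real time.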

The aversion to this way of doing business, which both Microsoft and Sony adopted for all their past console releases, is an interesting one. Undoubtedly they’ve seen the success of Nintendo and Apple, who never sell hardware at a loss, and wish to emulate that success, but I think it has far more to do with the evolution of how a console gets used. Indeed on the Xbox360 more people use it for entertainment purposes than for gaming, and there are similar numbers for the PlayStation 3. Sony and Microsoft both recognise this and will want to capitalize on it with the next generation. This also means they can’t support their traditional business model of selling at a loss and making it up on the games, since a lot of consoles won’t see that many games purchased for them. There are other ways to make up this revenue shortfall, but that doesn’t necessarily mean they can keep using the console as a loss leader for their other products.

All this speculation also makes the idea of the SteamBox that much more interesting, as it no longer seems like so much of an outlier when lumped in with the next generation of consoles. There’s also strong potential that, should a console have an x86/x64 architecture, the Steam catalogue could come to the platform. Indeed the groundwork has already been done, with titles like Portal 2 offering a rudimentary level of Steam integration on the PlayStation 3, so it’s not much of a stretch to think that it will make a reappearance on the next generation.

It will be interesting to see how these rumours develop over the next year or so as we get closer to the speculated announcement. Suffice to say that the next generation of consoles will be very different beasts to their predecessors, with a much heavier focus on traditional entertainment. Whether this is a positive thing for the gaming world at large remains to be seen, but there’s no mistaking that some radical change is on the horizon.

Windows 8: First Step to the Realization of Three Screens.

The last two years have seen a major shake up in the personal computing industry. Whilst I’m loath to admit it, Apple was the one leading the charge here, redefining the smart phone space and changing the way many people did the majority of their computing by creating the wildly successful niche of curated computing (read: tablets). It is then inevitable that many subsequent innovations from rival companies are seen as reactions to Apple’s advances, even if the steps a company is taking are towards a much larger and broader goal than competing in the same market.

I am, of course, referring to Microsoft’s Windows 8 which was just demoed recently.

There’s been quite a bit of news about the upcoming release of Windows 8, with many leaked screenshots and even leaked builds giving us a lot of insight into what we can expect of the next version of Windows. For the most part the updates didn’t seem like anything revolutionary, although things like portable desktops and a more integrated web experience were looking pretty slick. Still, Windows 7 was far from revolutionary either, but the evolution from Vista was more than enough to convince people that Microsoft was back on the right track, and the adoption rates reflect that.

However the biggest shift coming with Windows 8 was known long before it was demoed: Windows 8 will run on ARM and other System on a Chip (SOC) devices. It’s a massive deviation from Microsoft’s current platform, which is wholly x86/x86-64 based, and it confirms Microsoft’s intentions to bring their full Windows experience to tablets and other low power/portable devices. The recent demo of the new operating system confirmed this, with Windows 8 having both the traditional desktop interface that we’re all familiar with and a more finger-friendly version that takes all of its design cues from the Metro interface seen on all Windows Phone 7 devices.

The differences between these two interfaces don’t just stop at which input device they were optimized for, either. Whilst all Windows 8 devices will be capable of running, in the traditional desktop interface, the huge back catalog of software developed for Windows over the past few decades, the new tablet-optimized interface relies on applications built using HTML5 and JavaScript. This is arguably done so that they are much more platform independent than their traditional Windows application cousins, which, whilst most likely able to run since .NET will be ported to the ARM and SOC architectures, won’t have been designed for the tablet environment. They’ll still be usable in a pinch of course, but you’d still want to rewrite them if a large number of your users were moving to the tablet/smartphone platform.

Looking at all these changes you can’t help but think they were all done in reaction to Apple’s dominance of the tablet space with their iPad. It’s true that a lot of the innovations Microsoft has made with Windows 8 mirror what Apple has achieved in the past year or so, however since Windows 8 has been in development for much longer than that, not all of them can be credited to Microsoft playing the me-too game. Realistically it’s far more likely that many of these innovations are Microsoft’s first serious attempts at realizing their Three Screens vision, and many of the changes in Windows 8 support this idea.

A lot of critics think the idea of bringing a desktop OS to a tablet form factor is doomed to failure. The evidence to support that view is strong too, since no Windows 7 (or any other desktop OS, for that matter) tablet has enjoyed even a fraction of the success that the dedicated tablet OSes have. However I don’t believe that Microsoft is simply making a play for the tablet market with Windows 8; what they’re really doing is providing a framework for building user experiences that remain consistent across platforms. The idea of being able to complete any task whether you’re on your phone, TV or dedicated computing device (which can be a tablet) is what is driving Microsoft to develop Windows 8 the way they are. Windows Phone 7 was their first step into this arena, its UI has been widely praised for its usability and design, and Microsoft’s commitment to using it in Windows 8 shows they are trying to blur the lines that currently exist between the three screens. The potential for .NET applications to run on x86, ARM and other SOC platforms seals the deal: there is little doubt that Microsoft is working towards a ubiquitous computing platform.

Microsoft’s execution of this plan is going to be vital for their continued success. Whilst they still dominate the desktop market, it’s being ever so slowly eroded by the bevy of curated computing platforms that do everything users need them to do and nothing more. We’re still a long way from everyone outright replacing their PCs with tablets and smart phones, but the writing is on the wall for a sea change in the way we all do our computing. Windows 8 is shaping up to be Microsoft’s way of re-establishing themselves as the tech giant to beat, and I’m sure the next year is going to be extremely interesting for fans and foes alike.