Posts Tagged ‘hp’


HP’s “The Machine”: You’d Better Deliver on This, HP.

Whilst computing has evolved exponentially in terms of capabilities and raw performance, the underlying architecture that drives it has remained largely the same for the past 30 years. The vast majority of platforms are either x86 or some other CISC variant running on a silicon wafer that’s been lithographed to have millions (and sometimes billions) of transistors etched into it. This is then connected up to various other components and storage through an assortment of bus definitions, most of which have changed dramatically in the face of new requirements. There’s nothing particularly wrong with this model; it’s served us well and has kept within the bounds of Moore’s Law for quite some time. However there’s always the nagging question of whether there’s another way to do things, perhaps one that will be much better than anything we’ve done before.

According to HP, their new concept, The Machine, is the answer to that question.

HP The Machine High Level Architecture

 

For those who haven’t yet read about it (or watched the introductory video on the technology) HP’s The Machine is set to be the next step in computing, taking the most recent advances in computer technology and using them to completely rethink what constitutes a computer. In short there are 3 main components that make it up, 2 of which are based on technologies that have yet to see a commercial application. The first appears to be a Sony Cell-like approach to computing cores, essentially combining numerous smaller cores into one big computing pool which can then be activated at will, technology which currently powers their Moonshot range of servers. The second piece is optical interconnects, something which has long been discussed as the next stage in computing but has yet to make inroads at the level HP is talking about. Finally there’s the idea of “universal memory”, which is essentially memristor storage, something HP Labs has been teasing for some time without ever bringing a product to light.

As an idea The Machine is pretty incredible, taking the best of breed technology for every subsystem of the traditional computer and putting it all together in the one place. HP is taking the right approach with it too as, whilst The Machine might share some common ancestry with regular computers (I’m sure the “special purpose cores” are likely to be x86), current operating systems make a whole bunch of assumptions that won’t be compatible with its architecture. Thankfully they’ll be open sourcing Machine OS, which means it won’t be long before other vendors are able to support it. It would be all too easy for them to create another HP-UX: a great piece of software in its own right that no one wants to touch because it’s just too damn niche to bother with. That being said, the journey between this concept and reality is a long one, fraught with the very real possibility of it never happening.

You see, whilst all of the technologies that make up The Machine might be real in one sense or another, 2 of them have yet to see a commercial release. The memristor based storage was “a couple years away” after the original announcement by HP, yet here we are, some 6 years later, and not even a prototype device has managed to rear its head. Indeed HP said last year that we might see memristor drives in 2018 if we’re lucky, while the roadmap shown in the concept video has the first DIMMs appearing sometime in 2016. Similar things can be said for optical interconnects: whilst they’ve existed at the large scale for some time (fibre interconnects for storage are fairly common) they have yet to be created for the low level interconnects that The Machine would require. HP’s roadmap for getting this technology to market is much less clear, something HP will need to get right if they don’t want the whole concept to fall apart at the seams.

Honestly my scepticism comes from a history of being disappointed by concepts like this, with many of them promising the world in terms of computing and almost always failing to deliver. Even some of the technology contained within The Machine has already managed to disappoint me, with memristor storage remaining vaporware despite numerous publications saying it was mere years away from commercial release. This is one of those times I’d love to be proven wrong though, as nothing would make me happier than to see a true revolution in the way we do computing, one that would hopefully enable us to do so much more. Until I see real pieces of hardware from HP, however, I’ll remain sceptical, lest I get my feelings hurt once again.


HP Charging For Updates? Not Sure if Don’t Want.

It’s every system administrator’s dream to only be working on the latest hardware running the most recent software available. This is partially due to our desire to be on the cutting edge of all things, where new features abound and functionality is at its peak. However the reality is always far from that nirvana, with the majority of our work being on systems that are years old running pieces of software that haven’t seen meaningful updates in years. That’s why few tears have been shed by administrators worldwide about XP’s impending demise, as it signals the end of the need to support something that’s now over a decade old. Of course this is much to the chagrin of end users and big enterprises who have yet to make the transition.

Indeed big enterprises are rarely on the cutting edge and thus rely on extended support programs in order to keep their fleets maintained. This is partially due to the amount of inertia big corporations have, as making a change to potentially thousands of endpoints takes careful planning and execution. Additionally the impacts to the core business should not be underestimated and must be taken into careful consideration before the move to a new platform is made. With this in mind it’s really no surprise that corporations often buy support contracts that run for 3 or 5 years on the underlying hardware, as that ensures they won’t have to make disruptive changes during that time frame.

HP Care Packs

So when HP announced recently that it would be requiring customers to have a valid warranty or support agreement with them in order to get updates I found myself in two minds about it. For most enterprises this will be a non-issue, as running hardware that’s out of warranty is begging for trouble and not many have the appetite for that kind of risk. Indeed I actually thought this would be a good thing for enterprise level IT as it would mean I wouldn’t be cornered into supporting out of warranty hardware, something which has caused me numerous headaches in the past. On the flip side though this change does affect something that is near and dear to my heart: my little HP MicroServer.

This new decision means that this little server only gets updates for a year after purchase, after which you’re up for at least $100 for an HP Care Pack which extends the warranty out to 5 years and provides access to all the updates. Whilst I missed the boat on the install issues that plagued its initial release (I got mine after the update came out) I can see the same thing happening again with similar hardware models. Indeed the people hit hardest by this change are likely the ones least able to afford a support plan of this nature (i.e. smaller businesses), who are also the typical candidates for running hardware that’s out of a support arrangement. I can empathise with their situation, but should I find myself needing an update for them and unable to get it due to their lack of support arrangements, I’d be the first one to tell them so.

Indeed the practice isn’t too uncommon, with the majority of other large vendors requiring something on the order of a subscription in order to get product updates, the only notable exception being Dell (full disclosure: I work for them). I’ll agree that it appears to be a bit of a cash grab, as HP’s server business hasn’t been doing too well in recent quarters (although no one has done particularly well, to be honest), although I doubt it will make up much of that recent downturn. This might also spur some customers on to purchase newer hardware whilst freeing up resources within HP that no longer need to support previous generations of hardware.

So I guess what I’m getting at is that whilst I can empathise with the people who will be hard done by with this change, I, as someone who has to deal with warranty/support calls, don’t feel too put out by it. Indeed any admin worth their salt could likely get their hands on the updates without having to resort to the official source anyway. If the upkeep on said server is too much for you to afford then it’s likely time to rethink your IT strategy, potentially looking at cloud based solutions that have a very low entry cost when compared to upgrading a server.


Steam In Home Streaming: Results From the Field.

My stance on Cloud Gaming is well known and, honestly, barring some major breakthrough in several technological areas (graphics cards, available bandwidth, etc.) I can’t see it changing any time soon. The idea of local streaming however is something I’m on board with, as there have already been numerous proven examples where it can work, a couple of which I’ve actually used myself. So when I heard that Valve was going to enable In Home Streaming as a feature of Steam I was pretty excited, as there have been a couple of times where I’ve found myself wanting to play games installed on my main PC on other computers in the house. Valve widened the beta last week to include a lot more people and I was lucky enough to snag an invite, so I gave In Home Streaming a look over during the Australia Day long weekend.

Steam In Home Streaming

The setup couldn’t be simpler. At this stage you have to opt into the Steam client beta, which requires you to redownload the client (around 80 MB at the time of writing), and then sign into both machines using the same account. The last time I remember trying to do that I was told I was already logged in somewhere else and thus couldn’t log in, but it seems this client version has no such limitation. Once you’re logged into both machines you should be greeted with a list of available games that matches your main machine perfectly and, when you go to play one of them, you’ll have the option to either install it locally or stream it from the other machine.

Clicking on stream will start the game on the machine it’s installed on and, should everything go according to plan, it will then appear in a window on the machine you’re streaming to. The first thing you’ll notice though is that the game fully runs on the other machine, including displaying the graphics and playing sound. This can be somewhat undesirable and, whilst it’s easily remedied, it shows you what kind of streaming is actually occurring (i.e. DirectX mirroring). Using such technology also places some limitations on what can and can’t be streamed by simply clicking on the stream button, but there are ways around it.

I first tried this on my media PC, an HP MicroServer with a Radeon HD6450 1GB installed in it. Now this machine can handle pretty much any kind of content you can throw at it, although I have had it struggle with some high bitrate 1080p files. This was somewhat improved by using newer drivers and later builds of VLC, so I was pretty confident it could handle a similar stream over the network. Whilst it worked, the frame rates were pretty dismal, even in games that weren’t particularly graphically intense. Considering the primary use case for this is underpowered machines taking advantage of the grunt other PCs in the house can provide, this was a little disappointing, but I decided I’d give it a go on my Zenbook before passing judgement.

The much better hardware of the Zenbook improved the experience greatly, with all the games I tested on it running nigh on perfectly. There were a couple of issues to report: when the stream broke there didn’t seem to be a way to restart it, so I was just left with a black screen and audio playing. The differing resolutions also meant I was playing with a boxed perspective, which was a tad annoying, and, unfortunately, it appears you’re limited to the resolutions of the box you’re streaming from (I couldn’t run DOTA 2 at 1080p as my monitors are 1680 x 1050). Still the performance was good enough that I could play FPS games on it, although I wasn’t game enough to try an online match.

Overall I’m very impressed with what Valve has delivered with In Home Streaming as it’s pretty much what I expected, bar it being so damn easy to set up and use. Whilst I’m sure they’ll improve the performance over time it does speak volumes to the fact that the end point does matter and that you will have a worse experience on low powered hardware. Still, even then it was usable for my use case (watching in game DOTA 2 replays) and I’m sure that it would be good enough in its current form for a lot of people.

The Cloud Wars Are About to Begin.

With virtualization now as much of a pervasive idea in the datacentre as storage area networks or under floor cooling, the way has been paved for the cloud to make its way there as well, and has been for quite some time now. There are now many commercial off the shelf solutions that allow you to incrementally implement the multiple levels of the cloud (IaaS -> PaaS -> SaaS) without the need for a large operational expenditure in developing the software stack at each level. The differentiation now comes from things like added services, geographical location and pricing, although even that is already turning into a race to the bottom.

The big iron vendors (Dell, HP, IBM) have noticed this and, whilst they could still sustain their current business quite well by providing the required tin to the cloud providers (the compute power is shifted, not necessarily reduced), they’re all starting to look at creating their own cloud solutions so they can continue to grow their business. I covered HP’s cloud solution last week after the HP Cloud Tech Day but recently there’s been a lot of news coming out regarding the other big players, both from the old big iron world and the more recently established cloud providers.

First cab off the rank was Dell, who are apparently gearing up to make a cloud play. Now if I’m honest that article, whilst it contains a whole lot of factual information, felt a little speculative to me, mostly because Dell hasn’t tried to sell me on the cloud idea when I’ve been talking to them recently. Still, after doing a small bit of research I found that not only are Dell planning to build a global network of datacentres (where global usually means everywhere but Australia), they announced plans to build one in Australia just on a year ago. Combine this with their recent acquisition spree that included companies like Wyse and it seems highly likely that this will be the backbone of their cloud offering. What that offering will be is still up for speculation, but it wouldn’t surprise me if it was yet another OpenStack solution.

Mostly because RackSpace, probably the second biggest general cloud provider behind Amazon Web Services, just announced that their cloud will be compatible with the OpenStack API. This comes hot on the heels of another announcement that both IBM and RedHat would become contributors to the OpenStack initiative, although there’s no word yet on whether they have a view to implementing the technology in the future. Considering that both HP and Dell are already showing their hands with their upcoming cloud strategies, it would seem that becoming OpenStack contributors is the first step towards some form of IBM cloud. They’d be silly not to given their share of the current server market.

Taking all of this into consideration it seems we’re approaching a point of convergence in the cloud computing industry. I wrote early last year that one of the biggest drawbacks to the cloud was its proprietary nature and it seems the big iron providers noticed that this was a concern. The reduction in vendor lock-in lowers the barriers to entry for many customers significantly and provides a whole host of other benefits, like being able to take advantage of disparate cloud providers for service redundancy. As I said earlier, the differentiation between providers will then predominately come from value-add services, much like it did for virtualization in the past.

This is the beginning of the cloud war, where all the big players throw their hats into the ring and duke it out for our business. It’s a great thing for both businesses and consumers, as the quality of products will increase rapidly and prices will continue on a downhill trend. It’s quite an exciting time, one akin to the virtualization revolution that started happening almost a decade ago. As always I’ll be following these developments keenly, as the next couple of years will be something of a proving ground for all cloud providers.

HP Cloud Tech Day.

So as you’re probably painfully aware (thanks to my torrent of tweets today) I spent all of today sitting down with a bunch of like minded bloggers for HP’s Cloud Tech Day, which primarily focused on their recent announcement that they’d be getting into the cloud business. They were keen to get our input as to what the current situation was in the real world in relation to cloud services adoption and what customers were looking for, with some surprising results. If I’m completely honest it was aimed more at the strategic level than the nuts and bolts kind of tech day I’m used to, but I still got some pretty good insights out of it.

For starters HP is taking a rather unusual approach to the cloud. Whilst it will be offering something along the lines of the traditional public cloud like all other providers, they’re also going to attempt to make inroads into the private cloud market whilst creating a new kind of cloud offering they’re dubbing “managed cloud”. The kicker is that should you implement an application on any of those cloud platforms you’ll be able to move it seamlessly between them, effectively granting you the elusive cloud bursting ability that everyone wants but no one really has. All the tools across all 3 platforms are the same too, enabling you to have a clear idea of how your application is behaving no matter where it’s hosted.

The Managed Cloud idea is an interesting one. It takes the idea of a private cloud, i.e. one you host yourself, and instead of you hosting it HP will host it for you. Essentially it takes away the infrastructure management worry that a private cloud still presents whilst allowing you to keep most of the benefits of a private cloud. They mentioned that they already have a customer using this kind of deployment for their email infrastructure, which had the significant challenge of keeping all data on Australian shores whilst the IT department still retained some level of control over it.

How they’re going to go about this is still something of a mystery but there are some little titbits that give us insight into their larger strategy. HP isn’t going to offer a new virtualization platform to underpin this technology; it will in fact utilize whatever current virtual infrastructure you have. What HP’s solution will do is abstract that platform away so you’re given a consistent environment to implement against, which is what enables HP Cloud enabled apps to work across the varying cloud platforms.

Keen readers will know that this is the kind of cloud platform I’ve been predicting (and pining for) for some time. Whilst I’m still really keen to get under the hood of this solution to see what makes it tick and how applicable it will be, I have to say that HP has done their research before jumping into this. Many see cloud computing as some kind of panacea for all their IT ills when in reality it’s just another solution for a specific set of IT problems. Right now that’s centred around commodity services like email, documents, ERP and CRM and of course that umbrella will continue to expand into the future, but there will always be those niche apps that won’t fit well into the cloud paradigm. Well, not at a price point customers would be comfortable with, anyway.

What really interested me were the parallels that can easily be drawn between the virtualization revolution and the burgeoning cloud industry. Back in the day there was really only one player in each space (VMware, Amazon) but as time went on many other players came online. Initially those competitors had to play feature catch up with the number 1. The biggest player then noticed they were catching up quickly (through a combination of agility, business savvy and usually snapping up a couple of disgruntled employees) and reacted by providing value add services above the base functionality level. The big players in virtualization (Microsoft, VMware and Citrix) are just about at feature parity for base hypervisor capabilities, but VMware has stayed ahead by creating a multitude of added services. Their lead is starting to shrink though, which I’m hoping will spur a fresh wave of innovation.

Applying this to the cloud world it’s clear that HP has seen that there’s no sense in competing with cloud providers at the base level; it’s a fool’s gambit. Amazon has the cheap bulk computing services thing nailed and if all you’re doing is offering the same services then the only differentiator you’ll have is price. That’s not exactly a weapon against Amazon, who could easily absorb losses for a quarter whilst it watches you squirm as your margins plunge into the red. No, instead HP is positioning themselves as a value add cloud provider, with an offering that works at multiple levels. The fact that you can move seamlessly between them is probably all the motivation most companies will need to give them a shot.

Of course I’m still a bit trepidatious about the idea because I haven’t seen much past the marketing blurb. As with all technology products there will be limitations, and until I can get my hands on the software (hint hint) I can’t get too excited about it. It’s great to see HP doing so much research and engaging with the public in this way, but the final proof will be in the pudding, something I’m dying to see.


Fusion-IO’s ioDrive Comparison: Sizing up Enterprise Level SSDs.

Of all the PC upgrades I’ve ever done, the one that’s most notably improved the performance of my rig is, by a wide margin, installing an SSD. Whilst good old fashioned spinning rust disks have come a long way in recent years in terms of performance, they’re still far and away the slowest component in any modern system. This is what chokes most PCs’ performance, as the disk is a huge bottleneck that slows everything down to its pace. The problem can be mitigated somewhat by using several disks in a RAID 0 or RAID 10 set, but all of those pale in comparison to even a single SSD.

The problem doesn’t go away in the server environment either; in fact most of the server performance problems I’ve diagnosed have had their roots in poor disk performance. Over the years I’ve discovered quite a few tricks to get around the problems presented by traditional disk drives but there are just some limitations you can’t overcome. Recently at work the issue of disk performance came to a head again as we investigated the possibility of using blade servers in our environment. I casually made mention of a company I had heard of a while back, Fusion-IO, who specialise in making enterprise class SSDs. The possibility of using one of the Fusion-IO cards as a massive cache for the slower SAN disk was a tantalizing prospect and, to my surprise, I was able to snag an evaluation unit in order to put it through its paces.

The card we were sent was one of the 640GB ioDrives. It’s surprisingly heavy for its size, sporting gobs of NAND flash and a massive heat sink that hides the proprietary controller. What intrigued me about the card initially was that the NAND didn’t sport any branding I recognised (usually it’s something recognisable like Samsung) but as it turns out each chip is a 128GB Micron NAND flash package. If all that storage was presented raw it would total some 3.1TB, and that’s telling of the underlying architecture of the Fusion-IO devices.

The total storage available to the operating system once this card is installed is around 640GB (600GB usable). Now to get that kind of storage out of the Micron NAND chips you’d only need 5 of them, but the ioDrive comes with a grand total of 25 dotting the board, and no traditional RAID scheme can account for the amount of storage presented. So based on the fact that there are 25 chips and only 5 chips’ worth of capacity available, it follows that the Fusion-IO card uses quintuplet sets of chips to provide the high level of performance that they claim. That’s an incredible amount of parallelism and, if I’m honest, I expected these chips to all be 256MB parts that were RAID 1’d together to make one big drive.
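The arithmetic behind that inference is simple enough to sketch out. A minimal example is below: the chip capacity, chip count and presented capacity come straight from the numbers above, while the 5x figure is my own inference from them rather than anything Fusion-IO has confirmed.

```python
# Back-of-the-envelope arithmetic for the ioDrive's NAND layout. The chip
# capacity and count come from the post; the 5x figure is an inference
# from those numbers, not anything Fusion-IO has confirmed.

CHIP_CAPACITY_GB = 128   # each Micron NAND package
CHIP_COUNT = 25          # packages dotted around the board
PRESENTED_GB = 640       # capacity presented to the OS (600GB usable)

raw_gb = CHIP_CAPACITY_GB * CHIP_COUNT
print(f"Raw NAND on board: {raw_gb} GB "
      f"(~{raw_gb / 1024:.1f} TB if you count 1 TB as 1024 GB)")

# Only a fraction of that raw capacity is exposed to the OS
chips_worth_exposed = PRESENTED_GB / CHIP_CAPACITY_GB
print(f"Capacity exposed: {PRESENTED_GB} GB, "
      f"or {chips_worth_exposed:.0f} chips' worth out of {CHIP_COUNT}")

# The implied ratio of raw to presented capacity, i.e. the degree of
# redundancy/parallelism across the sets of chips
print(f"Raw-to-presented ratio: {raw_gb / PRESENTED_GB:.0f}x")
```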

Funnily enough I did find some Samsung chips on the card: two 1GB DDR2 chips. These are most likely used by the CPU on the ioDrive, which would have a front side bus of either 333MHz or 400MHz based on the RAM speed.

But enough of the techno geekery; what’s really important is how well this thing performs in comparison to traditional disks and whether or not it’s worth the $16,000 price tag that comes along with it. I had done some extensive testing of various systems in the past in order to ascertain whether the new Dell servers we were looking at were going to perform as well as their HP counterparts. All of this testing was purely disk based using IOMeter, a disk load simulator that tests and reports on nearly every statistic you’d want to know about your disk subsystem. If you’re interested in replicating the results I’ve uploaded a copy of my configuration file here. The servers included in the test are the Dell M610x, Dell M710HD, Dell M910, Dell R710 and an HP DL380G7. For all the tests (bar the two labelled local install) each server is a base install of ESXi 5 with a Windows 2008 R2 virtual machine installed on top of it. The specs of the virtual machine are 4 vCPUs, 4GB RAM and a 40GB disk.

As you can see the ioDrive really is in a class of its own. The only server that comes close in terms of IOPS is the M910 and that’s because it’s sporting 2 Samsung SSDs in RAID 0. What impresses me most about the ioDrive though is its random performance, which manages to stay quite high even as the block size starts to get bigger. Although it’s not shown in these tests, the one area where the traditional disks actually equal the Fusion-IO is throughput once you get up to really large write sizes, on the order of 1MB or so. I put this down to the fact that the servers in question, the R710s and DL380G7s, have 8 disks in them that can pump out some serious bandwidth when they need to. If I had 2 Fusion-IO cards though I’m sure I could easily double that performance figure.
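The reason the big arrays can catch up at large write sizes comes down to simple arithmetic: throughput is just IOPS multiplied by block size. A quick sketch of that relationship is below; the IOPS figures in it are purely illustrative assumptions to show the shape of the effect, not numbers from my tests.

```python
# Throughput is IOPS x block size, which is why an 8-disk array can match
# a flash card on big sequential writes while being nowhere near it on
# small random IO. All IOPS figures below are illustrative assumptions.

def throughput_mb_s(iops: float, block_size_kb: float) -> float:
    """Convert an IOPS figure at a given block size into MB/s."""
    return iops * block_size_kb / 1024

scenarios = [
    ("Flash card, 4KB random",   70_000,    4),
    ("8-disk array, 4KB random",  1_500,    4),
    ("Flash card, 1MB writes",      700, 1024),
    ("8-disk array, 1MB writes",    700, 1024),
]

for name, iops, block_kb in scenarios:
    mb_s = throughput_mb_s(iops, block_kb)
    print(f"{name:28s} {iops:>7,} IOPS @ {block_kb:>4} KB = {mb_s:7.1f} MB/s")
```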

What interested me next was to see how close I could get to the spec sheet performance. The numbers I just showed you are already impressive, but Fusion-IO claims this particular drive is capable of something on the order of 140,000 IOPS if I played my cards right. Using the local install of Windows 2008 I had on there I fired up IOMeter again and set up some 512B tests to see if I could get close to those numbers. The results, as shown in the Dell IO controller software, are below:

Ignoring the small blip in the centre where I had to restart the test, you can see that whilst the ioDrive is capable of some pretty incredible IO, the advertised maximums are more theoretical than practical. I tried several different tests and, while a few averaged higher than this (approximately 80K IOPS was my best), it was still a far cry from the figures they quote. Had they gotten within 10~20% I would’ve given it to them, but whilst the ioDrive’s performance is incredible it’s not quite as incredible as the marketing department would have you believe.
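Putting that best run against the spec sheet makes the gap concrete; the two figures below are simply the claimed maximum and my rough best average from above.

```python
# Best observed result versus the spec-sheet claim for the 512B tests.
ADVERTISED_IOPS = 140_000    # Fusion-IO's quoted maximum for this drive
BEST_OBSERVED_IOPS = 80_000  # roughly the best average I saw in IOMeter

achieved = BEST_OBSERVED_IOPS / ADVERTISED_IOPS
print(f"Best run reached {achieved:.0%} of the advertised maximum")
print(f"Shortfall: {1 - achieved:.0%}, well outside the 10~20% I'd have let slide")
```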

As a piece of hardware the Fusion-IO ioDrive really is the next step up in terms of performance. The virtual machines I had running directly on the card were considerably faster than their spinning rust counterparts and, if you were in need of some really crazy performance, you really couldn’t go past one of these cards. For the purpose we had in mind for it however (putting it inside an M610x blade) I can’t really recommend it, as that’s a full height blade that only has the power of a half height. The M910 represents much better value with its crazy CPU and RAM count, and its SSDs, whilst far from Fusion-IO level, do a pretty good job of bridging the disk performance gap. I didn’t have enough time to see how it would improve some real world applications (it takes me longer than 10 days to get something like this into our production environment) but based on these figures I have no doubt it would considerably improve the performance of whatever I put it into.

The Memristor: Moore’s Law Gets a Jolt.

The computer (or whatever Internet capable device you happen to be viewing this on) is made up of various electronic components. For the most part these are semiconductors, devices which allow the flow of electricity but don’t do it readily, but there’s also a lot of supporting electronics made up of what we call the fundamental components of electronics. As almost any electrical enthusiast will tell you there are 3 such components: the resistor, the capacitor and the inductor, each with its own set of properties that makes it useful in electronic circuits. There’s been speculation about a 4th fundamental component for about 40 years, but before I talk about that I’ll need to give you a quick run down on the properties of the current fundamentals.

The resistor is the simplest of the lot; all it does is impede the flow of electricity. They’re quite simple devices, usually a small brown package banded by 4 or more colours which denote just how resistive it actually is. Resistors are often used as current limiters, as the amount of current that can pass through them is directly related to the voltage across them and the level of resistance. In essence you can think of them as narrow pathways that electric current has to squeeze through.

Capacitors are intriguing little devices and can be best thought of as batteries. You’ve seen them if you’ve taken apart any modern device, as they’re those little canister looking things attached to the main board of said device. They work by storing charge in an electrostatic field between two metal plates that are separated by an insulating material called a dielectric. Modern day capacitors are essentially two metal plates and the dielectric rolled up into a cylinder, something you could see if you cut one open. I’d only recommend doing this with a “solid” capacitor as the dielectrics used in other capacitors are liquids and tend to be rather toxic and/or corrosive.

Inductors are very similar to capacitors in that they also store energy, but instead of an electrostatic field they store it in a magnetic field. Again you’ve probably seen them if you’ve cracked open any modern device (or, say, looked inside your computer) as they look like little rings of metal with wire coiled around them. They’re often referred to as “chokes” as they tend to oppose changes in the current that induces the magnetic field within them, and at high frequencies they’ll appear as a break in the circuit, useful if you’re trying to keep alternating current out of your circuit.

For quite a long time these 3 components formed the basis of all electrical theory and nearly any component could be expressed in terms of them. However back in 1971 Leon Chua explored the symmetry between these fundamental components and inferred that there should be a 4th: the memristor. The name is a combination of memory and resistor, and Chua stated that this component would not only have the ability to remember its resistance but also have it changed by passing current through it; passing current in one direction would increase the resistance and reversing it would decrease it. The implications of such a component would be huge, but it wasn’t until 37 years later that the first memristor was created by researchers in HP’s labs division.
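For those who like the symmetry spelled out, the standard way of framing Chua’s argument is in terms of the relationships between the four circuit variables (voltage, current, charge and flux). This is textbook material rather than anything specific to HP’s device, but it shows neatly which pairing the memristor fills in: the other three elements link every combination of the variables except flux and charge.

```latex
% The four basic two-terminal elements as relations between the circuit
% variables: voltage v, current i, charge q and magnetic flux \varphi.
\begin{align*}
  \text{Resistor:}  \quad dv       &= R\, di \\
  \text{Capacitor:} \quad dq       &= C\, dv \\
  \text{Inductor:}  \quad d\varphi &= L\, di \\
  \text{Memristor:} \quad d\varphi &= M\, dq
\end{align*}
```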

What’s really exciting about the memristor is its potential to replace other solid state storage technologies like Flash and DRAM. Due to the memristor’s simplicity they are innately fast and, best of all, they can be integrated directly onto the die of processors. If you look at the breakdown of a current generation processor you’ll notice that a good portion of the silicon is dedicated to cache, or onboard memory. Memristors have the potential to boost the amount of onboard memory to extraordinary levels, and HP believes they’ll be doing that in just 18 months:

Williams compared HP’s resistive RAM technology against flash and claimed to meet or exceed the performance of flash memory in all categories. Read times are less than 10 nanoseconds and write/erase times are about 0.1-ns. HP is still accumulating endurance cycle data at 10^12 cycles and the retention times are measured in years, he said.

This creates the prospect of adding dense non-volatile memory as an extra layer on top of logic circuitry. “We could offer 2-Gbytes of memory per core on the processor chip. Putting non-volatile memory on top of the logic chip will buy us twenty years of Moore’s Law,” said Williams.

To put this in perspective, Intel’s current flagship CPU ships with a total of 8MB of cache that’s shared between 4 cores. A similar memristor based CPU would have a whopping 8GB of on board cache, effectively negating the need for external DRAM. Couple this with a memristor based external drive for storage and you’d have a computer that’s literally decades ahead of the curve in terms of what we thought was possible, and Moore’s Law can rest easy for a while.
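A quick bit of arithmetic makes the scale of that jump obvious; the figures below are just the ones quoted above (8MB shared cache, 4 cores, Williams’ 2GB per core), nothing more.

```python
# The scale of the jump: today's shared on-die cache versus the per-core
# non-volatile memory Williams is describing. Figures are the ones quoted
# in the post.
CURRENT_CACHE_MB = 8        # Intel's flagship at the time: 8MB shared cache
CORES = 4
MEMRISTOR_PER_CORE_GB = 2   # Williams' "2-Gbytes of memory per core"

memristor_total_mb = MEMRISTOR_PER_CORE_GB * CORES * 1024
print(f"Conventional on-die cache: {CURRENT_CACHE_MB} MB shared by {CORES} cores")
print(f"Memristor on-die memory:   {memristor_total_mb} MB "
      f"({memristor_total_mb // 1024} GB)")
print(f"Roughly a {memristor_total_mb // CURRENT_CACHE_MB:,}x increase")
```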

This kind of technology isn’t your usual pie in the sky “it’ll be available in the next 10 years” malarkey; this is the real deal. HP isn’t the only one looking into this either; Samsung (one of the world’s largest flash manufacturers) has also been aggressively pursuing the technology and will likely début products around the same time. For someone like me it’s immensely exciting as it shows that there are still many great technological advances ahead of us, just waiting to be uncovered and put into practice. I can’t wait to see how the first memristor devices perform as they will truly be a generational leap in technology.

 

HP, WebOS and the Future of the Tablet Space.

So last Friday saw the announcement that HP was spinning off their WebOS/tablet division, a move that sent waves through the media and blogosphere. Despite being stuck for decent blog material on the day I didn’t feel the story had enough legs to warrant investigation; I mean, anyone but the most dedicated of WebOS fans knew the platform wasn’t going anywhere fast. Heck, it took me all of 30 seconds on Google to find these latest figures that have it pegged at somewhere around 2%, right there with Symbian (those are smartphone figures, not overall mobile), trailing the apparently “failing” Windows Phone 7 platform by a whopping 7%. Thus the announcement that they were going to dump the whole division wasn’t much of a surprise and I set about trying to find something more interesting to write about.

Over the weekend though the analysts have got their hands on some juicy details that I can get stuck into.

Now alongside the announcement that WebOS was getting the boot, HP also announced that it was considering exiting the PC hardware business completely. At first glance that would seem like a ludicrous idea, as that division was their largest with almost $10 billion in revenue, but their enterprise services division (which is basically what used to be EDS) is creeping up on it quite quickly. Such a move also wouldn’t see them exit the server hardware business, which would be a rather suicidal move considering they’re the second largest player there with 30% of the market. Rather, it seems HP wants out of the consumer end of the market and wants to focus on enterprise software, services and the hardware that supports them.

It’s a move that several similar companies have taken in the past when faced with downward trending revenues in the hardware sector. Back when I worked at Unisys I can remember them telling me how they now derive around 70% of their revenue from outsourcing initiatives and only 30% from their mainframe hardware sales. They used to be a mostly hardware oriented company but switched to professional services and outsourcing after several years of negative growth. HP on the other hand doesn’t seem to be suffering any of these problems, which begs the question: why would they bother exiting what seems to be a lucrative market for them?

It was a question I hadn’t really considered until I read this post from John Gruber. Now I’d known that HP had gotten a new CEO since Mark Hurd was ejected over that thing with former Playboy girl Jodie Fisher (and his expense account, but that’s nowhere near as fun to write), but I hadn’t caught up with who they’d hired as his replacement. Turns out it’s former SAP CEO Leo Apotheker. Suddenly their decisions to spin off WebOS (and potentially their PC division) make a lot of sense, as that’s the kind of company Apotheker has quite a lot of experience running. Couple that with their decision to buy Autonomy, another enterprise software company, and it seems almost certain that HP is heading towards the end goal of being a primarily services based company.

Of course with HP exiting the consumer market after being in it for such a short time, people started to wonder if there was ever going to be a serious competitor to Apple’s offerings, especially in the tablet market. Indeed it doesn’t look good for anyone trying to crack into that market as it’s pretty much all Apple all the time, and if a multi-billion dollar company can’t do it then there’s not much hope for anyone else. However Android has made some impressive inroads into this Apple dominated niche, securing a solid 20% of the market. Just as it did with the iPhone before it, no single vendor will come along to completely decimate Apple in this space; rather, Android’s dominance will come from the sheer variety on offer. We’ve yet to see a Galaxy S2-esque release in the Android tablet space but I’m sure one’s not too far off.

It’ll be interesting to see how HP evolves over the next year or so under Apotheker’s leadership, as its current direction is vastly different to that of the HP of the past. This isn’t necessarily a good or bad thing for the company either, as whilst they might not have any cause for concern now, making this transition early could avoid the pain of attempting it further down the track. The WebOS spin off is just the first step in this long journey for HP and there will be many more for them to take if they’re to make the transition to a professional services company.