I can remember my first encounter with virtual reality way back in the 90s. It was a curiosity more than anything else, something that was available at this one arcade/pizza place in the middle of town. You’d go in and there it would be: two giant platforms containing people with their heads strapped into oversized headgear. On the screens behind them you could see what they were seeing, a crude polygonal world inhabited by the other player and a pterodactyl. I didn’t really think much of it at the time, mostly since I couldn’t play it anywhere but there (and that was an hour’s drive away), but as I grew older I always wondered what had become of that technology. Today VR is on the cusp of becoming mainstream and it looks like Google wants to thrust it into the limelight.
Meet Google Cardboard, the ultra low cost virtual reality headset that Google gave out to every attendee at I/O this year. It’s an incredibly simple idea, using your smartphone’s screen and a pair of lenses to send a different image to each of your eyes. Indeed, if you were so inclined, a similar system could be used to turn any screen into a VR headset, although the lenses would need to be crafted for the right dimensions. With that in mind the range of handsets that Google Cardboard supports is a little limited, mostly to Google Nexus handsets and some of their closely related cousins, but I’m sure that future incarnations supporting a wider range of devices won’t be too far off. Indeed, if the idea has piqued your interest, you can get an unofficial version of it for the low cost of $25, a bargain if you’re looking to dabble with VR.
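To make the idea concrete, here’s a minimal sketch of the two transforms a Cardboard-style viewer app performs: splitting the screen into a viewport per eye, and pre-warping each eye’s image with a radial (barrel) distortion so the cheap lenses’ pincushion distortion cancels out. The distortion coefficients below are illustrative guesses, not Cardboard’s actual calibration values.

```python
def eye_viewports(screen_w, screen_h):
    """Split a landscape screen into left/right halves, one per eye.

    Returns two (x, y, width, height) tuples.
    """
    half = screen_w // 2
    return (0, 0, half, screen_h), (half, 0, half, screen_h)


def barrel_distort(x, y, k1=0.22, k2=0.24):
    """Push a point (in normalised coords, origin at the lens centre)
    outward by a polynomial in r^2 -- the usual radial distortion model.
    k1 and k2 are hypothetical lens coefficients for illustration.
    """
    r2 = x * x + y * y
    scale = 1.0 + k1 * r2 + k2 * r2 * r2
    return x * scale, y * scale
```

A real renderer would draw the scene twice, once per viewport with slightly offset cameras, then apply the warp as a final pass; the sketch only shows the geometry of those two steps.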
Compared to the original Oculus Rift’s specs most smartphones are more than capable of driving Google Cardboard with an acceptable level of performance. My current phone, the Sony Xperia Z, has a full 1080p resolution and enough grunt to run some pretty decent 3D applications. That, combined with the bevy of sensors in most modern smartphones, makes Google Cardboard a pretty brilliant little platform for testing out what you can do with VR. Of course that also means the experience will vary wildly depending on what handset you have, but for those looking for a cheap platform to validate ideas on it’s hard to argue against it.
Of course this raises the question as to what Google’s larger plan is for introducing this concept to the world. Ever since the breakaway success of the Oculus Rift it’s been obvious that there’s consumer demand for VR, and it only seems to be increasing as time goes on. However most applications are contained solely within the games industry, with only a few interesting experiments (like Living with Lag) breaking outside that mould. There are a ton of augmented reality applications on Android which could potentially benefit from widespread adoption of something like Cardboard, however beyond that I’m not so sure.
I think it’s probably a gamble on Google’s part, as history has proven that throwing a concept out to the masses is a great way to root out innovative ideas. Google might not have any solid plans for developing VR of this nature themselves, but the community that arises around the idea could prove a fruitful place for applications that no one has thought of before. I had already committed myself to a retail Oculus Rift when it comes out, however, so whilst Cardboard might be a curiosity my heart is unfortunately promised to another.
Whilst computing has evolved exponentially in terms of capabilities and raw performance, the underlying architecture that drives it has remained largely the same for the past 30 years. The vast majority of platforms are either x86 or some other CISC variant running on a silicon wafer that’s been lithographed with millions (and sometimes billions) of transistors. This is then all connected up to various other components and storage through various bus definitions, most of which have changed dramatically in the face of new requirements. There’s nothing particularly wrong with this model, it’s served us well and has fallen within the bounds of Moore’s Law for quite some time, however there’s always the nagging question of whether or not there’s another way to do things, perhaps one that will be much better than anything we’ve done before.
According to HP, their new concept, The Machine, is the answer to that question.
For those who haven’t yet read about it (or watched the introductory video on the technology), HP’s The Machine is set to be the next step in computing, taking the most recent advances in computer technology and using them to completely rethink what constitutes a computer. In short there are 3 main components that make it up, 2 of which are based on technology that has yet to see a commercial application. The first appears to be a Sony Cell-like approach to computing cores, essentially combining numerous smaller cores into one big computing pool which can then be activated at will, technology which currently powers their Moonshot range of servers. The second piece is optical interconnects, something which has long been discussed as the next stage in computing but which has yet to make inroads at the level HP is talking about. Finally there’s the idea of “universal memory”, essentially the memristor storage which HP Labs has been teasing for some time without bringing any product to light.
As an idea The Machine is pretty incredible, taking best-of-breed technology for every subsystem of the traditional computer and putting it all together in one place. HP is taking the right approach with it too, as whilst The Machine might share some common ancestry with regular computers (I’m sure the “special purpose cores” are likely to be x86), current operating systems make a whole bunch of assumptions that won’t be compatible with its architecture. It would be all too easy for them to create another HP-UX, a great piece of software in its own right that no one wants to touch because it’s just too damn niche to bother with. Thankfully they’ll be open sourcing Machine OS, which means it won’t be long before other vendors are able to support it. That being said, however, the journey between this concept and reality is a long one, fraught with the very real possibility of it never happening.
You see, whilst all of the technologies that make up The Machine might be real in one sense or another, 2 of them have yet to see a commercial release. Memristor-based storage was “a couple years away” after HP’s original announcement, yet here we are, some 6 years later, and not even a prototype device has managed to rear its head. Indeed HP said last year that we might see memristor drives in 2018 if we’re lucky, and the roadmap shown in the concept video shows the first DIMMs appearing sometime in 2016. Similar things can be said for optical interconnects: whilst they’ve existed at the large scale for some time (fibre interconnects for storage are fairly common), they have yet to be created at the low-level, chip-to-chip scale that The Machine would require. HP’s roadmap for getting this technology to market is much less clear, something HP will need to get right if they don’t want the whole concept to come apart at the seams.
Honestly my scepticism comes from a history of being disappointed by concepts like this, with many technologies promising the world in terms of computing and almost always failing to deliver. Even some of the technology contained within The Machine has already managed to disappoint me, with memristor storage remaining vaporware despite numerous publications saying it was mere years away from commercial release. This is one of those times that I’d love to be proven wrong though, as nothing would make me happier than to see a true revolution in the way we do computing, one that would hopefully enable us to do so much more. Until I see real pieces of hardware from HP, however, I’ll remain sceptical, lest I get my feelings hurt once again.
One of the first ideas that an engineer in training is introduced to is modularity. This is the concept that every problem, no matter how big, can be broken down into a subset of smaller, interlinked problems. The idea behind this is that you can design solutions specific to each problem space rather than trying to solve everything in one fell swoop, something that is guaranteed to be error prone and likely never to achieve its goals. Right after you’re introduced to that idea you’re also told that modularity done for its own sake can lead to the exact same problems, so its use must be tempered with moderation. It’s this latter point that I think the designers of Phonebloks might be missing, even though I really like the concept.
For the uninitiated the idea is relatively simple: you buy yourself what equates to a motherboard which you can then plug various bits and pieces into, with one side dedicated to a screen and the other to all the components you’ve come to expect from a traditional smartphone. Essentially it’s the idea of building your own PC applied to the smartphone market, done in the hope of reducing electronic waste since you’ll only be upgrading parts of the phone rather than the whole device at a time. The lofty idea is that this will eventually become the platform for everyone and smartphone component makers will be lining up to build additional blocks for it.
As someone who’s been building his own PCs for the better part of 3 decades now, I think the assumption that the base board, and by extension the interconnects on it, will never change is probably the largest fundamental flaw in Phonebloks. I’ve built many PCs with the latest CPU socket on them in the hope that I could upgrade on the cheap at a later date, only to find that, when it came time to upgrade, a newer and far superior socket was available. Whilst the Phonebloks board can likely be made to accommodate current requirements, it’s inevitable that further down the track some component will require more connections or a higher bandwidth interface, necessitating its replacement. Then, just as with all those PCs I bought, this will also necessitate re-buying all the additional components, essentially putting us back in the same position we’re in currently.
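The socket-compatibility trap above can be sketched as a software analogy: modules target a specific base-board bus version, and a module built for a newer bus simply can’t attach to an older board, forcing a board replacement and, with it, a fresh set of modules. All the names and version numbers here are hypothetical.

```python
class Module:
    """A hypothetical snap-on component (camera, battery, radio...)."""
    def __init__(self, name, requires_bus, pins_needed):
        self.name = name
        self.requires_bus = requires_bus
        self.pins_needed = pins_needed


class BaseBoard:
    """The fixed interconnect every module must be compatible with."""
    def __init__(self, bus_version, pins):
        self.bus_version = bus_version
        self.pins = pins
        self.modules = []

    def attach(self, module):
        # A module demanding a newer bus or more pins than the board
        # provides can't be accommodated without replacing the board.
        if module.requires_bus > self.bus_version or module.pins_needed > self.pins:
            raise RuntimeError(
                f"{module.name} needs bus v{module.requires_bus}, "
                f"board only offers v{self.bus_version}"
            )
        self.modules.append(module)


board = BaseBoard(bus_version=1, pins=40)
board.attach(Module("camera", requires_bus=1, pins_needed=8))  # fits fine
try:
    board.attach(Module("4k-camera", requires_bus=2, pins_needed=8))
except RuntimeError as err:
    print(err)  # the upgrade forces a new board, and new modules with it
```

It’s the same dynamic as a CPU socket change on a PC: the interface, not any individual part, is what eventually forces the wholesale replacement.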
This is not to mention the fact that hoping other manufacturers, ones that already have a strong presence in the smartphone industry, will build components for it is an endeavour likely to be met with heavy resistance, if it’s not outright ignored. Whilst there are a couple of companies that would be willing to sell various components (Sony with their EXMOR R sensor, ARM with their processors, etc.) they’re certainly not going to bother with the integration work, something that would likely cost them much more than any profit they’d see from being on the platform.
Indeed I think that’s the biggest issue this platform faces. Whilst it’s admirable that they’re seeking to be the standard modular platform for smartphones, standardisation in the PC industry did not come about overnight and took the collaboration of multiple large corporations to achieve. Without that kind of support I’m struggling to see how this platform can get the diversity it needs to become viable, and as far as I can tell the only backing they’ve got is from a bunch of people willing to tweet on their behalf.
Fundamentally I like the idea, as whilst I’m able to find a smartphone that suits the majority of my wants pretty easily, there are always things I’d like to trade in for others. My current Xperia Z would be a lot better if the speakerphone wasn’t rubbish and the battery was capable of charging wirelessly, and I’d happily shuffle around some of the other components to get my device just right. However I’m also aware of the giant integration challenge that such a modular platform presents, and whilst they might be able to get a massive burst of publicity I’m sceptical that it will turn into a viable product platform. I’d love to be wrong on this, but as someone who’s seen decades of modular platform development and the tribulations it entails I can’t say I’m setting money aside for my first Phonebloks device.
Before I say anything, you need to watch this video in its entirety: http://www.youtube.com/watch?v=Dou4Gy0p97Y
Long time readers will know I have a soft spot for Quantic Dream’s work, having played Fahrenheit some years ago. My status as their fanboy was sealed when I played Heavy Rain, a game that pushed the limits of games as a medium of expression. The video above is Quantic Dream’s latest achievement, and whilst they say it isn’t directly from any of their current projects, they have done similar videos in the past that were very indicative of their future products.
Needless to say this has me very excited for what they’re currently working on. Whilst it might not be exactly what’s shown here, you can safely bet that all the elements (the graphics, the emotion and the storytelling) will make it into the final product. The fact that it all runs in real time on the PS3 is even more impressive as, whilst the graphics aren’t exactly cutting edge, they’re right up there with other titles.
I really can’t wait!
I’ve had quite a few phones in my time but only 2 of them have ever been Nokias. The first was the tiny 8210 I bought purely because everyone else was getting a phone, so of course I needed one as well. The second was an ill-fated N95 which, despite being an absolutely gorgeous media phone, failed to work on my network of choice thanks to it being a regional model that the seller neglected to inform me about. Still I always had a bit of a soft spot for Nokia devices because they got the job done and were familiar to anyone who had used them before, saving many phone calls when my parents upgraded their handsets. I’ve even wondered aloud why developers ignore Nokia’s flagship mobile platform despite its absolutely ridiculous install base that dwarfs all of its competitors, acknowledging that it’s mostly due to their lack of innovation on the platform.
Then on the weekend a good friend of mine told me that Nokia had teamed up with Microsoft to replace Symbian with Windows Phone 7. I had heard about Nokia’s CEO releasing a memo signalling drastic changes ahead for the company, but I really didn’t expect it to result in something like this:
Nokia CEO Stephen Elop announced a long-rumored partnership with Microsoft this morning that would make Windows Phone 7 Nokia’s primary mobile platform.
The announcement means the end is near for Nokia’s aging Symbian platform, which many (myself included) have criticized as being too archaic to compete with modern platforms like the iPhone OS or Android. And Nokia’s homegrown next-generation OS, MeeGo, will no longer be the mythical savior for the Finnish company, as it’s now being positioned more as an experiment.
We’ve argued for some time that a move to Windows Phone 7 would make the most sense for Nokia, and after Elop’s dramatic “burning platform” memo last weekend, it was all but certain that the company would link up with Microsoft.
It’s a bold move for both Nokia and Microsoft as, separately, neither is much of a threat to the two other giants in the mobile industry. Combined, however, Nokia ensures that Windows Phone 7 reaches many more people than it currently can, delivering handsets at price points that other manufacturers just won’t touch. This will have a positive feedback effect, making the platform more attractive to developers, which in turn drives more users to the platform when their applications of choice are ported or emulated. Even their concept phones are looking pretty schmick:
The partnership runs much deeper than just another vendor hopping onto the WP7 bandwagon, however. Nokia has had a lot more experience than Microsoft in the mobile space and, going by an open letter that the CEOs of both companies wrote together, it looks like Microsoft is hoping to use that experience to further refine the WP7 line. There’s also deep integration in terms of Microsoft services (Bing for search and adCenter for ads), and interestingly enough Bing Maps won’t be powering Nokia’s WP7 devices; it will still be OVI Maps. I’m interested to see where this integration heads, because Bing Maps is actually a pretty good product and I was never a fan of the maps on Nokia devices (mostly because of the subscription fee required). They’ll also be porting all their content streams and application store across to the Microsoft Marketplace, which is expected considering the level of integration they’re going for.
Of course the question has been raised as to why they didn’t go for one of the alternatives, namely their own MeeGo platform or Google’s Android. MeeGo, for all its open source goodness, hasn’t experienced anywhere near the traction that Android has and has firmly been in the realm of “curious experiment” for the past year, even if Nokia is only admitting to it today. Android, on the other hand, would’ve made a lot of sense; however it appears that Nokia wanted to be an influencer of their new platform of choice rather than just another manufacturer. They’d never get this level of integration from Google unless they put in all the work, and realistically that does nothing to help the Nokia brand; it would all be for Google. Thus WP7 is really the only choice with these considerations in mind, and I’m sure Microsoft was more than happy to welcome Nokia into the fray.
For a developer like me this just adds fuel to the WP7 fire that’s been burning in my head for the past couple of months. Although it didn’t take me long to become semi-competent with the iPhone SDK, the lure of easy WP7 development has been pretty hard to ignore, especially when I have to dive back into Visual Studio to make API changes. Nokia’s partnership with Microsoft means there’s all the more chance that WP7 will be a viable platform for the long term, and as such any time spent developing on it is time well spent. Still, if I were being truly honest with myself I’d just suck it up and do Android anyway, but after wrangling with Objective-C for so long I feel like I deserve a little foray back into the world of C# and Visual Studio goodness, and this announcement justifies that even more.