One of the first ideas an engineer in training is introduced to is modularity: the concept that every problem, no matter how big, can be broken down into a set of smaller, interlinked problems. The idea is that you can design solutions specific to each problem space rather than trying to solve everything in one fell swoop, an approach that's guaranteed to be error-prone and unlikely ever to achieve its goals. Right after you're introduced to that idea you're also told that modularity done for its own sake can lead to the exact same problems, so its use must be tempered with moderation. It's this latter point that I think the designers of Phonebloks might be missing, even though as a concept I really like the idea.
For the uninitiated the idea is relatively simple: you buy what equates to a motherboard into which you can plug various bits and pieces, with one side dedicated to a screen and the other to all the components you've come to expect from a traditional smartphone. Essentially it's taking the build-your-own-PC idea and applying it to the smartphone market, in the hope of reducing electronic waste since you'll only be upgrading parts of the phone rather than the whole device at a time. The lofty goal is that this will eventually become the platform for everyone, with smartphone component makers lining up to build additional blocks for it.
As someone who's been building his own PCs for the better part of 3 decades now, I think the assumption that the base board, and by extension the interconnects on it, will never change is probably the largest fundamental flaw with Phonebloks. I've built many PCs with the latest CPU socket on them in the hope that I could upgrade on the cheap at a later date, only to find that, when it came time to upgrade, a newer and far superior socket was available. Whilst the Phonebloks board can likely be made to accommodate current requirements, it's inevitable that further down the track some component will require more connections or a higher-bandwidth interface, necessitating its replacement. Then, just as with all those PCs I bought, this will also necessitate re-buying all the additional components, essentially putting us back in the same position we're in currently.
This is not to mention that hoping other manufacturers, ones that already have a strong presence in the smartphone industry, will build components for it is an endeavour that's likely to be met with heavy resistance, if it's not outright ignored. Whilst there are a couple of companies that would be willing to sell various components (Sony with their EXMOR R sensor, ARM with their processors, etc.) they're certainly not going to bother with the integration work, something that would likely cost them much more than any profit they'd see from being on the platform.
Indeed I think that's the biggest issue this platform faces. Whilst it's admirable that they're seeking to be the standard modular platform for smartphones, the standardization in the PC industry did not come about overnight and took the collaboration of multiple large corporations to achieve. Without that kind of support I'm struggling to see how this platform can get the diversity it needs to become viable, and as far as I can tell the only backing they've got is from a bunch of people willing to tweet on their behalf.
Fundamentally I like the idea as, whilst I'm able to find a smartphone that suits the majority of my wants pretty easily, there are always things I would like to trade in for others. My current Xperia Z would be a lot better if the speakerphone wasn't rubbish and the battery was capable of charging wirelessly, and I'd happily shuffle around some of the other components in order to get my device just right. However I'm also aware of the giant integration challenge such a modular platform would present and, whilst they might be able to get a massive burst of publicity, I'm skeptical that it will turn into a viable product platform. I'd love to be wrong on this but, as someone who's seen many decades of modular platform development and the tribulations it entails, I can't say I'm putting money aside for my first Phonebloks device.
My stance on phone-based photography is pretty well known (some would go as far as to say infamous) and is probably one of the only issues that causes me significant cognitive dissonance on a regular basis. You see, I'm not in the hard-line camp where anything below a pro-level DSLR doesn't count, but nor am I fully invested in the idea that the simple act of taking pictures makes you a photographer. It's a matter of personal opinion, of course, and I'm not going to make myself out to be the arbiter of what is and isn't photography, especially when I firmly believe in the "photography is 50% photographer, 40% light and 10% equipment" rule of thumb.
Indeed I thought I had gotten over all my angst about phone-based photography after my last post on the subject. Heck, I even spent an inordinate amount of time learning my current phone's camera, using it almost exclusively whilst I was in New Orleans in order to source some eye candy for my daily travel posts. I'll be honest when I say the experience was a little frustrating, but there were more than a few pics I was actually proud of, the above being one of them. My chosen toolset wasn't Instagram or any of its more well-known competitors, however, as I prefer SnapSeed due to the flexibility it grants me (and the fact that they make some amazing Lightroom plugins as well), and I haven't uploaded the pics to any of my regular sharing sites. Still, for someone who had essentially written this whole area off I felt I was making progress, until I read this article:
Since the launch of the original iPhone and the arrival of the App Store, the differences between those photographs taken on a smartphone and those taken on regular digital cameras have become far less apparent. Not because the phone cameras are getting better (despite the ever-improving optics, sensors, and software on smartphones, there’s still a huge difference in quality between an iPhone camera and a Canon 5D Mark III), but because of where photographs are being viewed. The vast majority of imagery is now seen in the exact same places: on smartphones and tablets, via apps such as Pinterest, Facebook, Google+, Flipboard and most importantly, Instagram. At 1024 x 1024 pixels, who can really tell whether a photo was taken on an iPhone or a Canon 5D? More to the point, who cares?
There's a lot in Bareham's post that I agree with, especially when it comes to the way most photographs are consumed these days. It's rare now to see pictures materialize in a physical medium, or even at a scale where the differences between photographic platforms start to become apparent. Even I, the unabashed Canon DSLR fanboy, have none of my work on display in my own house, preferring to show people my pictures on their laptop or other Internet-connected device. Indeed many pictures I love on my phone often fail to impress me later when I view them on a larger screen, although that's probably due to my perfectionist ways more than anything else.
Still, I'm not convinced that the introduction of the iPhone, or any camera phone for that matter (I had had a camera phone for a good 4 years by that point), changed everything about photography. Sure, it made photography more accessible thanks to its integration into a platform that nearly everyone has, but photography hadn't really been out of reach for quite some time. Many people said similar things about consumer-level 35mm cameras back when they were first introduced and, whilst camera phones provided an added level of immediacy, it's not like that wasn't available with the cheap digital point-and-shoots before them. The act simply became more public once the apps on our phones allowed us to share those photos much quicker than we could before.
Thinking it over a bit more, it's actually quite striking how my journey into photography is the inverse of Bareham's. I had had these easy-to-use-and-share cameras for ages thanks to my love of all things technological, but that creative spark simply never took hold. That all changed when I got my first DSLR and began to learn about the technical aspects of photography; suddenly a whole new world had opened up to me that I hadn't known about. I felt compelled to share my images with everyone and I started seeking out photographic subjects that weren't my friends at parties or the sunset from my front porch. It then graduated into what I do today, something that's weaved its way into all aspects of my life regardless of what I'm doing.
Perhaps then the technology is simply a catalyst for the realisation of a subconscious desire, something that we want to achieve but have no idea how to accomplish in our current mindset. We all have our favourite platforms on which we create, ones that we’ll always gravitate back to over time, and for many people that has become their phones. I no longer begrudge them, indeed I’ve come to realise that nearly every criticism I’ve levelled at them can be just as easily aimed at any other creative endeavour, but nor do I believe they’re the revolution that some claim them to be. We’re simply in the latest cycle of technologically fueled progress that’s been a key part of photography for the past century, one that I’m very glad to be a part of.
If the deafening outcry from nearly every one of my favourite games news sites and social media brethren is anything to go by, the console war has already been won and the new king is Sony. Whilst the fanboy in me would love to take this opportunity to stick it to all the Xboxers out there, I honestly believe that Sony didn't do much to deserve the praise currently being heaped on it. Rather, I feel the news coming out of E3 just shows how many missteps Microsoft took with the XboxOne, with Sony simply sitting on the sidelines, not really changing anything from what they're doing today.
The one, and really only, point this all hinged on was the as-yet-unknown stance Sony would take on DRM for the PlayStation4. It was rumoured that they were watching social media closely, which spurred many grassroots campaigns aimed at influencing them. The announcement came at E3 that they'd pretty much be continuing along the same lines as they are now, allowing you to trade/sell/keep disc-based games without any restrictions built into the platform. This also means that developers are free to include online passes in their games, something which has thankfully not become too common but could be on the rise (especially with cross-platform titles).
There wasn't much else announced at E3 that got gamers excited about the PlayStation4, apart from seeing the actual hardware for the first time. One curious bit of information that didn't receive a whole lot of attention, though, was the change to Sony's stance on free multiplayer through the PlayStation Network. You'll still be able to get a whole bunch of services for free (like NetFlix/Hulu) but if you want multiplayer you're going to have to shell out $5/month for the privilege. However this is PlayStation Plus, which comes with a whole bunch of other benefits like free full-version games, so it's not as bad as it sounds. Still, it looks like Sony might be capitalizing on the notion that there will be quite a few platform switchers this generation and thus took the opportunity to make the service mandatory for multiplayer.
It could also be partly to offset the relatively low price of the PlayStation4, which clocks in at $399. Considering its specs it's hard to believe they're not using the console as a loss leader yet again, something I thought they were going to avoid this generation. If the life of these consoles remains about the same, they'll at least recoup the console's price in subscription fees, plus any additional revenue from game sales. Part of that will have to fund the massive number of online services they're planning to release, but overall it seems that at least some of the subscription cash will go to offsetting the cheaper hardware.
The thing to note here is that the differences between Sony's current and next generation consoles are far smaller than those for Microsoft. This is the same Sony that was ridiculed for releasing the PSN long after Xbox Live, pricing their console way above the competition and, even if it wasn't for games specifically, shipping some of the most insane DRM known to man. The fact that not much has changed (they have, in fact, got objectively worse in places) and they're being welcomed with open arms shows just how much Microsoft has dropped the ball.
Whether or not this will translate into lost sales remains to be seen. The consumer market has an incredibly short memory and we've got a good 5 months between now and when the XboxOne goes on sale. It's entirely possible that the current conversation is being dominated by a vocal minority and that the number of platform loyalists will be enough to overcome that initial adoption hump (something the Wii-U hasn't been able to do). I'm sure anyone who was on the fence about which one to get has made their mind up based on these announcements, but in all honesty those people are few and far between. I feel the majority of console gamers will get one, and only one, console and will not change platforms easily.
The proof will come this holiday season, however.
[UPDATE]: It has come to my attention that Sony has stated that they will not be allowing online passes from anyone. Chalk that up to yet another win for them.
I remember when I first saw Windows Phone 7 introduced, all those years ago now, how it just looked like Microsoft playing the me-too game with one of its biggest competitors. This was also a time when RIM, you know, those guys who make the BlackBerrys everyone used to rave about, were the kings of the smartphone world and Android was still considered an upstart that would get nowhere. Back then I said I'd end up getting one of these handsets eventually, mostly for application development purposes but also so I could share the experience with you, my readers. I never really made good on that promise, but thanks to LifeHacker I've had the privilege of having a Nokia Lumia 900 as my sole communications device for the past couple of weeks and I thought it was high time I told you what I think of it.
Before I get into the meat of the underlying operating system I want to take a little time to comment on the phone itself. Nokia, renowned for their low-end handsets that are everywhere, sheds those preconceptions easily with the Lumia 900. Whilst I know it's no indication of the underlying quality, the 900 has a really nice heft to it, feeling quite solid in the hand. The specs are actually quite impressive, with a 1.4GHz Qualcomm Scorpion processor, 512MB RAM and 16GB of internal storage. Couple that with an 8MP camera with Carl Zeiss optics capable of capturing 720p video and you've got a solid base of hardware that's easily comparable to other handsets of its generation. The battery life is also pretty incredible, easily lasting a couple of days with moderate usage. Indeed if Nokia were to release a similar phone for the Android market there's no doubt in my mind it'd be right up there with the likes of Samsung and HTC.
My first impressions of Windows Phone 7 were quite good, with some teething issues that I'll dive into shortly. On the surface Windows Phone 7 is visually pleasing, with large icons, live tiles and a very smooth scrolling experience that all just works. Just as you do with Android or iOS, you sign into your phone using your Windows Live ID, which can be any email address you want, and that hooks into the underlying services that power your Windows Phone 7 handset. For the most part this means syncing with things like Live Contacts, SkyDrive for your cloud storage and any other Microsoft service. These generally work well, however I hit a stumbling block at the start which soured me on the platform initially.
Ever since I moved from my Windows Mobile device to my first iPhone all those years ago I've had my contacts stored in Google Contacts, as that was the easiest way to ensure they'd follow me from platform to platform. Thankfully Windows Phone 7 allows you to add accounts from a wide range of services, Google being one of them. So I entered my details and hit sync…nothing happened. Indeed even when I tried to sync my LiveID (which has nothing in it) I got a similar error saying "Attention required" and, upon investigation, it said that my username/password combination wasn't correct. No matter what I did it would always come up with the same error for both services. To rectify this I had to reset my phone to factory defaults, sign in again with my LiveID and then attempt the sync again. For Google Contacts I had to create an application-specific password (I have 2-factor auth turned on for my Google account) but I wasn't prompted for this by Windows Phone 7 like I have been by other services. Realistically I'd expect a little better from a platform that's been around this long, and this was why I was initially unhappy with Windows Phone 7.
However all the other built-in apps like email, messaging and maps work absolutely flawlessly. It didn't take me long to get everything in sync, with all my emails coming down as soon as the server received them, and things like MMS, which usually require some fiddling to get working properly, just worked straight away with the APN settings that came down from Telstra. The problems I experienced getting my contacts onto Windows Phone 7 were really the only major issue I had with the platform itself, and it speaks volumes that the rest of the experience was so trouble-free by comparison.
Of course the platform itself is only part of the equation, as it's the third-party applications that can make or break it. Thankfully I'm pleased to say that all the major applications like Twitter, Facebook and Shazam have native apps, and they function pretty much identically to their counterparts on the other major platforms. There are of course some differences that can be rather irritating (Twitter, for instance, doesn't preload tweets like it does on Android) but they are more than usable. I wouldn't say I prefer the Windows Phone 7 experience over Android or iOS, as I was very much used to the former due to it being my platform of choice for the past year and a bit, but I don't find myself wanting for any specific feature. It's probably down to the fact that Windows Phone 7 has its own UI styling that's pretty consistent across all the applications; in some instances that fits well but in others it just doesn't really work at all.
Where Windows Phone 7 starts to fall down is in the niche application area, i.e. those applications on other platforms that you have for one specific need or another. My best example would be SoundCloud, a music sharing service, which has a great application on both Android and iOS. For Windows Phone 7 there's no official application and all the third-party solutions are really quite bad, to the point of being unusable. Of the 3 I tried, none supported logging in with Facebook and, since I have no idea what my SoundCloud password is (I never set one, because of the Facebook integration), I simply could not try them. The SoundCloud mobile site is actually quite good but it doesn't function the way you'd expect it to, and in order to get similar functionality you have to do things that aren't particularly intuitive. Reddit is another example: whilst there's a usable application (Alien News) it's just not as good as Reddit is Fun on Android.
The state of niche applications might not be a big deal to the majority of people, who only need a few major applications (which are well supported on Windows Phone 7), but for power users like myself it feels like you're artificially limiting yourself to being a second-class smartphone user. Now this is no fault of the platform, it's simply a function of its popularity among the wider public, and the only thing that will solve it is more users and time. Whether that will happen is hard to say as, whilst Windows Phone 7's market share has been growing, it's still hard to call it anything more than an also-ran compared to Android and iOS.
In an objective comparison between the platforms, leaving aside the applications as they're not strictly reflective of the platform itself, I can say that Windows Phone 7 is most definitely comparable to Android and iOS. The interface is slick and smooth, the built-in applications are very usable and there are no real show-stopping bugs that prevent you from doing anything you could do on other platforms. Whilst I'm not sure it will become my default platform of choice in future (considering my Lumia won't get Windows Phone 8), I definitely can't fault anyone for choosing it over any of the others available. Indeed for certain people, especially those who are heavily invested in the Microsoft ecosystem, I'd recommend it over anything else, as its tight integration with Microsoft's services makes it much more worthwhile.
So overall I was very impressed with Windows Phone 7, as I was truly expecting the majority of applications to be nowhere near as good as their iOS/Android counterparts, but they were. The most telling thing was that I never found myself wanting to do something and then finding out I couldn't do it. Sure, the experience wasn't ideal in some cases, but the capability was there and in many cases that's all that matters. It will be interesting to see how this compares to the upcoming Windows Phone 8 and, whilst I won't promise I'll rush out to get one for a review (I've made that mistake before), I won't say no if Microsoft gives me a loaner for a couple of weeks.
Which is actually a real possibility considering I’ll be blogging for them
In the middle of last year I commented on some rumours circling around the Internet about how Xbox Live was coming to Windows 8 and, along with it, the ability to play some Xbox titles. The idea would have seemed to come out of left field for a lot of people, as there's no real incentive to enable such functionality (especially considering just how damn hard it would be to emulate the Xbox processor), but considered alongside the Three Screens and a Cloud idea it was just another step along the platform unification path. Since then, however, I hadn't seen much more movement on the idea and instead figured that eventually everything would be united under the WinRT platform, and was waiting to see an announcement to that effect.
The lion's share of the titles that will be released on the Windows 8 platform are from Microsoft Studios, with a couple of big-name developers like Rovio and Gameloft joining the party. All of the first wave of titles will be playable on any Windows 8 device, and a few of them (most notably the relatively simple titles like Solitaire and some word games) will stretch onto Windows Phone 8 with features like resuming games you started on another platform. Looking at the list of titles I can't help but notice a common thread among them, and I'm not quite sure what to make of it.
For many of the third-party titles it's quite obvious that their release on Windows 8 (ostensibly on WinRT) is just yet another platform for them to have their product on. Angry Birds, for instance, seems to make it a point of pride that it's on pretty much every platform imaginable, and the fact that it's on Windows 8 really shouldn't come as much of a surprise. Indeed quite a lot of them are already multi-platform titles that cut their teeth on one mobile platform or another, and realistically their move onto the Xbox (and from there to Windows 8) will just be another string in their bow. I guess what I'm getting at is that many of these titles already had the hard work of porting done for them, so their presence is less indicative of how flexible the underlying WinRT platform really is.
Indeed the most innovative uses of WinRT come from the first-party Microsoft titles which, whilst unfortunately bland, do show what a truly platform-agnostic application is capable of. They all feature a pause/resume function that works across platforms, the ability to work with both touch interfaces and traditional mouse and keyboard, and some of them even feature cross-platform competitive play. It's unfortunate that the third-party developers didn't look to take advantage of these capabilities, but I can understand why they didn't for this first wave of games; the investment would be too high for the potential pay-off.
What I think really needs to be done is to bring the WinRT platform to the Xbox360 via a system update. Whilst it's all well and good to have some Xbox titles ported to Windows 8, it's really only a stopgap on the way to bringing a unified platform to all three screens. Right now the only screen lacking some form of WinRT is the TV, and that could be remedied via the Xbox. Whether that comes in the current generation or in Durango remains to be seen, but it would be a great misstep for Microsoft to ignore the fact that the final piece of the puzzle is WinRT in the living room.
Microsoft really is onto something with the unified experience across all their available platforms, and they're really not that far off achieving it. Whilst it will take a while for third-party developers to come out with apps that take advantage of the platform, the sooner it's available across all three screens the sooner those apps will come. This first wave of games from Xbox Live gives us a tantalizing little glimpse of what a unified platform could bring, and hopefully subsequent waves take inspiration from what Microsoft has been able to do and integrate that into future releases.
My first interaction with Steam wasn't a pleasant one. I remember the day clearly: I was still living out in Wamboin when Valve released Half Life 2 and had made sure to grab myself a copy before heading home. After going through the lengthy install process, requiring multiple CD swaps, I was greeted by a login box asking me to create an account. Frustratingly all my usual gamer tags (PYROMANT|C, SuperDave, Nalafang, etc.) were already taken, leaving me to choose a random name. That wasn't the real annoyance though; no, what got me was the required update that needed to be applied before I could play which, on the end of a 56k connection, was going to take the better part of an hour.
This soured me on the idea of Steam for quite a few years, at least until I got myself a stable form of broadband that let me update without having to wait hours at a time. Still, it wasn't until probably 3 or so years ago that I started buying most of my games through Steam, as buying the physical media and integrating it with Steam later was still a much better experience. Today though it's my platform of choice for purchasing games, and it seems I'm not alone in this regard, with up to 70% of all digital sales passing through the platform. We've also seen Steam add many more features like SteamCloud and SteamWorks, which have given developers a platform for features that would have otherwise been too costly to develop themselves.
With all the success Steam has enjoyed (in the process making Valve one of the most profitable companies per employee) it makes you wonder what its end game will be. Whilst they'd undoubtedly be able to coast along quite easily on recurring sales and the giant community they've built around the platform, history has shown that Valve isn't that kind of company. Indeed the recent press release from Valve saying that traditional applications will soon be available through the Steam platform seems to indicate that they have ambitions extending past their roots in gaming and digital distribution.
And it's at this point that I start speculating wildly.
Valve has shown that it's dedicated to gamers regardless of platform, with Steam already on OSX and soon finding its way onto Linux alongside a native port of Left 4 Dead 2. With such a deep knowledge of games and an engine that runs on nearly any platform, it would make sense for Valve to take a stab at cutting out the middle man entirely, creating their own custom operating system dedicated solely to gaming. If such an idea were to come to fruition it would most likely be some kind of Linux derivative with a whole bunch of optimizations to make Source titles run better. I'll be honest with you, when this idea was suggested to me I thought it was pretty far out, but there are some threads within it that have merit.
Whilst the idea of SteamOS as a standalone operating system might be a bit far-fetched, I could see something akin to media centre software that transforms a traditional Windows/Linux/OSX PC into a dedicated gaming machine. Steam's strength arguably comes from the giant catalogue of third-party titles it has, and keeping the underlying OS (with its APIs intact) means that all those games would still be available. This also seems to line up with the rumoured SteamBox idea that was floating around at the start of the year, which would mean the console was in fact just a re-badged Windows PC with some custom hardware underneath. The console itself might not catch on (although the success of the OUYA seems to indicate otherwise) but I could very well see people installing SteamOS beside their XBMC installation, turning their media PC into a dual-use machine.
With all this in mind you have to then ask yourself what Valve would get out of something like this. They are already making headway into getting Steam in one form or another onto existing consoles (see Steam for the PS3) and they've arguably already captured the lion's share of PC gamers, the ones who'd be most likely to use something like SteamOS. The SteamBox would arguably be targeted at people who are not traditionally PC gamers, and SteamOS would then simply be an also-ran, something that provides extra value to Steam's already dedicated PC community. Essentially it would further cement Steam as the preferred digital distribution network for games whilst also attempting to capture a market they've had little to do with up until this point.
All of this though is based on the current direction Valve seems to be going but realistically I could just be reading way too far into it. Their recent moves with the Steam platform are arguably just Valve trying to grow their platform organically and could very easily not be part of some grander scheme for greater platform dominance. The idea though is intriguing and whilst I have nothing more than speculation to go on I don’t think it would be a bad move by Valve at all.
Today the platform of choice for the vast majority of gamers is the console; there's really no question about it. Whilst video games may have found their feet on PCs, consoles took them to the next level, offering a consistent user experience that expanded the potential market greatly. PC gaming however is far from dead and has even been growing despite the heavy competition it faces from consoles. The idea of providing a consistent user experience whilst maintaining the PC's flexibility is an enticing one, and several companies are attempting to fuse the best elements of both platforms in the hope of capturing both markets.
OnLive is one such company. Their product is, in essence, PC gaming as a service (PCGAAS?) and seeks to alleviate the troubles some gamers used to face with the constant upgrade cycle. I was sceptical of the idea initially as their target demographic seemed quite small, but here we are 2 years later and they’re still around, even expanding their operations beyond the USA. Still, the limitations of the service (the high bandwidth requirement being chief amongst them) mean that whilst OnLive might provide a consistent experience on par with that of consoles, the service will likely never see the mainstream success that the 3 major consoles do.
Rumours have been circulating recently that Valve may take a stab at this problem, taking the best parts of the PC experience and distilling them down into a console, creating a new platform called the Steam Box:
According to sources, the company has been working on a hardware spec and associated software which would make up the backbone of a “Steam Box.” The actual devices may be made by a variety of partners, and the software would be readily available to any company that wants to get in the game.
Adding fuel to that fire is a rumor that the Alienware X51 may have been designed with an early spec of the system in mind, and will be retroactively upgradable to the software.
Indeed there’s enough circumstantial evidence to give some credence to these rumours. Valve applied for a patent on a controller back in 2009, one that had a pretty interesting twist to it: the controller would be modular, allowing the user to modify it, and those modifications would be detected by the controller itself. Such an idea fits pretty well with the PC/console hybrid that the Steam Box is likely to be. It would also enable a wider selection of titles to be available on the Steam Box, as not all games lend themselves well to the traditional dual-joystick console controller.
At the same time one of Valve’s employees, Greg Coomer, has been tweeting about a project he’s working on that looks suspiciously like some kind of set-top box. Now Valve doesn’t sell hardware, they’re a games company at heart, so why someone at Valve would be working on such a project does raise some questions. Further, the screenshot of the potential Steam Box shows what looks to be an Xbox 360 controller in the background. It’s entirely possible that such a rig was being used as a lightweight demo box for Valve to use at trade shows, but it does seem awfully coincidental.
For what it’s worth the idea of a Steam Box could have some legs to it. Gone are the days when a constant upgrade cycle was required to play the latest games, mostly thanks to the consolization of the games market. What this means is that a modern-day gaming PC has a longevity rivalling that of most consoles. Hell, even my last full upgrade lasted almost 3 years before I replaced it, and even then I didn’t actually need to replace it; I just wanted to. A small, well-designed PC could then function much like a console in that regard, and you could even make optimized compilers for it to further increase its longevity.
The Steam Box could also leverage the fact that many PC titles, apart from genres like the RTS, lend themselves quite well to the controller format. In fact much of Steam’s current catalogue would be only a short modification away from being controller-ready, and some are set up for controller use already. The Steam Box would then come out of the box with thousands of titles ready for it, something that few platforms can lay claim to. It may not draw the current Steam crowd away from their PCs but it would be an awfully attractive option for someone who was looking to upgrade but didn’t want to go through the hassle of researching and building their own box.
Of course this is all hearsay at the moment but I think there could be something to this idea. It might not reach the same market penetration as any of the major consoles but there’s a definite niche in there that would be well served by something like this. What remains to be seen now is a) whether or not this thing is actually real and b) how the market reacts should Valve actually announce said device. If the rumours are anything to go by we may not have to wait too long to find both of those things out.
One of my most hotly anticipated games for this year, and I know I’m not alone in this, will be Blizzard’s Diablo III. I can remember the days of the original Diablo, forging my way down into the bowels of the abandoned church and almost leaping out of my chair when the Butcher growled “Aaaahhh, fresh meat!” as I drew close to him. I then went online, firing up my 33K modem (yes, that’s all I had back then) and hitting up the then-fledgling Battle.Net, only to be overwhelmed by other players who gifted me with unimaginable loot. I even went as far as to buy the only official expansion, Hellfire, and played it to its fullest, revelling in the extended Diablo universe.
Diablo II was a completely different experience, one that was far more social for me than its predecessor. I can remember many LANs dedicated to simply creating new characters and seeing how far we could get with them before we got bored. The captivation was turned up to a whole new level, however, with many of us running dungeons continuously in order to get that last set item or hoping for that extremely rare drop. The expansion pack kept us playing for many years after the game’s release and I still have friends telling me how they’ve spun it back up again just for the sheer thrill of it.
Amongst all this is one constant: the torturous strain we put on our poor computer mice. The Diablo series can be played almost entirely with the mouse thanks to the way the game was designed, although you do still need the keyboard, especially at higher difficulties. In that regard it seemed like the Diablo series was destined for the PC and the PC only forever more. Indeed, even though Blizzard had experimented with the wild idea of putting StarCraft on the Nintendo 64, they did not attempt the same thing with the Diablo series. That is, up until now.
Today there are multiple sources reporting that Diablo III will indeed be coming to consoles. As Kotaku points out, the writing has been on the wall for quite some time, but today is the day everyone has started to pay attention to the idea. Now I don’t think there’s anything about Diablo’s gameplay that would prevent it from being good on a console, as opposed to StarCraft (which would be unplayable, as is any RTS on a console). Indeed the simple interfaces of past Diablo titles would lend themselves well to the limited input space of a controller, with few UI changes needed. What concerns most people though is the possibility that Diablo III could become consolized, ruining the experience for PC gamers.
Considering that we’ve already got a beta version of Diablo III on the PC it’s a safe bet that the primary platform will be the PC. Blizzard also has a staunch commitment to not launching games until they’re done, and you can bet that if there were any hints of consolization in one of their flagship titles it’d be picked up in beta testing long before it became a retail product. Diablo III coming to consoles is a sign of the times: PC gaming is still somewhat of a minority, and even titles with their roots firmly in the PC platform need to consider a cross-platform release.
Does this mean I’ll play Diablo III on one of my consoles? I must say I’m definitely curious, but I’ve already put in my pre-order for the collector’s edition of Diablo III on the PC. Due to the tie-in with Battle.Net it’s entirely possible that buying it on one platform will gain you access to it on another via digital download (something Blizzard has embraced wholeheartedly) and I can definitely see myself trying it out just for comparison. For me though the PC will always be the primary platform on which I game, and I can’t deny my mouse the torturous joy that comes from a good old-fashioned Diablo session.
Make no mistake, in the world of gaming the PC is far from being the top platform. The reasoning behind this is simple: consoles are easier to use and have a much longer life than your traditional PC, making them a far more attractive platform for gamers and developers alike. This has led to the consolization of the PC games market, ensuring that many games are developed primarily for consoles first with the PC becoming something of a second-class citizen, which did have some benefits (however limited they might be). The platform is far from forgotten however, still managing to capture a very respectable share of the games market and remaining the platform of choice for many eSports titles.
The PC games market has been no slouch, with digital sales powering the market to all-time highs. Despite that, the PC still remains a relative niche compared to other platforms, routinely seeing market share in the single-digit percentages. There were signs that it was growing, but it still seemed like the PC was to be forever relegated to the back seat. There’s speculation however that the PC is looking to make a comeback and could possibly even dominate consoles by 2014:
As of 2008, boxed copies of games had paltry sales compared to digital sales, and nothing at all looks to change. During 2011, nearly $15 billion is going to be attributed to digital sales while $2.5 billion belong to boxed copies. This is a trend I have to admit I am not surprised by. I’ll never purchase another boxed copy if I can help it.
The death of PC gaming has long been a mocking-point of console gamers, but recent trends show that the PC has nothing to stress over. One such trend is free-to-play, where games are inherently free, but support paid-services such as purchasing in-game items. This has proven wildly successful, and has even caused the odd MMORPG to get rid of its subscription fee. It’s also caused a lot of games to be developed with the F2P mechanic decided from the get-go.
The research comes out of DFC Intelligence, and NVIDIA is the one who’s been spruiking it as the renaissance of PC gaming. The past couple of years do show PC game sales continuing to grow despite console dominance, but the prediction starts to get a little hairy when it forecasts a decline in console sales next year when there doesn’t seem to be any evidence for it. The growth in PC sales is also strikingly linear, leading me to believe that it’s heavily speculation-based. Still, it’s an interesting notion to toy with, so let’s have a look at what could (and could not) be driving these predictions.
For starters the data does not include mobile platforms like smartphones and tablets, which for the sake of comparison is good as they’re not really on the same level as consoles or PCs. Sure, they’ve also seen explosive growth in the past couple of years, but they’re still a nascent platform for gaming and drawing conclusions from the small amounts of data available would give you wildly different results based purely on your interpretation.
A big driver behind these numbers would be the surge in the number of free-to-play, micro-transaction based games entering the market. Players of these games will usually spend over and above what they would on a similar game with a one-off cost. As time goes on there will be more of these titles appealing to a wider audience, thereby increasing the revenue of PC games considerably. Long-time gamers like me might not like having to fork out for parts of the game, but you’d be hard pressed to argue that it isn’t a successful business model.
Another factor could be that the current console generation is getting somewhat long in the tooth. The Xbox 360 and PlayStation 3 were both launched some 5 to 6 years ago, and whilst the hardware has performed admirably, the disparity between what PCs and consoles are capable of is hard to ignore. With neither Microsoft nor Sony mentioning any details about successors to the current generation (nor confirming whether they’re actually working on them), this could see some gamers abandon their consoles for the more capable PC platform. Considering even your run-of-the-mill PC is now capable of playing games beyond the console level, it wouldn’t be surprising to see gamers make the change.
What sales figures don’t tell us however is which platform developers will choose to release on. Whilst the PC industry as a whole might be more profitable than consoles, that doesn’t necessarily mean it will be more profitable for everyone. Indeed titles like Call of Duty and Battlefield have found their homes firmly in the console market, with the PC being the niche. The opposite is true for many online free-to-play games, which have yet to make a successful transition onto consoles. It’s quite possible that these sales figures simply reflect growth in one section of the PC market while the rest remains the same.
Honestly though I don’t think it really matters either way, as game developers have now shown that it’s entirely possible to have a multi-platform release that doesn’t make any compromises. Consolization then will just be a blip in the long history of gaming, a relic of the past that we won’t see repeated. The dominant platform of the day will come and go as it has throughout the history of gaming, but what really matters is the experience each of them can provide. As it looks right now all of them are equally capable in the hands of good developers, and whilst these sales projections predict the return of the PC as the king platform, in the end it’ll be nothing more than bragging rights for us long-time gamers.
Technological innovations, you know, those things that are supposed to make our lives easier, usually end up becoming the bane of our existence not long after they’ve lost their novelty. I can’t tell you how many times people have said they’ve lost control of their email inbox or that they’re constantly distracted by people trying to contact them over the phone, damning the technology for allowing people to interrupt whatever the heck they were doing. What amuses me is that I use many of the same technologies they do yet I don’t feel the same level of pressure, leaving me to wonder what the heck they’re complaining about.
Now I’m not saying that email, IM, Twitter et al. are not distracting; indeed our techno-centric culture is increasingly skewed towards being a distracted one by a veritable tsunami of communications tools. I myself struggled with Twitter not too long ago when I attempted to use it the “proper” way over a weekend, seeing my productivity hit the floor as I struggled to strike a balance between my level of engagement and the amount of work I got done. I soon realised that using the service the proper way meant I just ended up as distracted as everyone else, with almost zero benefit to me other than the small bit of self-satisfaction that I was totally doing this social media thing right for a change.
In essence I feel that the reason people get so distracted by these tools is that they feel obligated to respond to them immediately, rather than at a time which suits them best. Thus the tool that’s meant to help your productivity becomes a burden, interrupting you at the worst possible time and breaking you out of the flow of the work you were in. If you find yourself in this position you need to set up strict rules for interacting with that particular technology, rules that suit you rather than everyone else. How you go about this is left as an exercise for the reader, but the most effective tactic (I’ve found, at least) is to check your email/Twitter/whatever only at certain times during the day and ignore it at all other times.
The retort I usually get for advocating this kind of stance is “What if something important happens in the interim?”. Thinking really hard about it, I can’t think of anything truly important that’s come to me via email, IM or Twitter that didn’t first reach me through some more direct means (like my phone). If you’re relying on these distinctly one-way platforms, with no way to verify that the person has actually received your message, then the message you’re sending can’t really be all that important and can wait a few hours for a response. If it can’t, then use a more direct means of communicating; otherwise you’re just forcing people into the same technological hell that you yourself feel trapped in, continuing a vicious cycle that just doesn’t need to exist.
However sometimes people are just looking for a scapegoat for their situation, and it’s far easier to blame a faceless technology than it is to look internally and work out why they’re so distracted. I can kind of understand people getting caught up with communications clients, especially when it’s part of your job, but when you think something like RSS is too distracting (you know, where you choose to subscribe to a site because you’re interested in it) then the problem isn’t the technology, it’s your inability to recognize that you’re wasting time. I get literally hundreds of items in my RSS reader every day, but do I read them all? Heck no; at most I’ll skim the titles, and if I recognize a story I’ve already read I won’t go back and read it again.
Just seems like common sense to me.
It’s also not helped by the fact that many of us now carry our distractions with us. My phone has all the distraction capability of a modern PC, and if it weren’t for my strict rules about only checking things at certain times I’m sure I’d be in the same distraction hell as everyone else. Of course, even though the platform may be different the same rules apply: it’s the feeling of obligation that drives us to distraction when realistically the obligation doesn’t exist, and we’re just slotting into a social norm that ends up wreaking havoc.
Thus all I’m advocating is taking back control of the technology rather than letting it control us. All of these distractions are tools to be used to our advantage, and the second they stop being helpful we need to step back and question whether we should change the way we use them. Otherwise we end up being used by the tools we meant to use, blaming them for problems we in fact caused ourselves.