It was a late night in March 2007 when, deep in the bowels of the Belconnen shopping mall, dozens of console gamers gathered. I sat there, my extremely patient and soon-to-be wife by my side, alongside them, eagerly awaiting what was to come, adrenaline surging despite the hour rapidly approaching midnight. We were all there for one thing: the release of the PlayStation 3, and just under an hour later all of us would walk out of there with one tucked under our arms. I stayed up far too long setting the whole system up only to crash out before I was able to play any games on it. That same PlayStation, the one I paid a ridiculous price for in both cash and sleep, still sits next to my TV today alongside every other current console.
Well, apart from one, the Wii U.
The reason I'm regaling you with tales of my more insane gamer years is not to humblebrag my way into some kind of gamer cred; it's more to highlight the fact that between then and now six years have passed. I've seen console games rapidly evolve from the first tentative titles, which barely stressed the hardware, to today's AAA titles, which exploit every single aspect of the system they run on. Back in their day both the PlayStation 3 and Xbox 360 were computational beasts that could beat most other platforms in raw number-crunching potential without breaking a sweat. Today, however, that's no longer the case: the PC has long since retaken that crown, and people are starting to notice.
Of course console makers are keenly aware of this and, whilst the time between generations is increasing, they still see the need to furnish a replacement once the current generation starts getting long in the tooth. Indeed, if current rumours are anything to go by, we'll likely see both the PlayStation 4 and the Xbox-something this year. However the rather lackluster sales of the first instalment of the next generation of consoles (the Nintendo Wii U) have led at least one industry critic to be rather pessimistic about whether the next generation is really needed:
Whatever the case, what lessons can Sony and Microsoft take on board from how their rival has fared, as they prepare to make their moves into the next console generation? Well, there’s one immediately apparent lesson: Don’t start a new fucking console generation, because it’s a bad climate and triple-A gaming is becoming too fat and toxic to support its own weight. If you make triple-A games even more expensive and troublesome to develop – not to mention forcing them to adhere to online and hardware gimmicks that shrink and alienate the potential audience even further – then you will be driving the Titanic smack into another iceberg in the hope that it’ll somehow freeze shut the hole the first one made.
The thing is, the problems affecting the Wii U don't really translate to Sony or Microsoft. The Wii U was Nintendo's half-hearted attempt to recapture the more "hardcore" gaming crowd which, let's be honest here, was a small minority of their customer base. The Wii was so successful because it appealed to the largest demographic that had yet to be tapped: those who traditionally did not play video games. The Wii U, whilst comparable to current-gen consoles, doesn't provide enough value to end users for them to fork out the cash for an upgrade. That then translates into developers not wanting to touch the platform, which starts a vicious downward spiral that'll be incredibly hard to break out of.
However the biggest mistake Yahtzee makes is in assuming the next generation of consoles will be harder to develop for, and this is simply not the case.
Both the Xbox 360 and the PlayStation 3 are incredibly complicated beasts to program for, with the former running on a custom variant of PowerPC and the latter running on Sony's attempt to build a supercomputer, the Cell. Both of these had their own quirks, nuances and tricks that developers used to squeeze more performance out of them, none of which were translatable to any other platform. The next generation, however, comes to us with a very familiar architecture backing it (x86-64), one with decades, yes decades, of programming optimizations, frameworks and development behind it. Indeed all the investment that game developers have made in PC titles (which they've thankfully continued to make despite the platform's diminutive market share) will directly translate to the next-generation platforms from Microsoft and Sony. Any work on either platform will also translate directly to the other, which is going to make cross-platform releases far cheaper, easier and of much higher quality than they have been previously.
In principle I agree with the idea: we don't need another generation of consoles like those of the past, where developers were forced to retool and spend the next two years catching up to the technology. However the next generation we're getting is nothing like the past and is shaping up to be a major boon to both developers and consumers. As far as we can tell the PlayStation 4 and Durango are going to be nothing like the Wii U, with many major developers already on board for both platforms and nary a crazy peripheral sighted for either of them. To cite the Wii U as the reason why the next generation isn't needed is incredibly short-sighted, as Nintendo has shown it's no longer in the same market as Sony and Microsoft.
The current generation of consoles has run its course and it's time for their replacements to take the stage. The convergence of technology between the two major platforms will only mean good things for developers and consumers alike. There are issues plaguing the wider industry, there's no doubt about that, and whilst I won't say the next generation will be a panacea for those ills, it's a good step in the right direction, as there's an incredible amount of developer time to be saved by the switch to a more common architecture. Whether that translates into better games, or whatever Yahtzee is ultimately lusting after, remains to be seen, but the next generation is a bright light on the horizon, not an iceberg threatening to sink the industry.
The ability to swap components around has been an expected feature for PC enthusiasts for as long as I can remember. Indeed the use of integrated components was traditionally frowned upon, as they were typically of lower quality and, should they fail, you were simply left without that functionality with no recourse but to buy a new motherboard. Over time, however, the quality of integrated components has increased significantly and many PC builders, myself included, now forego the cost of additional add-in cards in favour of their integrated brethren. There are still some notable exceptions to this rule, like graphics cards for instance, and there were certain components most of us never thought would end up integrated, like the CPU.
Turns out we could be dead wrong about that.
Now it's not like fully integrated computers are a new thing; in fact this blog post is coming to you via a PC that has essentially zero replaceable or upgradable parts, commonly referred to as a laptop. Apple has famously taken this level of integration to its logical extreme in order to create its relatively high-powered line of laptops with slim form factors, and many other companies have since followed suit due to the success Apple's laptop line has had. Still, laptops were a relatively small market compared to the other big CPU consumers of the world (namely desktops and servers), both of which have resisted the integrated approach, mostly because it didn't provide any direct benefits like it did for laptops. That may change if the rumours about Intel's next-generation chip, Haswell, turn out to be true.
Reports are emerging that Haswell won't be available in a Land Grid Array (LGA) package and will only be sold in the Ball Grid Array (BGA) form factor. For the uninitiated, the main difference between the two is that the former is the current standard, which allows processors to be replaced on a whim. BGA, on the other hand, is the package used when an integrated circuit is to be permanently attached to its circuit board, as the "ball grid" is in fact blobs of solder used to attach it. Not providing an LGA package essentially means the end of any kind of user-replaceable CPU, something which has been a staple of the enthusiast PC community ever since its inception. It also means a big shake-up of the OEM industry, which now has to make decisions about what kinds of motherboards to make, as the current wide range of choice can't really be supported with the CPUs being integrated.
My initial reaction to this was one of confusion, as it would signify a really big change from how the PC business has been run for the past three decades. This isn't to say that change isn't welcome, indeed the integration of rudimentary components like the sound card and NIC were very much welcome additions (after their quality improved), however making the CPU integrated essentially puts the kibosh on the high level of configurability that we PC builders have enjoyed for such a long time. This might not sound like a big deal, but for things like servers and fleet desktop PCs that customizability also means the components are interchangeable, making maintenance far easier and cheaper. Upgradeability is another reason, however I don't believe that's as big a factor as some would make it out to be, especially with how often socket sizes have changed over the past five years or so.
What's got most enthusiasts worried about this move is the siloing of particular feature sets to certain CPU designations. To put it in perspective, there are typically three product ranges for any CPU family: the budget range (typically lower power and less performance, but dirt cheap), the mid range (aimed at budget-conscious enthusiasts and fleet units) and the high-end performance tier (almost exclusively for enthusiasts and high-performance computing situations). If these CPUs are tied to the motherboard it's highly likely that some feature sets will be reserved for certain ranges of CPUs. Since there are many applications where a low-power PC can take advantage of high-end features (like oodles of SATA ports, for instance) and vice versa, this is a valid concern and one that I haven't been able to find any good answers to. There is the possibility of OEMs producing CPU daughterboards like the slotkets of old, however without an agreed-upon standard you'd effectively be locking yourself into that vendor, something which not everyone is comfortable doing.
Still, until I see more information it's hard for me to make up my mind on where I stand on this. There's a lot of potential for it to go very, very wrong, which could see Intel on the wrong side of a community that's been dedicated to it for the better part of 30 years. They're arguably in the minority, however, and it's very possible that Intel is getting increasing numbers of orders that require BGA-style chips, especially where their Atoms can't cut it. I'm not sure what they could do to win me over, but I get the feeling that, just like the other integrated components I used to despise, there may come a time when I become indifferent to it and those zero-insertion-force sockets of old will be a distant memory, a relic of PC computing's past.
Even though in my heart I'm a PC gamer, I was never without a console growing up. For the most part I was a Nintendo kid, seeing every console from the NES upwards make its way into my family's living room. That changed once I had my own job and enough money to buy a PlayStation 2, secluding myself away in my room to play Gran Turismo for hours on end, trying to justify the $700-odd sum I had spent on this magnificent piece of hardware. Nowadays you'll find every major console lined up beside my TV so that I can indulge myself in any title regardless of its platform.
The past couple of decades have been quite an interesting time for consoles. They really came into prominence after the release of the Nintendo Entertainment System back in 1985 (two years later for us Australians), and Nintendo continued to be highly successful with its successor. Their reign as the king of consoles came to an end with the release of the original PlayStation back in 1994, which saw Sony catapulted to the top of the console kingdom. Microsoft, seeing a great opportunity to compete in the gaming market, released the Xbox back in 2001, and whilst it didn't dethrone Nintendo or Sony it enjoyed some mild success in the market, even if it wasn't a success financially. The release of the PlayStation 2 kept Sony at the top for quite a while, as neither the Xbox nor Nintendo's GameCube could hold a candle to it.
The current generation of consoles saw another shift in the king-of-consoles crown, but not for the traditional reasons gamers had come to expect. Whilst the PlayStation 3 was a technical marvel, the Xbox 360 hit the trifecta of price, performance and a catalogue of good platform exclusives that helped build it up to the success it is today. Neither of them, however, could hold a candle to the success that is the Nintendo Wii. Aiming at their largest untapped market, Nintendo created a console that appealed to non-gamers and gamers alike. The result was that they couldn't manufacture the things fast enough, seeing widespread shortages that only helped sustain the fever pitch surrounding the console. With a grand total of 90 million consoles sold to date it's well on its way to being the most successful console ever released, although it still has a long way to go to match the PlayStation 2 (which comes in at a whopping 153 million).
The next generation of consoles is still some ways off, however. Traditionally you'd see a new console generation every five years, but the only maker with any official plans so far is Nintendo, whose Wii U console isn't slated for release until sometime next year. Granted, the current generation of consoles has aged far better than any previous generation, what with developers finding all sorts of optimizations to squeeze extra performance out of them, but even the best programming can't hide the aging hardware running in these consoles. It's then up for debate as to what the next generation of consoles will look like, and there's speculation that it may be the last.
Richard Garriott AKA Lord British, games industry celebrity and space tourist, has gone on record that he believes that the next generation of consoles will be the last:
IG: It’s always tough to completely change the way you look at things. The bigger the company, the more conservative they tend to be. Do you think consoles as we know them are doomed, or are we going to get a new generation, or is it just becoming irrelevant?
RGC: I think we might get one more generation, might, but I think fundamentally they’re doomed. I think fundamentally the power that you can carry with you in a portable is really swamping what we’ve thought of as a console.
IG: If we’ve got a smartphone that can do Xbox level graphics, which we’ve almost got, and I can hook that up to a TV and use a controller, what’s the difference between that and a console? It’s just whatever games are available.
RGC: Yes, exactly. That’s why I think there may be one more round of consoles left, but not many.
The idea of consoles going away isn't a new one, hell, there was a time when everyone thought the PC would be the dominant platform for all time, but their being replaced outright by mobile devices is a new one on me. For starters, whilst you can get current-Xbox-level graphics on a handheld, it's always going to be a game of cat and mouse as to how far ahead the consoles are. Realistically, current smartphones' capabilities are only catching up to what was possible five years ago, not what's possible today. Indeed, once the next generation of consoles is released, smartphones (and other portable entertainment systems) will again be behind in terms of technology. The fact of the matter is you can't shoehorn current-generation technology into a portable form factor, so I doubt we'll see the loss of consoles after the next generation.
There is, however, potential for the console market to be shaken up somewhat by the portable industry. The Wii showed that a console can succeed without having cutting-edge technology in it (the Wii is basically a GameCube on the inside), and it's that same market that gobbled up the Wii that will turn to other places for its gaming fix. Whether those devices will make the transition into some form of home-based entertainment, as consoles currently provide, remains to be seen, but there's definitely potential for it to happen.
As for the future of console gaming? More of the same, I believe. Whilst we may have seen some technical marvels in the form of the Wii, PlayStation Move and Kinect, the bread and butter of these consoles doesn't appear to be going anywhere, even in the face of challengers like the iPhone. For the non-gamer market, however, there's a strong possibility of a shift away from the Wii in favour of smartphones or tablets, but there's still a massive market that will crave the better graphics and performance that can only come from a console.