The primary driver for any company, whether it’s bound to the public via the whims of the stock market or not, is to create value and wealth for its various stakeholders. Few companies do that as well as Valve, whose profit per employee is among the highest in any industry and an order of magnitude above all its competitors. This is almost wholly due to their domination of the digital distribution market, but their innovative use of Free to Play for their flagship games has certainly contributed as well. Of course the question on everyone’s minds is where Valve will go from here, and their latest announcement, which I speculated about last year, seems to be their answer.
Today Valve announced SteamOS, essentially a Linux environment geared towards playing games. There are also a number of additional features that will be made available with its release, including the also recently announced Family Sharing program, which allows you to share your Steam library with others. Whilst this isn’t the SteamBox that many were anticipating it’s essentially Valve’s console launch, as they’ve stated numerous times in the past that anyone would be able to build their own SteamBox and SteamOS would be the basis for that. What SteamOS actually entails, in terms of functionality and look and feel, remains to be seen, but the launch site promises it will be available soon.
SteamOS comes off the back of Valve’s substantial amount of work on the Linux platform, with a decent chunk of the Steam library now available there. If we take Gabe’s word for it much of this was driven by the fact that Windows 8 was a “catastrophe” for gaming, something which I don’t agree with, and Valve sees their future being the Linux platform. Whilst it’s admirable that they’re investing a lot in a platform that’s traditionally been a tiny sliver of the PC gaming market the decision to use Linux is, in my opinion, more likely profit driven than anything else, as it gets them a foothold in an area where they don’t currently have any: the home living room.
Big Picture mode was their first attempt at this, one which was pretty squarely aimed at replicating the console experience using the Steam platform. However since most people run their games on a PC dedicated to such activities this meant that Steam’s penetration into the living room was minimal. SteamOS, and by extension the SteamBox, is a more targeted attempt to break into this area with its additional media features and family friendly control options. I don’t begrudge them for this, the sole reason companies exist is to generate profit, however some seem to think Valve’s moves towards Linux are purely altruistic when I can assure you they’re anything but.
Of course the biggest factor that will determine the success or failure of this platform will be whether or not the big developers and publishers see the SteamOS as a viable platform to develop for. As many are speculating Valve could do this by drastically reducing their cut of sales on the platform, something which would go a long way to making developing for Linux viable. I don’t think Valve needs to do a whole lot to attract indie developers to it as many of the frameworks they use already natively support Linux (even XNA does through some 3rd party tools) and as the Humble Indie Bundle has shown there’s definitely enough demand to make it attractive for them.
If any other company attempted to do this I’d say they were doomed to fail, but Valve has the capital and captive market to make this idea viable. I’m sure it will see a decent adoption rate just out of pure curiosity (indeed I’ll probably install it just to check it out) and that could be enough to give it the critical mass needed to see adoption rates skyrocket. Whether or not those numbers will be big enough to convince the developers and publishers to get on board is something that will play out over the next couple of years, and will ultimately be the deciding factor in the platform’s success or failure.
One of the biggest arguments I’ve heard against developing anything for the Android platform is the problem of fragmentation. Now it’s no secret that Android is the promiscuous smartphone operating system, letting anyone and everyone have their way with it, but that has led to an ecosystem made up of numerous devices that all have varying capabilities. Worse still the features of the Android OS itself aren’t very standard either, with only a minority of users running the latest software at any point in time and the remainder spread across older versions, none of which ever forms a true majority. Google has been doing a lot to combat this but unfortunately the advantage of the unified iOS platform is hard to deny, especially when you look at the raw numbers from Google themselves.
Android developers’ lives have been made somewhat easier by the fact that they can declare lists of required features and lock out devices that don’t have them, however that also limits your potential market, so many developers aren’t too stringent with their requirements. Indeed those checks can also be circumvented on the user’s end, which can allow users you explicitly wanted to disallow to access your application (à la ChainFire3D emulating NVIDIA Tegra devices). This might not be an issue for most of the basic apps out there but for things like games and applications that require certain performance characteristics it can be a real headache for developers to work with, let alone the sub-par user experience that comes as a result of it.
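For those unfamiliar with the mechanism, that feature gating is done through declarations in the app’s manifest, which the Play store uses to filter out incompatible devices. A minimal sketch might look something like this (the package name and the particular feature choices here are purely illustrative):

```xml
<!-- AndroidManifest.xml: declaring hard requirements so the store
     filters out devices that can't meet them (illustrative values) -->
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
          package="com.example.mygame">

    <!-- Require a multitouch-capable touchscreen -->
    <uses-feature android:name="android.hardware.touchscreen.multitouch"
                  android:required="true" />

    <!-- Require OpenGL ES 2.0 or better (value is the GL ES version
         encoded as major in the high 16 bits, minor in the low 16) -->
    <uses-feature android:glEsVersion="0x00020000"
                  android:required="true" />

</manifest>
```

Note that these declarations only drive store-side filtering, which is exactly why the workarounds mentioned above exist: a sideloaded APK or a spoofed device property sails right past them.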
This isn’t made any easier by handset manufacturers and telecommunications providers dragging their feet every time an upgrade comes along. Even though I’ve always bought unlocked and unbranded phones the time between Google releasing an update and my receiving it has been on the order of months, sometimes coming so late that I’ve upgraded to a new phone before it arrived. This is why the Nexus range of phones directly from Google is so appealing: you’re guaranteed those updates immediately and without any of the cruft that your manufacturer of choice might cram in. Of course then there was that whole issue with supply, but that’s another story.
For what it’s worth Google does seem to be aware of this and has tried to make inroads into solving it in the past. None of these attempts have been particularly successful but their latest, called Google Play Services, might just be the first step in the right direction towards eliminating at least one aspect of Android fragmentation. Essentially, instead of most new feature releases coming through Android updates like they have done in the past, Google will instead deliver them via the new service. It’s done completely outside the Play store, heck it even has its own update mechanism (which isn’t visible to the end user), and is essentially Google’s solution to eliminating the feet dragging that carriers and handset manufacturers are renowned for.
On the surface it sounds great, as pretty much every Android device is capable of running this, which means that many features that just aren’t available to older versions can be delivered via Google Play Services. This will also help developers immensely as they’ll be able to code against those APIs knowing they’ll be widely available. I’m a little worried about its clandestine nature, however, as the silent, non-interactive updating process seems like a potential attack vector, but smarter people than me are working on it so I’ll hold off on bashing them until there’s a proven exploit.
Of course the one fragmentation problem this doesn’t solve is the one that comes from the varying hardware that the Android operating system runs on. Feature levels, performance characteristics and even screen resolution and aspect ratio are things that can’t be solved in software and will still pose a challenge to developers looking to create a consistent experience. It’s the lesser of the two problems, granted, but this is the price that Android has to pay for its wide market domination. Short of pulling a Microsoft and imposing design restrictions on manufacturers I don’t think there’s much that Google can do about this and, honestly, I don’t think they have any intentions to.
How this will translate into the real world remains to be seen, however, as whilst the idea is good the implementation will determine just how far this goes towards solving Android’s fragmentation issue. Personally I think it will work well, although not nearly as well as controlling the entire ecosystem would, but that freedom is exactly what allowed Android to get to where it is today. Google isn’t showing any signs of losing that crown yet either, so this really is all about improving the end user experience.
The current norms for games consoles are going to be flipped on their heads when the next generation comes online. There are some changes we could argue were expected, like the lack of backwards compatibility, but the amount of change coming our way really doesn’t have any comparison in previous console generations. In nearly all respects I believe this is a good thing, as many of the decisions made seemed to be born out of a mindset that worked two decades ago but was rapidly becoming outdated in today’s market. However one significant change could have a detrimental impact on consoles at large and could open up an opportunity for the PC (and by extension the SteamBox) to make a comeback.
The next generation of games consoles are shaping up to be some of the most developer friendly platforms ever created. Not only are they x86 under the hood, allowing many frameworks developed for regular PC games to be ported across with relative ease, many of the features they have are a direct response to requests from developers. This means that developers will be able to make use of the full power of these consoles from much earlier on, and whilst this will make for some great launch titles that are leaps and bounds above their previous generation predecessors, it does mean the consoles will reach their peak early, and that might not be a good thing.
It was always expected that the best games of a console generation would come out towards the end of its lifecycle. This was due to games developers becoming far more familiar with the platform and the tools reaching a maturity level that made creating those games possible. The current generation, with its record breaking longevity, is a great example of this, with the demos of current and next gen titles running on both platforms being very comparable. With the next generation being so developer friendly, however, I can’t imagine it taking long for developers to exploit the system to its fullest extent. Couple this with the next gen expected to have a similar lifespan to the current gen and you’ve got a recipe for console games being stagnant (from a technology point of view) for a very long time.
Granted there will always be improvements that can be made and I’d still expect the best titles to come towards the end of the lifecycle. However the difference between first year and last year titles will be a lot smaller, and I doubt many end users will notice the difference. With the shared x86 base, however, there’s a big potential for the PC versions of games to start outpacing their console counterparts much earlier on, as some of the optimizations will translate readily across, something which just wasn’t possible with previous platforms.
Indeed due to the current gen’s limitations we’ve already begun to see something of a resurgence in PC gaming. Now it’s likely that this could be dampened when the next gen of consoles gets released, however due to the reasons I’ve outlined I’d expect to see the cycle begin again not too long afterwards. I do doubt that this will see PCs return to the glory days of being the king of gaming, but there’s a definite opportunity for them to grab some significant market share, possibly enough to be elevated past their current also-ran status.
Of course this is wild speculation on my part but I do believe that the next generation of consoles will peak much earlier in its lifecycle which, as history has shown us, will usher people back towards the PC as a platform. With the SteamBox readying itself for release around the same time there’s ample opportunity for current gen console customers to be swayed over to the PC platform, even if it’s camouflaged itself as one of the enemy. In the end though the next gen consoles will still represent good value for money for several years to come, even if they’re quickly outpaced.
It was a late night in March 2007 when, deep in the bowels of the Belconnen shopping mall, dozens of console gamers gathered. I sat there, my extremely patient and soon to be wife by my side, alongside them eagerly awaiting what was to come, adrenaline surging despite the hour rapidly approaching midnight. We were all there for one thing, the release of the PlayStation 3, and just under an hour later all of us would walk out of there with one tucked under our arms. I stayed up far too long setting the whole system up only to crash out before I was able to play any games on it. That same PlayStation, the one I paid a ridiculous price for in both cash and sleep, still sits next to my TV today alongside every other current console.
Well, apart from one, the Wii U.
The reason behind me regaling you with tales of my more insane gamer years is not to humblebrag my way into some kind of gamer cred, no, it’s more to highlight the fact that between then and now six years have passed. I’ve seen console games rapidly evolve from the first tentative titles, which barely stressed the hardware, to today’s AAA titles, which exploit every single aspect of the system they run on. Back in their day both the PlayStation3 and Xbox360 were computational beasts that could beat most other platforms in raw processing power without breaking a sweat. Today however that’s no longer the case, with the PC having long retaken that crown, and people are starting to notice.
Of course console makers are keenly aware of this, and whilst the time between generations is increasing they still see the need to furnish a replacement once the current generation starts getting long in the tooth. Indeed if current rumours are anything to go by we’ll likely see both the PlayStation4 and Xbox-something this year. However the rather lackluster sales of the first installment of the next console generation (the Nintendo WiiU) has led at least one industry critic to be rather pessimistic about whether the next generation is really needed:
Whatever the case, what lessons can Sony and Microsoft take on board from how their rival has fared, as they prepare to make their moves into the next console generation? Well, there’s one immediately apparent lesson: Don’t start a new fucking console generation, because it’s a bad climate and triple-A gaming is becoming too fat and toxic to support its own weight. If you make triple-A games even more expensive and troublesome to develop – not to mention forcing them to adhere to online and hardware gimmicks that shrink and alienate the potential audience even further – then you will be driving the Titanic smack into another iceberg in the hope that it’ll somehow freeze shut the hole the first one made.
The thing is, the problems that are affecting the WiiU don’t really translate to Sony or Microsoft. The WiiU was Nintendo’s half-hearted attempt to recapture the more “hardcore” gaming crowd which, let’s be honest here, was a small minority of their customer base. The Wii was so successful because it appealed to the largest demographic that had yet to be tapped: those who traditionally did not play video games. The WiiU, whilst being comparable to current gen consoles, doesn’t provide enough value to end users for them to fork out the cash for an upgrade. That then translates into developers not wanting to touch the platform, which starts a vicious downward spiral that’ll be incredibly hard to break from.
However the biggest mistake Yahtzee makes is in assuming the next generation of consoles will be harder to develop for, and this is simply not the case.
Both the Xbox360 and the PlayStation3 are incredibly complicated beasts to program for with the former running on a custom variant of PowerPC and the latter running on Sony’s attempt to develop a supercomputer, the Cell. Both of these had their own quirks, nuances and tricks developers used in order to squeeze more performance out of them, none of which were translatable to any other platform. The next generation however comes to us with a very familiar architecture backing it (x86-64) which has decades, yes decades, of programming optimizations, frameworks and development behind it. Indeed all the investment that game developers have made in PC titles (which they’ve thankfully continued to do despite its diminutive market share) will directly translate to the next generation platforms from Microsoft and Sony. Any work on either platform will also directly translate to the other which is going to make cross-platform releases far cheaper, easier and of much higher quality than they have been previously.
In principle I agree with the idea, we don’t need another generation of consoles like we have in the past where developers are forced to retool and spend the next 2 years catching up to the technology. However the next generation we’re getting is nothing like the past and is shaping up to be a major boon to both developers and consumers. As far as we can tell the PlayStation4 and Durango are going to be nothing like the WiiU with many major developers already on board for both platforms and nary a crazy peripheral has been sighted for either of them. To cite the WiiU as the reason why the next generation isn’t needed is incredibly short sighted as Nintendo has shown it’s no longer in the same market as Sony and Microsoft are.
The current generation of consoles has run its course and it’s time for its replacements to take the stage. The convergence of technology between the two major platforms will only mean good things for developers and consumers alike. There are issues plaguing the wider industry, there’s no doubt about that, and whilst I won’t say that the next generation will be the panacea to its ills it’s a good first step in the right direction, as there’s an incredible amount of developer time to be saved in the switch to a more common architecture. Whether that translates into better games or whatever Yahtzee is ultimately lusting after remains to be seen, but the next generation is a bright light on the horizon, not an iceberg threatening to sink the industry.
As any Call of Duty player will tell you there was always a good developer and a not-so-good developer behind their franchise of choice. Unquestionably everyone loved all of Infinity Ward’s releases and it’s not a long stretch to say that they are responsible for Call of Duty’s success, thanks almost entirely to the original Modern Warfare. Treyarch on the other hand was always second place to them with their games typically being considered the off years for the franchise with the sales figures reflecting that. Indeed when the original Black Ops was released many of the compliments to it felt backhanded, the best of which I recall as being “the best Call of Duty Treyarch has made” firmly segregating it away from its glorious Infinity Ward brethren.
Still it’s not like they made atrocious games, indeed whilst the original Black Ops might not have held a candle to Modern Warfare 2 it still managed to rake in over a billion dollars in six weeks, an accomplishment that not many game developers can boast. It’s still somewhat slower than Infinity Ward, who were able to accomplish the same thing in about a third of the time. However after playing through Black Ops II I really felt that the overall quality of Treyarch’s recent release was at least on par with, if not exceeding, that of its predecessors, even those from Infinity Ward. I posited the idea to a couple of my friends that Treyarch might take the crown as the better Call of Duty developer, and it looks like they might be on track to accomplish that:
Activision may have skipped its annual five-day totaling of Call of Duty sales, but the publisher announced this morning the latest installment, Black Ops 2, grossed $1 billion in 15 days.
The publisher announced shortly after Call of Duty Black Ops 2‘s launch the annual blockbuster made $500 million in 24 hours at retail, eclipsing Modern Warfare 3’s record of $400 million the year prior. The lack of a five-day total, which the company had done for three years running, gave some analysts “cause for concern” that Black Ops 2 wasn’t selling as well as previous installments.
Going from six weeks to 15 days to hit the same target is a pretty impressive feat in the space of only a couple of years. You could attribute this to the popularity of the Call of Duty franchise but, coming from someone who’s played all of their recent titles, Black Ops II really is that much better than the rest of them. Indeed checking out the sales stats since then for each of the respective platforms shows (apart from PC still being very much in the minority at around 4%) that it’s on track to outsell all of its predecessors in the space of about two to three months on each platform. Should that happen it wouldn’t be the first Treyarch title to outsell Infinity Ward, but it would certainly cement their position as equal developers.
The question then becomes what this will mean for the Treyarch/Infinity Ward developer duality in the Call of Duty franchise. In all honesty I don’t think it’ll mean much overall, indeed each iteration of Call of Duty for the past couple of generations has outsold the last, but the fervour with which fans adopted this most recent title was definitely a surprise for me, even if I thought the quality was a definite jump up from Treyarch’s previous games. Indeed as long as the series keeps making money and breaking sales records I don’t think we’ll see any major changes in the franchise, either from a gameplay or developer perspective. For me it’s just interesting to see how the perceptions have changed over the past couple of years as I’ve witnessed the back and forth between the two developers behind the biggest game franchise in the world, and how a perceived duality in quality has, in essence, simply disappeared.
There’s no denying the success Apple has enjoyed thanks to their major shift in strategy under Steve Jobs’ reign. Before then they were seen as a direct competitor to Microsoft in almost every way: iMacs vs PCs, MacOS vs Windows and at pretty much every turn they were losing the battle save for a few dedicated niches that kept them afloat. That all changed when they got into the consumer electronics space and began bringing the sacred geek technology to the masses in a package that was highly desirable. There was one aspect of their business that suffered immensely because of this however: their enterprise sector.
Keen readers will note that this isn’t the first time I’ve mentioned Apple’s less than stellar support of the enterprise market and nothing has really changed in the 8 months since I wrote that last post. Apple as a company is almost entirely dedicated to the consumer space with token efforts for enterprise integration thrown in to make it look like their products can play well in the enterprise space. Strangely enough it would seem that this token effort is somehow working to convince developers that Apple (well really iOS) is poised to take over the enterprise space:
In the largest survey of its kind, Appcelerator developers were asked what operating system is best positioned to win the enterprise market. Developers said iOS over Android by a 53% to 38% margin. Last year, in its second quarter survey, the two companies were in a dead heat for the enterprise market, tied at 44%.
In a surprise of sorts, Windows showed some life as 33% said they would be interested in developing apps on the Windows 8 tablet.
Now there is value in gauging developers’ sentiment regarding the various platforms, it gives you some insight into which ones they’d probably prefer to develop for, however that doesn’t really serve as an indicator of which platform will win a particular market. I’d hazard a guess (one based on previous trends) that the same developers will tell you that iOS is the platform to develop for, even though it’s quite clear that Android is winning in the consumer space by a very wide margin. I believe there’s the same level of disconnect between what Appcelerator’s developers are saying and what the true reality is.
For starters, any foothold that iOS has in the enterprise space is not born of any effort that Apple has made, and all of it is down to non-Apple products. For iOS to really make a dent in the enterprise market it will need some significant buy in from its corporate overlords, and whilst there have been some inroads to this (like the Enterprise Distribution method for iOS applications) I’m just not seeing anything like that from Apple currently. All of their enterprise offerings are simplistic and token, lacking many of the features that are required by enterprises today. They may have mindshare and numbers that will help drive people to create integration between iOS products and other enterprise applications, but so does Android, meaning that’s really not an advantage at all.
What gets me is the (I’m paraphrasing) “sort of surprise” that developers were looking to Windows 8 for developing applications. Taken in the enterprise context the only real surprise is why there aren’t more developers looking at the platform, as if there’s any platform that has a chance at dominating this sector it is in fact Windows 8. There’s no doubting the challenges the platform faces, what with Apple dominating the tablet space that Microsoft is only just looking at getting into seriously, but the leverage they have for integrating with all their enterprise applications simply can’t be ignored. They may not have the numbers yet but if developer mindshare is the key factor here then Microsoft wins hands down, though that won’t show up in a survey that doesn’t include Windows developers (Appcelerator’s survey is from its users only and it currently does not support Windows Phone).
I’ve had my share of experience with iOS/Android integration with various enterprise applications and for what it’s worth none of them are really up to the same level as native platform applications. Sure you can get your email and even VPN back into a full desktop using your smartphone, but that’s nothing that hasn’t been done before. The executives might be pushing hard to get their iPads/toy du jour onto the enterprise systems, but those devices won’t penetrate much further until they can provide some real value to those outside the executive arena. Currently the only platform that has any chance of doing that well is Microsoft, with Android coming in second.
None of this means that Apple/iOS can’t do well in the enterprise space, just that there are other players in this market far better positioned to do so. Should Apple put some focus on the enterprise market it’s quite likely they could capture some market share away from Microsoft and their other partners, but their business model has been moving increasingly away from this sector ever since they first released the iPod over a decade ago. Returning to the enterprise world is not something I expect to see from Apple or its products any time soon, and no amount of developer sentiment is going to change that.
I’ve seen so many consoles come and go during my years as a gamer. I remember the old rivalries back in the day between the stalwart Nintendo fans and the just as dedicated Sega followers. As time went on Nintendo’s dominance became hard to push back against and Sega struggled to face up to the competition. Sony however made quite a splash with their original Playstation and was arguably the reason behind the transition away from game cartridges to the disc based systems we have today. For the last five years or so though there really hasn’t been much of a shake up in the console market, save for the rise of the motion controllers (which didn’t really shake anything up other than causing a giant fit of me-tooism from all the major players).
I think the reasons for this are quite simple: consoles became powerful enough to be somewhat comparable to PCs, the old school king of gaming. The old business model of having to release a new console every three years or so didn’t make sense when your current generation was more than capable of running modern games at a generally acceptable level. There was also the fact that Microsoft got burned slightly by releasing the Xbox360 so soon after the original Xbox, and I’m sure Sony and Nintendo weren’t keen on making the same mistake. All we’ve got now are rumours about the next generation of consoles, but by and large they’re not shaping up to be anything revolutionary like their current gen brethren were when they were released.
What’s really been shaking up the gaming market recently though is the mobile/tablet gaming sector. Whilst I’ll hesitate to put these in the same category as consoles (they are, by and large, not platforms with gaming as their primary purpose) they have definitely had an impact in the portable sector. At the same time the quality of games available on the mobile platform has increased significantly, and developers now look to build titles on mobile that wouldn’t have been reasonable or feasible only a few short years ago. This is arguably due to the marked increase in computing power that has been made available to even the most rudimentary of smartphones, which spurred developers on to be far more ambitious with the kinds of titles they develop for the platform.
What I never considered though was a crossover between the traditional console market and the now flourishing mobile sector. That’s where OUYA, an Android based game console, comes into play.
OUYA is, at its heart, a smartphone without a screen or a cellular chipset. Under the hood it boasts an NVIDIA Tegra 3 coupled with 1GB of RAM, 8GB of flash storage, Bluetooth and a USB 2 port for connectivity. For a console the specifications aren’t particularly amazing, in fact they’re downright pitiful, but it’s clear that their idea for a system isn’t something that can play the latest Call of Duty. Instead the OUYA’s aim is to lure that same core of developers, the ones who have been developing games for mobile platforms, over to their platform by making the console cheap, license free and entirely open. They’ve also got the potential to get a lot of momentum from current Android developers, who will just need a few code modifications to support the controller, giving them access to potentially thousands of launch titles.
I’ll be honest, at the start I was somewhat sceptical about what the OUYA’s rapid funding success meant. When I first looked at the console’s specifications and intended market I got the feeling that the majority of people ordering it weren’t doing so for the OUYA as a console, no, they were more looking at it as a cracking piece of hardware for a bargain basement price. Much like the Raspberry Pi, the OUYA gives you some bits of tech that are incredibly expensive to acquire otherwise, like a Tegra 3 coupled with 1GB RAM and a Bluetooth controller. However that was back when there were only 8,000 backers; as of this morning there are almost 30,000 orders in for this unreleased console. Additionally the hype surrounding the console doesn’t appear to be centred on the juicy bits of hardware underneath it, people seem to be genuinely excited by the possibilities that could be unlocked by such a console.
I have to admit that I am too. Whilst I don’t expect the OUYA to become the dominant platform, or to see big name developers rushing to release torrents of titles on it, the OUYA represents something the console market has been lacking: a cheap, low cost player that’s open to anyone. It’s much like the presence of an extremely cut-rate airline (think Tiger Airways in Australia): sure, you might not fly with them all the time because of the ridiculous conditions attached to the ticket, but their mere presence keeps the other players on their best behaviour. The OUYA represents a free, no holds barred arena where big and small companies alike can duke it out, and whilst there might not be many multi-million dollar titles made for the platform you can bet that the big developers won’t be able to ignore it for long.
I’m genuinely excited about what the OUYA represents for the console games industry. With innovation seemingly at a standstill for the next year or two it will be very interesting to see how the OUYA fares, especially considering the release date for the first production run is slated for early next year. I’m also very keen to see what kinds of titles will be available for it at launch and, hacker community willing, what kinds of crazy, non-standard uses for the device come out of it. I gladly plonked down $149 for the privilege of getting one with two controllers, and even if you have only a casual interest in game consoles I’d urge you to do much the same.
I can remember my first experience with PC multiplayer gaming. I can’t recall exactly which game it was, but I do remember running a 5 metre serial cable from my room across into my brother’s and then clicking the connect button frantically in the hope that we could play together. Alas we never managed to get it working and resigned ourselves to playing our games individually. For years afterwards my multiplayer experience was mostly limited to bouts on the various Nintendo consoles we purchased, with my fondest memories being the countless hours we whiled away on GoldenEye 007.
Online multiplayer was something that eluded me for quite some time. Being stuck out in the sticks of Wamboin my Internet connection lagged behind the times considerably, seeing me stuck on dialup until I switched to a rural wireless provider sometime in 2005. I’d make do by finding servers that were sympathetic to my HPB ways but even then the experience wasn’t particularly stellar. It then follows that I found solace in good single player games much more often than I did with ones that required me to find someone else to play with (with World of Warcraft being the notable exception).
The games industry, however, has been trending in the opposite direction. It’s increasingly rare to find a game that doesn’t have some token form of multiplayer in it, especially among titles that are part of a long running series. Indeed many recent titles that found their success as single player only games have since seen their sequels ship with some form of multiplayer attached. The trend is somewhat worrying for long time gamers like myself, as many of these efforts appear to be token attempts to increase the game’s longevity. Whilst this usually wouldn’t be a problem, it seems that in some cases the single player has suffered because of it, and this is why many gamers lament the appearance of multiplayer in games.
Personally though, I really haven’t seen much of a decline in game quality with the addition of multiplayer to new games. Indeed looking back at two sequels that found their feet as solid single player experiences and had multiplayer added afterwards (BioShock 2 and Portal 2) shows that it is possible to make a game with a token multiplayer aspect that doesn’t detract from the main game. It’s worth mentioning, however, that I didn’t play the multiplayer at all in BioShock 2, nor did I engage in the most recent effort of token multiplayerism found in Rage. Had I done so I might be telling a different story, one I might endeavour to investigate in the future.
All this being said, I did cringe a bit when two of my favourite franchises from BioWare, namely Mass Effect and Dragon Age, both recently announced that their upcoming titles would include some form of multiplayer. These are two series that have managed to go two releases without multiplayer, and no one can deny the success both of them have had. The question then becomes “why now?”, as they’d both have enough momentum to be successful just off their existing fan bases. It would appear there’s a perception that some form of multiplayer is now a required part of a game and that not developing it could adversely affect the game’s future. There’s a decent amount of evidence arguing the contrary, however, like Skyrim selling a whopping 7 million copies already (and all their past successes, of course).
The proof will be in the pudding, as it’s rather unjust to judge a game before it’s released to the public, and those games will be a good indicator of just how much a multiplayer section impacts the single player experience. Whilst I can’t recall any games that were noticeably worse off because of multiplayer being tacked on, I do understand the community’s concerns about how good, solid single player games could be ruined by focusing on something that, for a lot of people, adds no value to the game. I’ll make a point of giving the multiplayer in these titles a good work over when they’re released, just to see if it was worth the developers’ time to include it.
You’d think that since I invested so heavily in Silverlight when I was developing Lobaco that I would’ve been more outraged at the prospect of Microsoft killing off Silverlight as a product. Long time readers will know that I’m anything but worried about Silverlight going away, especially considering that the release of the WinRT framework takes all those skills I learnt during that time and transitions them into the next generation of Windows platforms. In fact I’d say investing in Silverlight was one of the best decisions at the time as not only did I learn XAML (which powers WPF and WinRT applications) but I also did extensive web programming, something I had barely touched before.
Rumours started circulating recently saying that Microsoft has no plans to develop another version of the Silverlight plugin past the soon to be released version 5. This hasn’t been confirmed or denied by Microsoft yet, but several articles citing sources familiar with the matter say the rumour is true and that Silverlight will receive no attention past this final iteration. This has of course spurred further outrage at Microsoft for killing off technologies that developers have heavily invested in, and whilst in the past I’ve been sympathetic to them, this time around I don’t believe they have a leg to stand on.
All of Microsoft’s platforms are so heavily intertwined with each other that it’s really hard to be just a Silverlight/WPF/ASP.NET/MFC developer without a lot of crossover into other technologies. Hell, apart from the rudimentary stuff I learnt whilst in university I was able to teach myself all of those technologies in the space of a week or two without many hassles. Compare that with my month long struggle to learn basic Objective-C (which took me a good couple of months afterwards to get proficient in) and you can see why I think that any developer whining about Silverlight going away is being incredibly short sighted or just straight up lazy.
In the greater world of IT you’re doomed to fade into irrelevance if you don’t keep pace with the latest technologies and developers are no exception to this. Whilst I can understand the frustration in losing the platform you may have patronized for the past 4 years I can’t sympathize with an unwillingness to adapt to a changing market. The Windows platform is by far one of the most developer friendly and the skills you learn in any Microsoft technology will flow onto other Microsoft products, especially if you’re proficient in any C based language. So whilst Microsoft might not see a future with Silverlight that doesn’t mean the developers are left high and dry, in fact they’re probably in the best position to innovate out of this situation.
However it has come to my attention that Microsoft has been hinting at a potential panacea for all these woes for quite some time now.
Back in January there were many rumours circulating about the new features we could look forward to in Windows 8. Like any speculation on upcoming products there are usually a couple of facts amongst the rumour mill, usually from those who are familiar with the project. Two such features which got some air time were Mosh and Jupiter, two interesting ideas that at the time were easily written off as either speculation or things that would never eventuate. However Mosh, rumoured at the time to be a “tile based interface”, turned out to be the feature which caused the developer uproar just a couple of months ago. Indeed the speculation was pretty much spot on since it’s basically the tablet interface for Windows 8, but it also has a lot of potential for nettops and netbooks since the full Windows 8 experience is still available underneath.
The Jupiter rumour can then be taken a little more seriously, though I can see why many people passed it over back at the start of this year. In essence Jupiter just looked like yet another technology platform from Microsoft, just like Windows Presentation Foundation and Silverlight before it. Some did recognize it as having the potential to be the bridge for Windows 8 onto tablets, which again shoehorned it into being just another platform. However some speculated that Jupiter could be much more than that, going as far as to say it could be the first step towards a unified development platform across the PC, tablet and mobile phone space. If Microsoft could pull that kind of stunt off they’d not only have one of the most desirable platforms for developers, they’d also be taking a huge step towards realizing their Three Screens philosophy.
I’ll be honest and say that up until yesterday I had no idea that Jupiter existed, so it doesn’t surprise me that many of the outraged developers wouldn’t have known about it either. However yesterday I caught wind of an article from TechCrunch that laid bare all the details of what Jupiter could be:
- It is a new user interface library for Windows. (source)
- It is an XAML-based framework. (source)
- It is not Silverlight or WPF, but will be compatible with that code. (source)
- Developers will write immersive applications in XAML/C#/VB/C++ (source, source, source, source)
- It will use IE 10’s rendering engine. (source)
- DirectUI (which draws the visual elements on the screen, arrived in Windows Vista) is being overhauled to support the XAML applications. (source, source)
- It will provide access to Windows 8 elements (sensors, networking, etc.) via a managed XAML library. (source)
- Jupiter apps will be packaged as AppX application types that could be common to both Windows 8 and Windows Phone 8. (source, source, source, source)
- The AppX format is universal, and can be used to deploy native Win32 apps, framework-based apps (Silverlight, WPF), Web apps, and games (source)
- Jupiter is supposed to make all the developers happy, whether .NET (i.e., re-use XAML skills), VB, old-school C++ or Silverlight/WPF. (Source? See all the above!)…
Why does Jupiter matter so much? If it’s not clear from the technical details above, it’s because Jupiter may end up being the “one framework” to rule them all. That means it might be possible to port the thousands of Windows Phone apps already written with Silverlight to Windows 8 simply by reusing existing code and making small tweaks. Or maybe even no tweaks. (That part is still unclear). If so, this would be a technical advantage for developers building for Windows Phone 8 (code-named “Apollo” by the way, the son of “Jupiter”) or Windows 8.
In a nutshell it looks like Microsoft is looking to unify all of the platforms that run Windows under the Jupiter banner, enabling developers to port applications between them without having to undergo massive reworks of their code. Of course the UI would probably need to be redone for each target platform, but since the same design tools will work regardless of the platform the redesigns will be far less painful than they currently are. The best part about Jupiter, though, is that it leverages current developer skill sets, enabling anyone with experience on the Windows platform to code against the new framework.
Jupiter then represents a fundamental shift in the Windows developer ecosystem, one that’s for the better of everyone involved.
We’ll have to wait until BUILD in September to find out the official word from Microsoft on what Jupiter will actually end up being, but there’s a lot of evidence mounting that it will be the framework to use when building applications for Microsoft’s systems. Microsoft has a proven track record of creating some of the best developer tools around and that, coupled with the potential to have one code base to rule them all, could make all of Microsoft’s platforms extremely attractive for developers. Whether this will translate into success for Microsoft on the smartphone and tablet space remains to be seen, but they’ll definitely be giving Apple and Google a run for their developers.