Monthly Archives: September 2011

Tiangong 1 Launches, China Aiming For a More Collaborative Future.

China really has come out of nowhere in the past decade in terms of space capability. 2003 saw them launch their first taikonaut into space aboard Shenzhou 5 and they quickly followed that up 2 years later with another manned orbital mission that lasted 5 days. Just 3 years after that China completed their first extravehicular activity (EVA), showing that their pace of development rivalled that of the nations that had gone before them. Sure they might have bought some of their technology from Russia but they’ve improved nearly every aspect of said technology, making it far more capable than it ever was.

Apart from Russia, other spacefaring countries have been somewhat apprehensive about cooperating with the fledgling space nation. The general sentiment is that the established players have nothing to gain and would only be helping a potential rival (which is ludicrous, considering the improvements China made to all the Russian tech they bought). This has extended as far as the International Space Station, which has never hosted a single Chinese national, leaving China on their own when it comes to developing space technologies. To that end China just today launched their very own space station, Tiangong 1:

China launched their first space station module into orbit today (Sept. 29), marking a major milestone in the rapidly expanding Chinese space program. The historic liftoff of the man-rated Tiangong 1 (Heavenly Palace 1) space lab on a Long March 2F rocket took place at 9:16 p.m. local time (9:16 a.m. EDT) from the Jiuquan Satellite Launch Center located in Gansu province in northwest China and is an impressive advance for China.

The beautiful nighttime liftoff occurred exactly on time and was carried live on China’s CCTV and on the internet for all to see. Chinese President Hu Jintao and many of China’s other top government leaders witnessed the launch from the launch control center as a gesture of confidence and support. Their presence was a clear sign of just how important China’s top leadership considers investments in research as a major driver of technological innovation that is bolstering China’s vigorously growing economy and employing tens of thousands of people.


As a space station Tiangong 1 is a diminutive craft, having only 15m³ of pressurized volume. Within that space though it has sleeping quarters for a crew of 3 and exercise equipment. The life support systems are capable of hosting a crew for missions up to 40 days in length, although that capability won’t be tested for a while. The next Shenzhou mission will visit Tiangong 1, however it won’t be manned as it will just be a docking test flight. The following 2 missions will bring crews aboard the space station and they’ll remain in orbit for longer durations each time. After those missions Tiangong 1 will be de-orbited in preparation for the next Tiangong station.

The way China is progressing their technology is distinctly Russian in its origins. From 1971 to 1982 the Soviet Salyut program (which formed the basis for Mir and the ISS) used a similar method for testing equipment and expanding capabilities. During that program a total of 9 Salyut space stations were launched, most of them visited by crews and then de-orbited at the end of their lives. It’s a distinct difference from the American way of doing things, which is to launch a much larger craft and keep it up there for as long as possible, à la Skylab. Adopting the Russian style of envelope pushing means China can iterate on their designs faster and improve their technology more quickly, which they’ve shown they’re quite capable of doing.

For the launch the International Astronomical Union presented taikonaut Zhai Zhigang with 300 flags that had previously flown on a Russian Soyuz as well as the last space shuttle mission. It might seem like a small gesture but it’s an indication that the world is starting to take China’s endeavours in space seriously and will hopefully begin to include them in cooperative efforts. China has proved they’re quite a capable nation technologically and ignoring them would be doing us a major disservice.

The future of human space exploration is looking increasingly bright and China’s success with Tiangong 1 is just another sign of this. Hopefully their success spurs on the space superpowers of old to start innovating faster than they currently are, as nothing gets people excited about space more than giants battling it out for technological supremacy. It’s quite likely though that the real competition will come from private industry, and that’ll be quite a show to watch.

 

Kindle Fire Home Screen

Kindle Fire: Amazon’s Not Playing Apple’s Game.

Whilst Android has been making solid inroads into the tablet market, snapping up a respectable 26.8%, it’s still really Apple’s market, with them holding a commanding lead that no one’s been able to come close to touching. It’s not for a lack of trying though, with many big name companies attempting to break into the market only to pull out shortly afterwards, sometimes in a blaze of fire sale glory. It doesn’t help matters much that every new tablet is compared to the iPad, ensuring each attempts to one-up it in some way, usually matching the iPad on price but without the massive catalogue of apps that people have come to expect from Apple products.

Apple’s got a great game going here. All of their iDevice range essentially made the market that they’re in, grabbing enough fans and early adopters to ensure their market dominance for years to come. Competitors then try to mimic Apple’s success by copying the essential ideas and attempting to innovate on top of them, fighting an uphill battle all the way. Whilst Apple might eventually lose ground to the massive onslaught of competitors (like they have to Android) they’ll still be one of the top individual companies, if they’re not number 1. It’s this kind of market leading that makes Apple products so desirable to John Q. Public and the reason why so many companies are failing to steal their market share away.

Rumours have been circulating for a while now over Amazon releasing a low cost tablet of some description and of course everyone was wondering whether it would shape up to be the next “iPad killer”. Today we saw the announcement of the Kindle Fire: a 7-inch multi-touch tablet that’s heavily integrated with Amazon’s services and comes at the low low price of only $199.

As a tablet it’s something of an outsider, foregoing the traditional 9 to 10 inch screen size for a smaller 7 inch display. The processor in it isn’t anything fantastic, being just a step up from the one that powers the Nook Color, but history has shown it’s quite a capable system so the Kindle Fire shouldn’t be a slouch when it comes to performance. There’s also a distinct lack of cameras, 3G and Bluetooth, meaning that the sole connection this tablet has to the outside world will be via your local wifi. It comes with 8GB of internal storage that’s not upgradeable, favouring storing everything in the cloud and downloading it as required. You can see why that approach wouldn’t work over WhisperNet.

Also absent is any indication that the Kindle Fire is actually an Android device, with the operating system having been given a total overhaul. Google’s app store has been outright replaced by Amazon’s Android app store and the familiar Android home screen has been replaced by a custom UI designed by Amazon. All of Amazon’s services (music, books and movies to name a few) are heavily integrated with the device. Indeed they are so heavily integrated that the tablet also comes with a free month of Amazon Prime, Amazon’s premium service that offers unlimited free 2 day shipping plus access to their streaming media catalogue. At this point calling this thing a tablet seems like a misnomer; it’s much more of a media consumption device.

What’s really intriguing about the Kindle Fire though is the browser that Amazon has developed for it, called Silk. Like Opera Mini and Skyfire before it, Silk offloads some of the heavy lifting to external servers, namely Amazon’s massive AWS infrastructure. There’s some smarts in the delineation between what should be processed on the device and what should be done on the servers, so hopefully dynamic pages, which have suffered heavily under this kind of configuration in the past, will run a lot better under Silk. Overall it sounds like a massive step up for the usability of the browser on devices like these, which is sure to be a great selling point for the Kindle Fire.
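Amazon hasn’t published the rules Silk actually uses to split that work, but a toy sketch of the kind of heuristic a split browser might apply looks something like the following; the resource attributes and categories here are purely illustrative assumptions on my part:

```python
# Hypothetical split-browser heuristic: heavy, cacheable assets get fetched and
# pre-processed in the cloud, while session-specific or highly dynamic content
# stays on the device (the kind of content older proxy browsers handled poorly).
CACHEABLE_TYPES = {"image/jpeg", "image/png", "text/css", "application/javascript"}

def render_location(resource):
    """Return 'cloud' or 'device' for a resource described by a dict."""
    if resource.get("requires_session"):        # logins, shopping carts, etc.
        return "device"
    if resource.get("generated_dynamically"):   # AJAX-heavy, per-user pages
        return "device"
    if resource.get("content_type") in CACHEABLE_TYPES:
        return "cloud"                          # offload big static assets
    return "cloud"

print(render_location({"content_type": "image/jpeg"}))    # cloud
print(render_location({"generated_dynamically": True}))   # device
```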

The more I read about the Kindle Fire the more I get the feeling that Amazon has seen the game that Apple has been playing and decided not to get caught up in it like their competitors have. Instead of competing directly with the iPad et al. they’ve created a device that’s heavily integrated with their own services and have put themselves at arm’s length from Android. John Q. Public then won’t see the Kindle Fire as an Android tablet nor as an iPad competitor; rather it’s a cheap media consumption device from a large and reputable company that’s capable of doing other tasks too. The price alone is enough to draw people in and whilst the margins on the device are probably razor thin they’ll more than likely make it up in media sales for the device. All those together make the Kindle Fire a force to be reckoned with, but I don’t think current tablet manufacturers have much to worry about.

The Kindle Fire, much like the iPad before it, carves out its own little niche, one that others have so far failed to fill. It’s not a feature laden object of every geek’s affection; rather it’s a tablet designed for the masses with a price that competitors will find hard to beat. The deep integration with Amazon’s services will be the feature that ensures the Kindle Fire’s success as that’s what every other iPad competitor has lacked. However there’ll still be a market for the larger, more capable tablets as they’re more appropriate for people seeking a replacement for their laptop rather than a beefed up media player. I probably won’t be buying one for myself, but I could easily see my parents using one of these.

And I’m sure that’s what Amazon is banking on too.

Why Dost Thou Charge So Much, Steam?

I’m not usually one to complain about the prices of games since I’m usually one of the chumps buying the collector’s edition, often at a rather hefty premium. I don’t mind paying extra though as that’s just how I roll and those extra geeky goodies are part of the experience of getting a new game. Still, sometimes games forego a collector’s edition (like nearly every indie title) so I’ll usually just grab the game from Steam since I can get the download for free thanks to Internode hosting a Steam content server. However there’s been a rather worrying trend for games on Steam to be priced way above what they are elsewhere, enough to stop me in my tracks when purchasing some games.

Long time readers will remember that in my Call of Duty: Black Ops review I stated openly that I simply refused to play many Call of Duty games on release day because the price was just bonkers. It’s made even worse by said games being released at sane prices only to have them hiked shortly afterwards, leaving customers who didn’t get in early with the choice of coughing up the cash or going without. For me I went without for a long time, only grabbing a copy once it dropped below my pain threshold for Steam games. Recently however a friend of mine showed me something that’s changed the way I look at games on Steam, but it still leaves the question of price discrepancy unanswered.

The service I’m referring to is a website called G2Play, an online store that mostly sells CD/Steam keys and digital only downloads. I had known about sites like this in the past (Play Asia being another friend favourite) but my trust in them was low since I’d never used them before. However the prices there are simply astonishing, with most games being available at very heavy discounts. Figuring that all I had to lose was $37 and possibly a couple hours of my time I ordered a copy of Warhammer 40000: Space Marine. Alarm bells went off when they asked for a copy of my photo ID but I decided that since my friend had used them successfully they couldn’t be all bad, plus it’s nigh on impossible to do much with a bad cell phone picture of my ID. Less than an hour later I had a code and, surprise surprise, it worked like a charm.

I’ve since bought a few more games, each one working flawlessly.

It seems then that the price discrepancy isn’t some hard and fast rule that Steam is keen on enforcing, otherwise they would just deny any codes purchased in this fashion. Even stranger is the fact that these prices are below what’s available in the Steam store in their respective regions, signalling that there’s another avenue for legitimately purchasing games below the retail price. Whilst this is true for almost any product (usually direct from the supplier/manufacturer) wholly digital products really don’t have those kinds of relationships since the marginal cost is practically zero for each new unit. Price discrepancies above a small percentage (to account for currency conversions and import taxes) for such products in the global market then seem to be nothing more than price gouging.

In doing some research for this post I tried to find some official word on why there were such wide price gaps between countries on Steam when ostensibly we’re all being sold the same product. To cut a long story short there isn’t anything official, at least where Steam is concerned. Kotaku Australia writer Mark Serrels did some solid research into why games were so expensive in Australia but failed to come up with a single reason, citing multiple different pressures that could be responsible for the discrepancy. Some of them apply to wholly digital items but the final point about the Internet bringing down prices doesn’t seem to have eventuated; in fact it’s been quite the opposite. Prices have remained quite steady, especially for retail box releases of big titles.

It really baffles me because Steam was the pioneer of pricing games to sell like hot cakes and that helped catapult them to being the top digital distribution platform. It’s true that us Australians have put up with higher game prices for as long as games have been for sale but the traditional barriers to distributing games really don’t exist any more, especially for digital downloads. Perhaps as more people become aware of services like G2Play, Steam pricing will become more sane, but I’m not holding my breath.

Asteroids, Moon and Mars: Humanity’s Next Step in Space Exploration.

To put it bluntly we’ve been spinning our wheels in terms of human space exploration. It was over 40 years ago that we first placed one of our own on the moon and in the time since we’ve tentatively sent out our robotic companions to do the exploring for us, while we ourselves have stayed in the relative safety of low earth orbit. There is no one entity that we can blame for this; rather it is a sign of the malaise that took over once the space race was won and there was no longer any political motivation to push the final frontier further. The last decade has seen a few ambitious plans put into motion in order to start pushing that envelope once again, but none of them will bear fruit for at least a decade.

Of course I’m not expecting that we’ll see another space race any time soon, we’re far too engaged in fixing economic problems right now for another pissing contest between superpowers. However that doesn’t mean that the groundwork can’t be done for a time when countries are ready to pursue space travel with renewed vigour and NASA is doing just that with their roadmap for space exploration:

Human and robotic exploration of the Moon, asteroids, and Mars will strengthen and enrich humanity’s future, bringing nations together in a common cause, revealing new knowledge, inspiring people, and stimulating technical and commercial innovation. As more nations undertake space exploration activities, they see the importance of partnering to achieve their objectives. Building on the historic flight of Yuri Gagarin on April 12, 1961, the first 50 years of human spaceflight have resulted in strong partnerships that have brought discoveries, innovations, and inspiration to all mankind. Discoveries we have made together have opened our eyes to the benefits of continuing to expand our reach.

NASA’s roadmap lays out 2 options for the future of manned missions beyond low earth orbit, with both of them converging on the ultimate goal of sending humans to Mars. The first, called “Asteroid Next”, would see our next target being a near earth asteroid, favouring the development of deep space technologies. The second, “Moon Next”, would see humanity return to our celestial sister and use it as a test bed for technologies that would enable humans to survive in Mars’ harsh climate. Both options are equally valid, but they are not without their drawbacks.

First let’s have a look at Asteroid Next. The most interesting part of this idea is the establishment of a Deep Space Habitat (DSH) at an Earth-Moon Lagrangian point. Now you might think that this is somewhat pointless when we have the International Space Station but establishing a base beyond the comforts of low earth orbit poses many significant challenges. The ISS as it stands doesn’t have the required shielding to protect its occupants past its current orbital altitude and a habitat at L1 or L2 would need a significant redesign. However such rework would form the basis of the module that would carry our explorers to Mars, as the requirements for a habitat and an interplanetary transport are nearly identical.

Having an outpost at the Lagrangian points also opens up nearly any destination within our solar system and could serve as an excellent staging point for future missions. The energy required to go from one of these points to anywhere in the solar system is quite minimal and well suited to high efficiency engines like ion thrusters. Having a presence out there would also make a perfect place to send up unmanned equipment prior to dispatching it to Mars or beyond.
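To get a feel for why high efficiency engines shine in that role, the rocket equation shows how little propellant a small departure burn demands; the 1 km/s Δv and the Isp figures below are illustrative assumptions rather than numbers from the roadmap:

```latex
\frac{m_0}{m_f} = e^{\Delta v / (I_{sp}\, g_0)}
\qquad
\text{chemical } (I_{sp} = 450\,\mathrm{s}):\; e^{1000/(450 \cdot 9.81)} \approx 1.25
\qquad
\text{ion } (I_{sp} = 3000\,\mathrm{s}):\; e^{1000/(3000 \cdot 9.81)} \approx 1.03
```

In other words a chemical stage would burn roughly 20% of its mass in propellant for that manoeuvre while an ion-propelled craft would use around 3%, which is why a staging point with such low departure costs pairs so well with electric propulsion.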

Asteroid Next however doesn’t make any mention of technology development for Mars settlement, meaning that the missions to Mars that followed would probably be short lived like their Apollo ancestors were. Asteroid Next then is very much like its predecessors in that regard, being a lot more like a one-shot event than something that would be repeatable for decades to come. This would see us push the boundaries much more aggressively (we could conceivably send a DSH to Mars by 2030) but at the risk of history repeating itself, with such missions remaining one-offs.

Moon Next then sees us forego advancing deep space technologies in favour of returning to the moon and establishing a base there. This delays work on deep space technologies in order to develop, test and deploy habitats and supporting infrastructure in a much harsher climate than what will be faced on Mars. Technologies like the Deep Space Habitat will still need to be developed as they are crucial for the journey to Mars, however Moon Next would see them arrive well over a decade later than under Asteroid Next. Moon Next would also see humanity’s base of operations be a small lunar colony rather than a base at a Lagrangian point, which is advantageous in terms of resources (if we can develop technology to harvest some of the Moon’s resources) but does require much more energy in order to launch missions from there.

Going to the Moon before Mars might seem like we’re just repeating what we’ve already done but establishing a base there would be highly advantageous to future missions, and not just exploration ones. There are many cases for radio telescopes on the far side of the moon (shielded from all the signals that currently pollute Earth) and there’s the very tantalizing prospect of constructing giant optical observatories that make use of the non-existent atmosphere and low gravity. However going for the Moon first means that a potential Mars shot will be delayed much longer than it would be if we pursued deep space technologies first.

After considering both options I believe our best bet is to go with the Moon Next option. If Mars was the only goal we had Asteroid Next would be the way to go but the potential benefits of a lunar base are just too good to pass up, even if it means not getting to Mars for another decade. Many of the technologies used in developing a lunar base will be transferable to both Mars missions as well as other deep space activities. It’s a tough choice for NASA though as the arguments are equally strong for supporting Asteroid Next and I’ll be watching the debate over these two ideas unfold with a keen interest.

Warhammer 40000 Space Marine Screenshot Wallpaper Titan

Warhammer 40000 Space Marine: Awesomely Epic Fun.

I’ll be honest, hack ‘n’ slash games aren’t really my forte. Sure I’ve played a couple in the past and enjoyed them (like Infinity Blade) but I was never able to get into the big titles like God of War, Bayonetta or Darksiders. I think it comes down to the (usually) rather thin plots and lack of hooks early on in the game that fail to grab my attention, making them rather easy to put down. Still on recommendations from my friends and family I purchased a copy of Warhammer 40K: Space Marine and was pleasantly surprised by how gripping this hack ‘n’ slash game was.

You play as Captain Titus of the Ultramarines, an elite super-human soldier who serves the Imperium of Man. One of the Imperium’s forge worlds, a planet dedicated solely to the manufacturing of the Imperium’s armaments, has come under attack from an Ork invasion. Titus is sent to the planet to delay the invasion for as long as possible until an Imperium fleet can arrive. Of course the Ork invasion isn’t the only thing out of the ordinary on this forge world, as Titus finds out as the game progresses.

Space Marine does an excellent job of incorporating the vast lore that exists within the Warhammer 40K world. Way back when, I was a big fan of nearly all of the Games Workshop games and I’m sure I’ve still got one of the boxed 40K sets sitting up in my parents’ attic somewhere. Right from the start you get the feeling that this particular story is just a sliver of the giant universe in which it is set. Thankfully the story isn’t told through hidden text dumps scattered around the place, with most of the important details being revealed in dialogue exchanges between the characters.

Relic has also done a fantastic job with the set pieces that you’ll come across during your adventures in Space Marine. All of the environments have a sense of epicness about them, from the wide open spaces that are filled with countless enemies to the underground tunnels that seem to go on forever. Yet again this reinforces the larger than life feeling that the game conveys, constantly reminding you that you’re but a small cog in the giant wheel of the Imperium of Man.

The graphics as well, whilst nothing spectacular, work quite well within the context. I was never good at painting my collection of miniatures but I always loved seeing the ones which people had done right. Space Marine evokes that same feeling as the models are extremely well done in true Warhammer style. This extends to all the additional things like the foley, camera work and use of slow motion that really round out that epic movie feeling. Overall the look and feel of Space Marine is just exquisite, but that would be for nothing if the game wasn’t fun to play.

Combat in Space Marine is meaty, fast paced and just plain fun. There are 2 distinct modes of combat that you’ll use extensively throughout the game. The first is 3rd person shooter style, your standard cover based affair. I’ll be honest and say that this was probably my least favourite aspect of the game as the shooter sections always felt like a distraction from the true base of a hack ‘n’ slash game: the melee combat. Still there’s a variety of weapons to choose from (usually placed in piles in front of you) and your choice will determine how easy or hard a particular section is, so there’s a definite bit of strategy in Space Marine that traditional hack ‘n’ slashers lack.

However the melee combat is really what makes Space Marine just so fun to play. Initially it’s somewhat of a chore as you’re only set up with a tiny combat knife but you’re quickly paired up with the iconic Space Marine weapon: the Chainsword. After that point it’s simply glorious as you carve your way through untold hordes of enemies. They also change it up a bit when they introduce two other weapons (the Power Axe and Thunder Hammer) which breaks up the monotony considerably. The fury bar also makes for some interesting moments as once it’s full you can unleash it, increasing your damage considerably and enabling you to regenerate health as you fight.

Of course this is all taken to a whole new level when you’re given a jet pack which allows you to rocket skyward and then charge back down to earth, devastating anyone who’s in your landing zone. These sections always felt way too short but they are by far the most fun sections in an already amazingly fun game. There’s not much strategy to it but anyone can find the fun in rocketing around the place whilst laying waste to legions of foes.

The multi-player in Space Marine is unfortunately a somewhat mixed affair. The core game play takes all the things you encountered in the single player and mixes them up into the now familiar persistent levelling multi-player experience. You start off with a few basic classes, weapons and perks available to you and as you level more of them are unlocked. The weapons, unlike other similar systems in say Call of Duty: Black Ops, can be somewhat game breaking in certain combinations. This is alleviated by the fact that you can copy an enemy’s loadout when they kill you (for 1 life only) but the balance only seems to kick in around the level 10~20 mark, which might be off-putting to some players.

The most unfortunate part about the multi-player in Space Marine is the lack of dedicated servers for hosting. This means that you have no choice over who you’re paired up with and all it takes is one player on the other side of the world to start making the game laggy. The first game that I played was completely unplayable, with me and my fellow LANers being matched to people that were nowhere near us at all. Changing this up to just us (plus a few other locals) alleviated the lag completely, but getting this in a public game seems to be nigh on impossible.

I played some more multi-player last night just to see if the issues were still occurring and whilst it was nowhere near as bad as it originally was there were still several occasions when it would start lagging considerably or delay the game for 10 seconds whilst it waited for the current host to catch up. Talking this over with one of my mates who was a long time player of Dawn of War (another Warhammer based Relic game), this should have come as no surprise as Relic has a history of atrocious netcode in nearly all of their games. Honestly, when all the big names quickly gave up on the idea of serverless multi-player after one iteration you have to wonder why Relic went down this path, as it basically ruins what could be an extremely fun and captivating multi-player experience.

The game itself stands well enough alone though that a bad multi-player experience really can’t detract from the sheer enjoyment I had during my time with Warhammer 40000: Space Marine. The settings are amazing, the lore deep and thoroughly engrossing and the characters believable and aptly voice acted. Space Marine hits all the right buttons and there was never a time when I found myself wanting to put the game down out of frustration; instead I felt myself improve gradually as I mastered all the aspects that the game presented to me. Fans of the Warhammer lore are sure to enjoy this faithful experience and for regular gamers there’s enough action and thrills to keep you interested right up until the final crescendo.

Rating: 8.7/10

Warhammer 40000: Space Marine is available on PC, Xbox 360 and PlayStation 3 right now for $88, $108 and $108 respectively. The game was played on the hardest difficulty with around 8 hours in the single player and 3 hours in multi-player.

PC Console Games Sales 2014 Predictions

PC Gaming: Retaking The Crown?

Make no mistake, in the world of gaming PCs are far from being the top platform. The reasoning behind this is simple: consoles are easier and have a much longer life than your traditional PC, making them a far more attractive platform for gamers and developers alike. This has led to the consolization of the PC games market, ensuring that many games are developed primarily for the console first and the PC becomes something of a second class citizen, which did have some benefits (however limited they might be). The platform is far from forgotten however, with it still managing to capture a very respectable share of the games market and still remaining the platform of choice for many eSports titles.

The PC games market has been no slouch, with digital sales powering the market to all time highs. Despite that the PC still remains a relative niche compared to other platforms, routinely seeing market share in the single digit percentages. There were signs that it was growing but it still seemed like the PC was to be forever relegated to the back seat. There’s speculation however that the PC is looking to make a comeback and could possibly even dominate consoles by 2014:

As of 2008, boxed copies of games had paltry sales compared to digital sales, and nothing at all looks to change. During 2011, nearly $15 billion is going to be attributed to digital sales while $2.5 billion belong to boxed copies. This is a trend I have to admit I am not surprised by. I’ll never purchase another boxed copy if I can help it.

The death of PC gaming has long been a mocking-point of console gamers, but recent trends show that the PC has nothing to stress over. One such trend is free-to-play, where games are inherently free, but support paid-services such as purchasing in-game items. This has proven wildly successful, and has even caused the odd MMORPG to get rid of it subscription fee. It’s also caused a lot of games to be developed with the F2P mechanic decided from the get-go.

The research comes out of DFC Intelligence and NVIDIA is the one who’s been spruiking it as the renaissance of PC gaming. The past couple years do show a trend for PC games sales to continue growing despite console dominance but the prediction gets a little hairy when it forecasts a decline in console sales next year, something there doesn’t seem to be any evidence for. The growth in PC sales is also strikingly linear, leading me to believe that it’s heavily speculation based. Still it’s an interesting notion to toy with, so let’s have a look at what could (and could not) be driving these predictions.

For starters the data does not include mobile platforms like smart phones and tablets which for the sake of comparison is good as they’re not really on the same level as consoles or PCs. Sure they’ve also seen explosive growth in the past couple years but it’s still a nascent platform for gaming and drawing conclusions based on the small amounts of data available would give you wildly different results based purely on your interpretation.

A big driver behind these numbers would be the surge in the number of free to play, micro-transaction based games that have been entering the market. Players of these types of games will often spend over and above what they would on a similar game with a one off cost. As time goes on there will be more of these kinds of titles appealing to a wider gamer audience, thereby increasing the revenue of PC games considerably. Long time gamers like me might not like having to fork out for parts of the game but you’d be hard pressed to argue that it isn’t a successful business model.

Another factor could be that the current console generation is getting somewhat long in the tooth. The Xbox 360 and PlayStation 3 were both launched some 5 to 6 years ago and whilst the hardware has performed admirably in the past the disparity between what PCs and consoles are capable of is hard to ignore. With neither Microsoft nor Sony mentioning any details of their successors to the current generation (nor whether they’re actually working on them) this could see some gamers abandon their consoles for the more capable PC platform. Considering even your run of the mill PC is now capable of playing games beyond the console level it wouldn’t be surprising to see gamers make the change.

What sales figures don’t tell us however is what the platform of choice will be for developers to release on. Whilst the PC industry as a whole might be more profitable than consoles that doesn’t necessarily mean it will be more profitable for everyone. Indeed titles like Call of Duty and Battlefield have found their homes firmly on the console market with PCs being the niche. The opposite is true for many of the online free to play games that have yet to make a successful transition onto the console platform. It’s quite possible that these sales figures will just mean an increase in a particular section of the PC market while the rest remain the same.

Honestly though I don’t think it really matters either way as game developers have now shown that it’s entirely possible to have a multi-platform release that doesn’t make any compromises. Consolization then will just be a blip in the long history of gaming, a relic of the past that we won’t see repeated again. The dominant platform of the day will come and go as it has done throughout the history of gaming but what really matters is the experience which each of them can provide. As it’s looking right now all of them are equally capable when placed in the hands of good developers and whilst these sales projections predict the return of the PC as the king platform, in the end it’ll be nothing more than bragging rights for us long time gamers.

Was That Really Necessary, Sony?

Whilst I might be an unapologetic Sony fan boy even I can’t hide from their rather troubled past when it comes to customer relations. Of course everyone will remember their latest security incident which saw millions of PSN accounts breached but they’ve also had other fun incidents involving auto-installing rootkits as copy protection and suing people into silence. Of course every corporation has its share of missteps but Sony seems to have somewhat of a habit of getting themselves into hot water on a semi-regular basis with their actions. This week brings us another chapter in the saga that is the people vs the Sony corporation, but it’s not as bad as it first seems.

Last week saw Sony update their PSN agreement which happens with nearly every system update that the PlayStation 3 receives. However this time around there was a particular clause that wasn’t in there previously, specifically one that could prevent class action lawsuits:

Sony has been hit with a number of class-action lawsuits since the launch of the PlayStation 3, mostly due to the decision to retroactively remove Linux support from the console and losing the data of users due to questionable security practices. Sony has another solution to this problem beyond beefing up security (and it’s not retaining the features you paid for): if you accept the next mandatory system update, you sign away your ability to take part in a class-action lawsuit. The only option left for consumers if they agree is binding individual arbitration.

ANY DISPUTE RESOLUTION PROCEEDINGS, WHETHER IN ARBITRATION OR COURT, WILL BE CONDUCTED ONLY ON AN INDIVIDUAL BASIS AND NOT IN A CLASS OR REPRESENTATIVE ACTION OR AS A NAMED OR UNNAMED MEMBER IN A CLASS, CONSOLIDATED, REPRESENTATIVE OR PRIVATE ATTORNEY GENERAL LEGAL ACTION, UNLESS BOTH YOU AND THE SONY ENTITY WITH WHICH YOU HAVE A DISPUTE SPECIFICALLY AGREE TO DO SO IN WRITING FOLLOWING INITIATION OF THE ARBITRATION. THIS PROVISION DOES NOT PRECLUDE YOUR PARTICIPATION AS A MEMBER IN A CLASS ACTION FILED ON OR BEFORE AUGUST 20, 2011.

Accompanying that section is a clause that allows you to opt out of it, but you have to send a snail mail letter to what I assume is Sony’s legal department in Los Angeles. On the surface this appears to rule out any further class action suits that Sony might face in the future, at least in the majority of cases where people simply click through without reading the fine print. Digging through a couple of articles (and one insightful Hacker News poster) on it however, I don’t think that this is all it’s cracked up to be; in fact it might have been wholly unnecessary for Sony to add it in the first place.

The clause explicitly excludes small claims, which can be up to thousands of dollars. Now I’ve never been involved in any class action suits myself but the ones I’ve watched unfold online usually end up with all affected parties receiving extremely small payoffs, on the order of tens or hundreds of dollars. If you take the Sony hacking case as an example, the typical out of pocket expenditure for a victim of identity theft is approximately $422 (in 2006), much lower than the threshold for small claims. Considering that Sony already provided identity fraud insurance for everyone affected by the PSN hack it seems like a moot point anyway.

Indeed the arbitration clause seems to be neither here nor there for Sony either, with the new clause binding both parties to the arbitrator’s decision, rendering them unable to contest it in a higher court. The arbitration can also occur anywhere in the USA so that people won’t have to travel to Sony in order to have their case heard. The clause also doesn’t affect residents of Europe or Australia, further limiting its reach. All in all it seems to tackle a very narrow band of potential cases, enough so that it barely seems necessary for Sony to have put it in at all.

Honestly I feel that, given their track record, Sony has to be extremely careful with anything they do that could be construed as being against their consumers. The arbitration clause, whilst looking a lot like a storm in a teacup, just adds fuel to the ever burning flamewar that revolves around Sony being out to screw everyone over. Hopefully they take this as a cue to rework their PR strategies so that these kinds of incidents can be avoided in the future, as I don’t think their public image can take many more beatings like this.

Website Performance (or People are Impatient).

Way back when I used to host this site myself on the end of my tenuous ADSL connection, loading up the web site always felt like something of a gamble. There were any number of things that could stop me (and the wider world) from getting to it: the connection going down, my server box overheating or even the power going out at my house (which happened more often than I realised). About a year ago I made the move onto my virtual private server and instantly all those worries evaporated and the blog has been mostly stable ever since. I no longer have to hold my breath every time I type my URL into the address bar, nor do I worry about posting media rich articles anymore, something I avoided when my upstream was a mere 100KB/s.

What really impressed me though was the almost instant traffic boost that I got from the move. At the time I just put it down to more people reading my writing as I had been at it for well over a year and a half at that point. At the same time I had also made a slight blunder with my DNS settings which redirected all traffic from my subdomains to the main site so I figured that the burst in traffic was temporary and would drop off as people’s DNS caches expired. The strangest thing was though that the traffic never went away and continued to grow steadily. Not wanting to question my new found popularity I just kept doing what I was always doing until I stumbled across something that showed me what was happening.

April last year saw Google mix a new metric into their ranking algorithm, page load speed, right around the same time that I experienced the traffic boost from moving off my crappy self hosting and onto the VPS. The move had made a significant improvement in the usability of the site, mostly due to the giant pipe that it has, and it appeared that Google was now picking up on that and sending more people my way. However the percentage of traffic coming here from search engines remained the same, and since overall traffic was growing I didn’t care to investigate much further.

I started to notice some curious trends though when aggregating data from a couple of different sources. I use 2 different analytics services here on The Refined Geek: WordPress.com Stats (just because it’s real-time) and Google Analytics for long term tracking and pretty graphs. Now both of them agree with each other pretty well, however the one thing they can’t track is how many people come to my site but leave before the page is fully loaded. In fact I don’t think there’s any particular service that can do this (I would love to be corrected on this) but if you’re using Google’s Webmaster Tools you can get a rough idea of the number of people that come from their search engine but get fed up waiting for your site to load. You can do this by checking the number of clicks you get from search queries and comparing that to the number of people visiting your site from Google in Google Analytics. This will give you a good impression of how many people abandon your site because it’s running too slow.
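As a rough illustration, here’s a small sketch of that comparison assuming you’ve exported daily clicks from Webmaster Tools and daily Google organic visits from Analytics into two simple date,count CSVs; the file names and column layout here are my own assumptions, not what either tool spits out by default:

```python
# Compare search clicks against recorded visits to estimate how many people
# gave up before the page finished loading.
import csv

def load_counts(path):
    # Expects a CSV with "date" and "count" columns (assumed export format).
    with open(path, newline="") as f:
        return {row["date"]: int(row["count"]) for row in csv.DictReader(f)}

clicks = load_counts("webmaster_tools_clicks.csv")    # clicked the search result
visits = load_counts("analytics_google_visits.csv")   # stuck around for the page to load

for date in sorted(clicks):
    c, v = clicks[date], visits.get(date, 0)
    if c:
        lost = (c - v) / c * 100
        print(f"{date}: {c} clicks, {v} visits, ~{lost:.0f}% abandoned before load")
```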

For this site the results are quite surprising. On average I lose about 20% of my visitors between them clicking on the link in Google and actually loading a page¹. I shudder to think how many I was losing back in the days where a page would take 10+ seconds to load but I’d hazard a guess it was roughly double that if I take into account the traffic boost I got after moving to a dedicated provider. Getting your site running fast then is probably one of the most important things you can do if you’re looking to get anywhere on the Internets, at least that’s what my data is telling me.

After I realised this I’ve been on a bit of a performance binge, trying anything and everything to get it running better. I’m still in the process of doing so however and many of the tricks that people talk about for WordPress don’t translate well into the Windows world so I’m basically hacking my way through it. I’ve dedicated part of my weekend to this and I’ll hopefully write up the results next week so that you other crazy Windows based WordPressers can benefit from my tinkering.

¹If people are interested in finding out this kind of data from their Google Analytics/Webmasters Tools account let me know and I might run up a script to do the comparison for you.

 

Google+ API is Here, But is it Enough?

Google+ has only been around for a mere 2 months yet I already feel like writing about it is old hat. In the short time that the social networking service has been around it’s had a positive debut to the early adopter market, seen wild user growth and even had to tackle some hard issues like their user name policy and user engagement. I said very early on that Google had a major battle on their hands when they decided to launch another volley at another Silicon Valley giant but early indicators were pointing towards them being a highly successful niche product at the very least, if only for the fact that they were simply “Facebook that wasn’t Facebook“.

One of the things that was always lacking from the service was an API on the same level as its competitors’. Facebook and Twitter both have exceptional APIs that allow services to deeply integrate with them and, at least in the case of Twitter, are responsible in large part for their success. Google was adamant that an API was on the way and just under a week ago they delivered on their promise, releasing an API for Google+:

Developers have been waiting since late June for Google to release their API to the public.  Well, today is that Day.  Just a few minute ago Chris Chabot, from Google+ Developer Relations, announced that the Google+ API is now available to the public. The potential for this is huge, and will likely set Google+ on a more direct path towards social networking greatness. We should see an explosion of new applications and websites emerge in the Google+ community as developers innovate, and make useful tools from the available API. The Google+ API at present provides read-only access to public data posted on Google+ and most of the Google+ API follows a RESTful API design, which means that you must use standard HTTP techniques to get and manipulate resources.

Like all their APIs the Google+ one is very well documented and even the majority of their client libraries have been updated to include the new API. Looking over the documentation it appears that there’s really only 2 bits of information available to developers at this point in time: public profiles (People) and activities that are public. Supporting these APIs is the OAuth framework, allowing users to authorize external applications to access their data on Google+. In essence this is a read only API for things that were already publicly accessible, which really only serves to eliminate the need to screen scrape the same data.
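For a sense of what that looks like in practice, here’s a minimal sketch of pulling a public profile and its public activities over the plain REST endpoints; the endpoint paths and the use of a simple API key reflect the launch documentation as I understand it, and the key and profile ID are placeholders, so treat the whole thing as an assumption to check against the official docs:

```python
import json
import urllib.parse
import urllib.request

API_KEY = "YOUR_API_KEY"            # from the Google APIs console (placeholder)
USER_ID = "112233445566778899000"   # a public Google+ profile ID (placeholder)
BASE = "https://www.googleapis.com/plus/v1"

def get(path, **params):
    # Public data only needs an API key; OAuth comes in when an app wants
    # to act on behalf of a specific user.
    params["key"] = API_KEY
    url = f"{BASE}{path}?{urllib.parse.urlencode(params)}"
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

profile = get(f"/people/{USER_ID}")                        # people.get
activities = get(f"/people/{USER_ID}/activities/public")   # activities.list

print(profile.get("displayName"))
for item in activities.get("items", []):
    print("-", item.get("title"))
```

Note that everything here is a read: there’s no call you can make to post to a stream, which is exactly the gap I talk about below.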

I’ll be honest, I’m disappointed in this API. Whilst there are some useful things you can do with this data (like syndicating Google+ posts to other services and reader clients) the things that I believe Google+ would be great at doing aren’t possible until applications can be given write access to my stream. Now this might just be my particular use case since I usually use Twitter for my brief broadcasts (which are auto-syndicated to Facebook) and this blog for longer prose (which is auto shared to Twitter), so my preferred method of integration would be to have Twitter post stuff to my Google+ feed. As it is right now my Google+ account is a ghost town compared to my other social networks, simply because of the lack of automated syndication.

Of course I understand that this isn’t the final API, but even as a first attempt it feels a little weak.

Whilst I won’t go as far as to say that Google+ is dying there is data to suggest that the early adopter buzz is starting to wind down. Anecdotally my feed seems to mirror this trend, with the average time between posts being days rather than the minutes it is on my other social networks. The API would be the catalyst required to bring that activity back up to those initial levels but I don’t think it’s capable of doing so in its current form. I’m sure that Google won’t be a slouch when it comes to releasing new APIs but they’re going to have to be quick about it if they want to stem the tide of inactivity.

I really want to use Google+, I really do, it’s just the lack of interoperability that keeps all my data out of it. I’m sure in the next couple of months we’ll see the release of a more complete API that will enable me to use the service as I, and many others I feel, use our other social networking services.

WinRT

Windows 8 and WinRT: On the Cusp of Platform Unification.

Last week saw the much talked about Microsoft BUILD conference take place, the one for which all us developers tentatively held our breath wondering what the future of the Microsoft platform would be. Since then there’s been a veritable deluge of information coming out of the conference and I unfortunately didn’t get the time to cover it last week (thanks mostly to my jet setting ways). Still, not writing about it right away has given me some time to digest the flood of information and speculation that this conference has brought us and I personally believe that Windows 8 is nothing but good news for developers, even those who thought it would lead to the death of their ecosystem.

For starters the project codenamed Jupiter has an official name, the Windows Runtime (WinRT), and looks to be an outright replacement for the Win32 API that’s been around since 1993. The big shift here is that whilst Win32 was designed for a world of C programmers, WinRT will instead be far more object-oriented, aimed more directly at the C++ world. WinRT applications will also use the XAML framework for their user interfaces and, for C++ developers at least, will compile down to native code rather than to .NET bytecode. WinRT applications also do away with the idea of dialog boxes, removing the notion of modal applications completely (at least in the native API). This, coupled with the rule that any API that could take longer than 50ms to respond must be asynchronous, means that Metro apps are inherently more responsive, something that current x86 desktop apps can’t guarantee. Additionally, should an app be designed for the Metro styled interface it must only use the WinRT libraries for that interface; you can’t have mixed Metro/Classic applications.
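The exact WinRT projections differ by language, but the responsiveness argument is easiest to see with a generic async sketch; this is plain Python asyncio standing in for the pattern, not actual WinRT code:

```python
import asyncio

async def fetch_document():
    # Stand-in for a potentially slow (>50ms) operation; under WinRT this
    # would be an awaitable API such as a file read or network call.
    await asyncio.sleep(0.2)
    return "document contents"

async def ui_loop():
    # Kick the slow work off without blocking...
    task = asyncio.create_task(fetch_document())
    # ...so the "UI" keeps responding while it completes instead of freezing,
    # which is what a synchronous call on the UI thread would do.
    while not task.done():
        print("UI still responsive...")
        await asyncio.sleep(0.05)
    print("Loaded:", task.result())

asyncio.run(ui_loop())
```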

If you’re after an in-depth breakdown of what WinRT means for developers Miguel de Icaza (of Mono fame) has a great breakdown here.

WinRT will also not be a universal platform that provides backwards compatibility for all current Windows applications. It’s long been known that Windows 8 will be able to run on ARM processors but what wasn’t clear was whether or not current applications would be compatible with the flavour of Windows running on that architecture. As it turns out x86 applications won’t work on the ARM version of Windows, however applications written on the WinRT framework will run on every platform with only minor code changes (we’re talking single digit lines here). Those legacy applications will still run perfectly well in the Desktop mode that Windows 8 offers and they’ll be far from second class citizens, as Microsoft recognizes that things like their Office suite don’t translate well to the tablet environment.

At the same time Microsoft has also announced that the web browser in the Metro UI will not support any kind of plug-ins, including their very own Silverlight. Of course you’re always welcome to switch into desktop mode should you visit a website that requires a plug-in but Microsoft has said the aim is for everyone to transition away from plug-ins and onto the HTML5/JavaScript stack. On the surface this seems to verify the notion that Silverlight developers are screwed, as all their apps become second class citizens in the new Metro world. However since WinRT apps are developed in a very similar way to Silverlight apps the transition to the Metro platform will probably be nothing more than changing namespaces and tidying up the UI so it fits in with the new design. Distribution of said apps will then come via the Microsoft app store, rather than a company’s web server. Sure it’s a paradigm shift away from what they’re currently used to, but Silverlight developers will find themselves right at home with WinRT.

Taking this all into consideration it seems like there will be a line in the sand between what I’ll call “Full” Windows 8 users and “Metro” based users. Whilst initially I thought that Jupiter would mean any application (not just those developed on WinRT) would be able to run anywhere, it seems that only WinRT apps have that benefit, with current x86 apps relegated to desktop mode. That leads me to the conclusion that the full Windows 8 experience, including the Desktop, won’t be available to all users. In fact those running on ARM architecture more than likely won’t have access to the desktop at all, instead being limited to just the Metro UI. This isn’t a bad thing at all since tablets, phones et al. have very different use cases from those of the desktop but, on the surface at least, it would appear to be a step away from their Three Screens vision.

From what I can tell though Microsoft believes the future is Metro styled apps for desktop and tablet users alike. John Gruber said it best when he said “it’s going to be as if Mac OS X could run iPad apps, but iPads could still only run iPad apps. Metro everywhere, not Windows everywhere.” which I believe is an apt analogy. I believe Microsoft will push WinRT/Metro as the API to rule them all and with them demoing Xbox Live on Windows 8 it would seem that at least on some level WinRT will be making its way to the Xbox, thereby realizing Microsoft’s Three Screens idea. Whether the integration between those 3 platforms works as well as advertised remains to be seen but the demos shown at BUILD are definitely promising.