Whilst the only smartphone platform I’ve had any decent experience with is Apple’s iPhone, I’m still not completely tied to it. Sure the platform is great, and I’ll always keep an iOS device around for as long as I keep developing for it, but my next handset purchase is more than likely not going to be another Apple device. The case is strong for a Windows Phone 7 handset thanks to its great tool set and general esoteric-ness (I don’t yet know anyone who’s bought one), but that same air of mystery is a double-edged sword. Sure most of my everyday applications will be available on it, like Twitter and Facebook, but past that there’s not a whole lot of interest in the platform.
It’s not surprising really, considering that the slice of the mobile market pie Microsoft commands is a mere 5.5% according to the IDC, and that figure includes all handsets that come under the Windows umbrella. Their nearest rival is RIM (creator of the BlackBerry handset series), who nearly triples that share at a whopping 14.9%, and even they don’t seem to command a third-party developer army comparable to that of Apple or Google. Still, with Microsoft sealing the deal on a partnership with Nokia recently, the IDC has reported that the WP7 platform will begin to surge ahead, overtaking iOS to be second only to Android.
The initial reaction to this was, of course, utter disbelief:
In the close to six months that WP7 has been available, it has failed to set sales on fire. In fact, Microsoft hasn’t provided any metrics on how many WP7 handsets have been sold. Also, the 5.5% market share that Microsoft has now represents both WP7 and the old Windows Mobile 5.x and 6.x systems, which are still being sold on enterprise handhelds.
Further, Microsoft has stumbled badly with the first two system updates for its smartphone platform. First by delaying it for nearly two months, and second by bungling the actual delivery of the updates. Things are not going so smoothly for Microsoft. Heck, WP7 champion Joe Belfiore actually wrote a public apology to its WP7 customers about the whole update debacle.
Zeman makes some good points about the WP7 ecosystem and the troubles Microsoft has faced in dragging their Windows Mobile platform into the modern age. The sales figures aren’t that impressive when you compare them to iOS and Android; heck, they’re not even that impressive compared to single handsets on either platform. Still, this ignores the fact that WP7 is a nascent platform and it will be a while before it reaches the maturity level that everyone’s expecting of it. If we’re fair and compare initial WP7 sales to the initial release of Android they’re actually quite comparable, with the G1 selling some 600,000 handsets in its first couple of months and WP7 cracking 1.5 million in its first 6 weeks. It took quite a while for Android, and even the iPhone, to hit the fever pitch they enjoy today, so the current market share of WP7 devices shouldn’t really come as a surprise.
I can’t provide an excuse for their botched update schedule, however. Apple seems to be the only major competitor that’s nailed this completely, with Android and WP7 both suffering from the same carrier-induced delays and fragmentation problems. It’s actually one of the reasons why I haven’t already lashed out on a WP7 handset: the main carrier of them here in Australia, Telstra, is still testing the pre-update update and has no schedule for the release of the coveted NoDo update. Since there doesn’t seem to be any way to route around the carrier and install the patch manually (although I’ll admit I haven’t done a ton of research on this), I’m wholly dependent on someone other than Microsoft to get my handset updated. With Telstra’s track record that doesn’t exactly inspire much confidence in the platform.
Both Android and iOS faced similar problems in their infancy and I’m sure WP7 will be able to overcome them in the future. Whether it will become the second most popular platform, though, remains to be seen. Whilst the Nokia relationship means they have a strong chance of gaining some serious traction, it’s not a sure bet that every current Symbian customer will convert over to WP7. With Microsoft being particularly coy about their sales figures it’s hard to get a good read on how their new mobile platform is trending, but it will definitely be interesting to see how their market share changes as Nokia begins releasing WP7 handsets.
Maybe I’m just hanging around the wrong places on the Internet, but recently there seems to be a higher than average level of vitriol being launched at Microsoft. From my totally arbitrary standpoint most people no longer view Microsoft as the evil empire they used to be, and instead focus on the two new giants of the tech sector, Apple and Google. This could easily be explained by the fact that Microsoft hasn’t really done anything particularly evil recently, whilst Apple and Google have both been dealing with their ongoing controversies of platform lock-down and privacy related matters respectively. Still, no fewer than two articles have crossed my path of late that squarely blame Microsoft for various problems, and I feel they warrant a response.
The first comes courtesy of the slowly failing MySpace, which has been bleeding users for almost two years straight now. Whilst there are numerous reasons why they’re failing (with Facebook being the most likely), one blog asked whether their choice of infrastructure was to blame:
1. Their bet on Microsoft technology doomed them for a variety of reasons.
2. Their bet on Los Angeles accentuated the problems with betting on Microsoft.
Let me explain.
The problem was, as Myspace started losing to Facebook, they knew they needed to make major changes. But they didn’t have the programming talent to really make huge changes and the infrastructure they bet on made it both tougher to change, because it isn’t set up to do the scale of 100 million users it needed to, and tougher to hire really great entrepreneurial programmers who could rebuild the site to do interesting stuff.
I won’t argue point 2, as the short time I spent in Los Angeles showed me it wasn’t exactly the best place for acquiring technical talent (although I haven’t been to San Francisco to make a fair comparison, talking with friends who have seems to confirm this). However, betting on Microsoft technology is definitely not the reason MySpace started its long downward spiral several years ago, as several commenters on that article point out. Indeed, MySpace’s lack of innovation appears to stem from the fact that they outsourced much of their core development work to Telligent, a company that provides social network platforms. The issue with such an arrangement is that they were wholly dependent on Telligent for updates to the platform they were using, rather than owning it entirely in house. Indeed, as a few other commenters pointed out, the switch to the Microsoft stack actually allowed MySpace to scale much further with less infrastructure than they had previously. If there was a problem with scaling it definitely wasn’t coming from the Microsoft technology stack.
When I first started developing what became Lobaco, scalability was always nagging at the back of my head, taunting me that my choice of platform was doomed to failure. Indeed only a few start-ups have managed to make it big using the Microsoft technology stack, so it would seem like going down this path is a sure-fire way to kill any good idea in its infancy. Still, I have a heavy investment in the Microsoft line of products, so I kept plugging away with it. Problems of scale appear to be unique to each technology stack, with all of them having their pros and cons. Realistically every company with large numbers of users has its own unique way of dealing with them, and the technology used seems to be secondary to good architecture and planning.
There’s still a strong anti-Microsoft sentiment amongst those in Silicon Valley, though. Just for kicks I’ve been thumbing through the job listings for various start-ups in the area, toying with the idea of moving there to get some real-world start-up experience. Almost without exception, none of them want to hear anything from a Microsoft-based developer, instead preferring something like PHP/Rails/Node.js. Indeed some have gone as far as to say that .NET development is a black mark against you, only serving to limit your job prospects:
Programming with .NET is like cooking in a McDonalds kitchen. It is full of amazing tools that automate absolutely everything. Just press the right button and follow the beeping lights, and you can churn out flawless 1.6 oz burgers faster than anybody else on the planet.
However, if you need to make a 1.7 oz burger, you simply can’t. There’s no button for it. The patties are pre-formed in the wrong size. They start out frozen so they can’t be smushed up and reformed, and the thawing machine is so tightly integrated with the cooking machine that there’s no way to intercept it between the two. A McDonalds kitchen makes exactly what’s on the McDonalds menu — and does so in an absolutely foolproof fashion. But it can’t go off the menu, and any attempt to bend the machine to your will just breaks it such that it needs to be sent back to the factory for repairs.
I should probably point out that I don’t disagree with some of the points in his post, most notably how Microsoft makes everything quite easy for you if you’re following a particular pattern. The trouble comes when you try to work outside the box, and many programmers will simply not attempt anything that isn’t already solved by Microsoft. Heck, I encountered that very problem when I tried to wrangle their Domain Services API to send and receive JSON, a supported but wholly undocumented part of their API. I got it working in the end, but I could easily see many .NET developers simply saying it couldn’t be done, at least not in the way I was going for it.
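For the curious, exposing JSON from a WCF RIA Domain Service came down to a web.config tweak along these lines. This is a sketch from memory, not gospel: the JsonEndpointFactory lives in the RIA Services Toolkit rather than the core install, and the exact assembly version and public key token may differ for your setup.

```xml
<!-- Hypothetical sketch: register a JSON endpoint for WCF RIA (Domain) Services.
     JsonEndpointFactory ships with the RIA Services Toolkit; assembly details
     below are assumptions and should be checked against your installed version. -->
<system.serviceModel>
  <domainServices>
    <endpoints>
      <add name="json"
           type="Microsoft.ServiceModel.DomainServices.Hosting.JsonEndpointFactory,
                 Microsoft.ServiceModel.DomainServices.Hosting,
                 Version=4.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" />
    </endpoints>
  </domainServices>
</system.serviceModel>
```

With that in place each domain service operation becomes reachable under a json path on the service URL, which is how I ended up talking to it from outside Silverlight. It’s all supported, just buried, which rather neatly illustrates the point above.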
Still, that doesn’t mean all .NET developers are simple button pushers, totally incapable of thinking outside the Microsoft box. Sure there will be more of that type of programmer simply because .NET is used in so many places (just not Internet start-ups, by the looks of it), but to paint everyone who uses the technology with the same brush seems pretty far-fetched. Heck, if he was right there would’ve been no way for me to get my head around Objective-C, since it’s not supported by Visual Studio. Still I managed to get competent in 2 weeks and can now hack my way around in Xcode just fine, despite my extensive .NET heritage.
It’s always the person or company, not the technology, that limits their potential. Sure you may hit a wall with a particular language or infrastructure stack, but if your people are capable you’ll find a way around it. I might be in the minority when it comes to trying to start a company based around Microsoft technology, but the fact is that attempting to relearn another technology stack is a huge opportunity cost. If I do it right, however, the system should be flexible enough that I can replace parts of it with more appropriate technologies down the line if the need arises. People pointing the finger at Microsoft for all their woes are simply looking for a scapegoat so they don’t have to address the larger systemic issues, or are just looking for some juicy blog fodder.
I guess they found the latter, since I certainly did 😉
April is fast approaching, and that means the Fringe Benefits Tax year is about to roll over. For most people this is a non-event unless you’re salary sacrificing a car, but for contractors like me it means I can write off another phone and laptop on tax, effectively getting them for half the market price. Whilst it’s not as good as it used to be (you were also able to depreciate them, making said devices basically free), there hasn’t been a year yet when I haven’t taken advantage of at least getting a new phone, and last year was the first when I purchased my MacBook Pro. So of course I’ve spent the last couple of weeks looking through the available selection of phones and tablets with which to gorge myself, and the more I look the more I get the feeling I won’t be able to leave my iPhone behind like I did with my other smartphones.
The tablet choice is pretty easy since I’m not particularly fond of the iPad (I don’t need another iDevice) and getting something like the Motorola Xoom would cover off my need for an Android device to code against. To have all current platforms covered then the smart phone choice (HA! See what I did there?) would be a Windows Phone 7 handset. Taking a look around I found a few pretty good deals on various handsets with contracts comparable to what I’m on now with tons of extra value. My handset of choice is the HTC Mozart which appears to be the cream of the current crop of WP7 handsets, anything else is just too far off on the horizon to be worth considering.
Of course whenever I’m contemplating a new phone I’ll always compare it to what I currently have, to see if it fixes the things that bug me and whether or not it will be worth it. Whilst my 3GS is less than a year old it’s nipping at the heels of being two generations behind the current crop, so any recent handset should beat it hands down. A quick look at similarly priced handsets shows this to be true, all of them bristling with faster CPUs, more RAM and better dedicated graphics. Unfortunately, however, there’s one thing that none of the other handsets I’ve been looking at can cover.
That unfortunate beast is the Apple App Store.
Despite the insane growth that Android has shown over the past year, Apple is still the platform of choice for many early adopters and developers. It’s extremely rare for a company to attempt to launch a mobile application on anything but Apple first, simply because the user base tends much more towards that early adopter mindset of trying things out. Sure there are many examples of popular apps that made their debut on the Android Market (although none that I’m aware of for WP7), but when you compare them to the number of successes Apple can count on its platform there’s really no contest. Couple that with the fact that, despite Android’s runaway popularity, the App Store is still far more profitable for developers looking to sell their wares, and you’d really have to be crazy not to launch on Apple’s platform.
For me this presents an interesting conundrum. Whilst I was never going to sell my 3GS since it will make a good test bed for at least another year or two I do use it quite extensively to test out potential competitor’s applications. Since most of them launch on iPhone first this hasn’t been a big deal but with me planning to move to WP7 (or possibly Android) for my main handset I can’t help but feel that I’ll probably want to keep it on hand so that I can keep a close eye on the market. Sure I could just make a note to try an application later but many up and coming products are based around using them for a particular purpose, not booting them up occasionally to see the new features. Granted this is probably limited to social applications but any new product is almost guaranteed to have some kind of social bent baked in (heaven knows I tried to avoid it for the longest time with Lobaco).
The market could change, and with the growth that Android is experiencing I may be singing a completely different tune a year from now. Still, until the Android store starts pumping out billions of dollars to its developers I can’t see a future where any serious developer isn’t focused primarily on Apple first with Android planned down the line. For now I think I’ll stick with my plan of a WP7 phone and an Android tablet, keeping the gaggle of devices close at my side at all times so that I can test any app regardless of its platform. It’s the same line of thinking that led me to buy every major console, although the Wii has only ever been used a couple of times.
There’s an analogy in there somewhere 😉
Ah Crysis, one of the few games that basically dared anyone with a top-of-the-line PC to try and run it at max settings, only to bring the machine down in a screaming mess. I remember the experiences fondly, with many machines coming up against the Crysis beast only to fall when things were turned up to the nth degree. To its credit though it aged quite well, meaning that unlike games such as Far Cry 2 that chugged for seemingly no reason, Crysis was really a generation ahead of itself. I only managed a full play-through after I upgraded in mid-2008, and I can remember it being quite a beautiful game even then. It’s been a long time between drinks for the Crysis series, but last week, over 3 years since the original’s release, Crysis 2 debuted to much fanfare and the lament of those who had not upgraded.
I was amongst those who upgraded just after the original Crysis came out and haven’t done so since. Apart from upgrading the graphics card once, my machine is still a Core 2 Duo E8400 with 4GB RAM and a Radeon HD4970 graphics card. You can then imagine my anxiety as I booted up the game for the first time and saw it choose a somewhat less than optimal display resolution for my widescreen monitors. Still, I figured I should at least give it a go at native resolution before turning it down, figuring it would be fun to see my beast struggle under the load of the next generation of Crysis, something I haven’t really seen in years.
You can then imagine my surprise when everything ran, for want of a better phrase, fucking brilliantly.
Whilst the first 15 minutes of the game aren’t much more than a glorified movie, quite a lot of it is done in-game. I was first taken aback by how smoothly it was running, but I figured that was because of the limited scenery and effects, thinking that once I was out in the urban jungle of New York my PC would begin crying under the load. But the whole way through the game, from wide-open scenes with explosions going off everywhere to the various underground passages I spelunked, it ran incredibly smoothly, with the only framerate drops coming when my PC decided it really needed to do something on my games drive.
It’s at this point that I’d usually make some quip about how all games run well on old hardware since they’ve all been primarily designed for consoles first but looking at Crysis 2, even though it’s on all major platforms, I couldn’t really pick any areas that suffered because of this. The graphics are phenomenal, easily trouncing everything I’ve played through this year. This is even after they’ve included all the goodies like volumetric lighting, realistic fog and awesome effects like the cloaking transparency. Truly Crytek have outdone themselves with CryEngine 3 bringing great graphics to the masses.
After all that gushing about the graphics, I suppose I should say something about the game 😉
Crysis 2 is set entirely in New York City, where the Ceph, an alien race that players of the original Crysis will be familiar with, have begun their invasion of Earth. It appears to be a two-pronged invasion, with them releasing a virus that seems to be randomly striking down the boroughs’ denizens as well as flooding the streets with their cyborg warriors. You play as Alcatraz, a marine sent into New York to extract Dr. Gould, a scientist who may have information regarding the alien invasion. Unfortunately your submarine is taken out by a Ceph ship and you’re seemingly left to die, until Prophet (again a familiar face for original Crysis players) rescues you and bestows his nanosuit upon you.
Game play has been refined and streamlined from the original Crysis. Instead of picking a particular mode for your suit (speed, strength, cloak, armor) most abilities will automatically engage when you do something that requires it (like sprinting or holding down the jump to do a super jump). You still have cloak and armor modes which have to be actively enabled but thankfully they’re mapped by default to E and Q respectively, making the transition quite easy. Additionally the suit can be upgraded through a very similar interface to the gun modification menu, requiring you to collect Nano Catalyst which drops from Ceph enemies when you defeat them. This allows you to change the way you play the game quite significantly, giving you the choice between your typical run and gun FPS to an entirely stealth game with only smatterings of toe-to-toe combat.
Indeed, unlike many of the cinematic shooters we’ve seen released over the past year or so, Crysis 2 doesn’t have that feeling of being totally locked to the one path the game designer had in mind. Nearly every encounter can be completed through the use of stealth, or just as easily by jumping into the thick of battle and blasting your way through the waves of enemies that come at you. This is complemented by the range of weapons the game throws at you, leaving you the choice to take the most appropriate one for your particular play style. Of course there are some encounters where doing it a particular way with a certain weapon will be an order of magnitude easier than the other choices, but it’s still much better than having no choice at all, like we’ve become accustomed to with the recent influx of AAA FPS titles.
The game is unfortunately not without its faults, as the screenshot above would allude to. Whilst this particular incident of tearing was isolated to a 30 minute section of game play (and no, it was not overheating, since it went away in the next scene), there were a couple of other non-breaking issues that plagued my game time. Often I’d find a weapon I’d like to swap my current one for, after seeing what was coming up ahead, only for the game to not register the gun’s existence, rendering me unable to pick it up. Reloading would usually fix this, but since there’s no option to manually save your game this could sometimes send me quite far back, so most of the time I just went wanting. Additionally some of the scene geometry’s hit boxes were bigger than they appeared on screen, getting my character stuck on invisible boxes. All these problems pale in comparison to the game’s biggest flaw: the multi-player.
Now I don’t do a whole lot of multi-player gaming unless it’s with friends, but I really enjoyed the multi-player in Crysis and Crysis: Warhead so I figured I’d give it a go, thinking it would make good blog fodder. Hitting the multi-player link on the main screen I was prompted to enter my game key again, strange since I was pretty sure I’d had to do that to play the game at all. Thankfully it came up with my pre-order bonuses, so I figured it must’ve just needed it for the initial multi-player set up. After looking around the server list for a while I found one with some spots spare and clicked join, only to receive the error “Serial code currently in use”. Unfazed, I restarted Crysis 2 to attempt it again, only to be asked yet again for my serial key and to receive the exact same error upon attempting to join a server.
Strangely enough I could join empty games no problem so I figured it might be something to do with the way I was trying to join games. I hit up the quick match and chose Instant Action (everyone for themselves) and managed to get 2 games in. Those brief moments were quite fun as the games were chaotic with people appearing and disappearing everywhere. Satisfied that I wasn’t doing something wrong I tried yet again to join a server only to be greeted with the same errors. My frustrations were compounded by the fact that there’s no auto-retry function to attempt to join a server that’s full, leaving me the option of trying to find one that’s partially full (which doesn’t seem to happen very often) or waiting in an empty room for people to join (which also doesn’t happen). I tried in vain for another 30 minutes to get in one more game before giving up entirely and tweeting my frustrations at the Crysis team.
Like nearly all AAA titles Crysis 2’s ending also screams “OMG THERE WILL BE A SEQUEL” so loudly at the end that you’d have to leave the room not to know about it. Sure they made it clear at the start that Crysis was a trilogy so 2 sequels were a given but this almost felt like Crytek saying “Hey guys, guess who’s going to be the next Call of Duty franchise?”. I’m a fan of solid FPS action as much as the next guy but leaving the ending deliberately open just gives me the shits, even if the current story was wrapped up well enough.
Despite these problems the core of the game is good, extending on the success that Crysis enjoyed whilst showing off the capabilities of the CryEngine 3 magnificently. I’ve had several on-the-fence friends who saw me playing through the game on Steam ask me if it was worth the purchase, and I’ve told them that even if you discount the multi-player (which in all honesty is where the true replay value of games like this lies), the game is still good value for money. Whilst I haven’t been at a LAN in over a year I can still see Crysis 2 being a LAN favorite for some time to come, with the extensive variety of multiplayer modes available along with numerous smaller maps to cater for smaller groups. Whilst the game ran incredibly smoothly on my current rig I’m still excited at the prospect of upgrading yet again just to see how capable the game is when everything is driven to its absolute max, as it was unabashedly gorgeous on my now 3-year-old rig.
Rating: 9.0/10 (I’m being kind and excluding the multi-player snafu since I don’t usually include multi, but if you want to know I’d rate it 8 with it in).
Crysis 2 is available right now on PC, Xbox 360 and PlayStation 3 for $69.99, $108 and $108 respectively. The game was played entirely on the PC version on Hard difficulty, with around 10 hours of game time total. Multiplayer was attempted on the 28th of March 2011, with 2 Instant Action games played totaling about 30 minutes.
A company is always reliant on its customers; they’re the sole reason it continues to exist. For small companies customers are even more critical, as losing one is far more likely to cause problems than when a larger company loses one of theirs. Many recent start-ups have hinged on their early adopters not only being closely tied to the product, forming a shadow PR department, but also on many of them being hobbyist developers, providing additional value to the platform at little to no cost. Probably the most successful example of this is Twitter, whose openness with their API fostered the creation of many features (retweets, @ replies, # tags) that they had just never seen before. It seems, however, that they think the community has gone far enough, and they’re willing to take it from here.
It was about two weeks ago when Twitter updated their terms of service and guidelines for using their API. The most telling part about this was the section that focused on Twitter clients where they explicitly stated that developers should no longer focus on making new clients, and should focus on other verticals:
The gist of what Sarver said is this; Twitter won’t be asking anyone to shut down just as long as they stick within the required api limits. New apps can be built but it doesn’t recommend doing so as it’s ‘not good long term business’. When asked why it wasn’t good long term business, Sarver said because “that is the core area we investing in. There are much bigger, better opportunities within the ecosystem”
Sarver insists this isn’t Twitter putting the hammer down on developers but rather just “trying to be as transparent as possible and give the guidance that partners and developers have been asking for.”
To be honest with you they do have a point. If you take a look at the usage breakdown by client type you’ll notice that 43% of Twitter’s usage comes from non official apps, and diving into that shows that the vast majority of unofficial clients don’t drive that much traffic with 4 apps claiming the lion’s share of Twitter traffic. A developer looking to create a new client would be running up against a heavy bit of inertia trying to differentiate themselves from the pack of “Other Apps” that make up the 24% of Twitter’s unofficial app usage, but that doesn’t mean someone might not be capable of actually doing it. Hell the official client wasn’t even developed by Twitter in the first place, they just bought the most popular one and made it free for everyone to use.
Twitter isn’t alone in annoying its loyal developer following. HTC recently debuted one of their new handsets, the Thunderbolt. Like many HTC devices it’s expected that there will be a healthy hacking scene around the new device, usually centered on the xda-developers board. That site has proved invaluable to the HTC brand, and I know I stuck with my HTC-branded phones for much longer than I would have otherwise thanks to the hard work these guys put in. However this particular handset is by far one of the most locked down on the market, requiring all ROMs to be signed with a secret key. Sure they’ve come up against similar things in the past, but this latest offering seems to be a step above what HTC normally puts in, signalling it as a shot across the bow of those who would seek to run custom firmware on their new HTC.
In both cases these companies had solid core products that the community was able to extend upon which provided immense amounts of value that came at zero cost to them. Whilst I can’t attribute all the success to the community it’s safe to say that the staggering growth that these companies experienced was catalyzed by the community they created. To suddenly push aside those who helped you reach the success you achieved seems rather arrogant but unfortunately it’s probably to be expected. Twitter is simply trying to grab back some of the control of their platform so they can monetize it since they’re still struggling to make decent revenues despite their huge user base. HTC is more than likely facing pressure from carriers to make their handsets more secure, even if that comes at the cost of annoying their loyal developer community.
Still, in both these situations I feel there would have been a better way to achieve the goals they sought without poisoning the well that once sustained them. Twitter could easily pull a Facebook maneuver and make all advertising come through them directly, which they could do via their own in-house system or by simply buying a company like Ad.ly. HTC’s problem is a little more complex, but I still can’t understand why the usual line of “if you unlock/flash/hack it, your warranty’s void” wasn’t enough for them. I’m not about to say that these moves signal the downfall of either company, but it’s definitely not doing them any favors.
I’m always highly skeptical of any product that comes my way that’s supposed to solve all my problems in a particular area. Cloud computing was a great example of this, as I had already been through most of the marketing spiel with Software as a Service and was stunned when it made a triumphant return with a few additional bells and whistles. Granted, I’m coming around to the idea since the services have matured, but I still don’t believe it’s the panacea for all your IT woes that many of its advocates would have you believe. Of course this kind of hype talk is always around, and the current buzzword du jour is the coming of the “Post-PC era”, a time when the personal computer is replaced by tablets and smartphones. Needless to say I’m highly skeptical of this kind of marketing malarkey, in no small part because Steve Jobs has been the one spruiking the term.
The idea seems to stem from the recent growth in non-PC devices that replicate certain PC functionality. For example the mobile web experience has matured significantly over the past 3 years, with many websites (including this one) creating separate sites designed for the mobile platform. Additionally, native applications on phones are becoming increasingly capable, with many functions that used to require a fully fledged desktop or laptop now available in the palm of your hand. Truly the capability explosion mobile devices have undergone in the past few years is quite extraordinary, and extrapolating that out would have you believe that in a few short years these devices will be as capable as their PC cousins, if not more so.
However I just can’t see a future where the PC isn’t around.
You see these mobile devices (phones, tablets and what have you) are primarily consumption devices. The platform lends itself to this quite readily, as creation on these devices is quite a chore compared to their bigger, tethered brethren. For instance I’ve tried several times to write blog posts on the run using my smartphone (even one with a physical keyboard) and the experience has been nothing short of atrocious. Sure, hammering out a tweet or ten is easy (140 characters doesn’t take long at all) but any long interaction with my phone is quite a laborious exercise. Thus most applications on these devices are centered on consuming something rather than creating it, simply because these devices aren’t really made for sessions longer than 5 to 10 minutes.
But I can hear the post-PC crazies saying “but wait, you could pair your tablet with a keyboard and mouse, thus solving this issue!”. Well yes, of course you could, but in reality aren’t you just replacing your laptop with a tablet/smartphone with a giant dock attached to it? Realistically you’re just replacing the innards of your current PC with something that’s, I’ll admit, far more portable but also a whole lot less capable. You’d probably find that there would be beefed up versions of these mobile devices available, sacrificing battery life and weight to give you a little more power. That or they’d rely on massive back end infrastructure, in essence going back to the good old days of mainframes and thin terminals (defeating the whole post-PC era idea completely).
Are there things that PCs should give way to? Of course, the fact that mobile devices are limited primarily to consuming content rather than producing it means that the consumer experience on these devices is quite good. Whilst I may use several services from my PC, the vast majority of my time spent on social media is through my iPhone, simply because it’s easy and available. It also makes for a great travel companion when I don’t want to lug my MacBook Pro around and only need access to a few files like itineraries or other information. Does that mean they can replace my PC outright? Hell no, but there are many use cases where I’d prefer to be using my mobile rather than a desktop PC.
I think there will be a few people who will be able to replace their current PCs, whatever their form factor, with the new wave of “post-PC era” devices. Likewise there will be a similar number who will never have a need for such a device and will continue along as they are now. In the middle there will be those who use both, supplementing their PC with additional devices that suit a particular purpose they have in mind. That middle sector is where I believe most future users will reside, using the most appropriate device for the task at hand. Over time I believe our view of what constitutes a PC will shift, but there will always be a place for a dedicated computing device, even if that ends up just being the horsepower driving the services behind the post-PC devices.
I’m always surprised at the lengths that Google will go to in order to uphold its Don’t Be Evil motto. The start of last year saw them begin a very public battle with the Chinese government, leading them to put the pressure on by shutting down their Chinese offices and even going so far as to involve the WTO. Months passed before the Chinese government retaliated, in essence curtailing all the efforts that Google had gone to in order to operate their search engine the way they wanted to. After the initial backlash with a few companies pulling parts of their business out of China there really wasn’t much more movement from either side on the issue and it just sort of faded into the background.
In between then and now the world has seen uprisings and revolutions in several countries like Tunisia, Egypt and Libya. Whilst the desire for change is stronger than any tool, services like Twitter, Facebook and Gmail have been instrumental in helping people gather and organize these movements on scales that would previously have taken much more effort. Indeed those in power have recognized the usefulness of these tools, as they’re usually the first thing to get cut when a potential uprising begins to hit critical mass. China is known for its harsh stance on protesters and activists and is not shy when it comes to interfering with their activities.
It seems that Google has picked up on them doing just that with Gmail:
Google has accused the Chinese government of interfering with its popular Gmail email system. The move follows extensive attempts by the Chinese authorities to crack down on the “jasmine revolution” – an online dissident movement inspired by events in the Middle East.
According to the search giant, Chinese customers and advertisers have increasingly been complaining about their Gmail service in the past month. Attempts by users to send messages, mark messages as unread and use other services have generated problems for Gmail customers.
Screwing around with their communications is one of the softest forms of oppression the government can undertake without attracting too much attention. Whilst I believe an uprising on the scale we’ve seen in the Middle East is highly improbable in China, thanks entirely to the fact that the sentiment I get from people I know there is that they like the current government, this doesn’t mean the government isn’t conducting operations to kill any attempt in its infancy. They’ve previously targeted activists with similar attacks in order to gain information on them, and that’s what sparked Google’s first outburst against the Chinese government. Why they continue to poke this particular bear is beyond me, and unfortunately Google is in the hard position of either continuing to offer services (with all the consequences that follow) or pulling out completely, leaving activists in China few options that aren’t at least partially government controlled.
There are also rumors that the government is now implementing technology similar to its Great Firewall on the cellular network. Some users are reporting that their phone calls drop out after saying certain phrases, most notably “protest”. Whilst I hesitate to accept that story wholeheartedly, the infrastructure required to do it is not outside the Chinese government’s ability, and there is precedent for them conducting similar operations on other forms of communication, namely the Internet. Unfortunately there’s no real easy way to test it without actually being there (doing encrypted calls is a royal pain in the ass), so unless some definitive testing is done we’ll just have to put this one down as a rumor and nothing more.
Google has shown several times now that it’s not afraid to go up against the Chinese government if it believes its users are under threat. It’s unfortunate that not many more companies have lined up behind Google in support, but if Google continues to be as outspoken as it has been I can’t see other companies staying silent indefinitely. Of course many Internet services in China are at least partially controlled by the government, so any native business there will more than likely remain silent. I don’t believe this is the last we’ll hear of the Google vs China battle, but unlike last time I’m not entirely sure where it will lead.
There’s always risk in innovation. When you’re creating a new product or service there’s always the chance that nobody will want what you’re creating. Similarly whatever you end up creating could very well grate against the current norms in such a way that your product is almost wholly rejected by those it’s aimed at. A great example of this, which I covered in the past, was Windows Vista. In order for Microsoft to move ahead into the future they had to break away from some of their old paradigms, and this drew the ire of many of their loyal customers. The damage done there is still being felt today in slower adoption rates of their latest product, but had it not been for this initial failure they may not have been enjoying the level of success that Windows 7 has today.
In fact many pioneering products and services faced a dismal (albeit mostly profitable) reception initially. Steam was a great example of this, debuting back in a time when broadband penetration in many countries wasn’t particularly great and seeking to deliver all games digitally, direct to the consumer. Couple this with the fact that they were cutting out publishers and distributors in the process and the guys at Valve faced an extremely long, uphill battle for their platform to gain dominance. Still, three years later big titles started releasing on their platform and the rest, as they say, is history.
Interestingly enough I began to notice similar things happening with the Playstation Portable. Whilst the next version of the handheld, the NGP, is not going to be a digital-download-only device, Sony has recently said that all games will be available digitally, with only the bigger titles coming to the physical world:
“One thing we learnt from PSP is that we want to have simultaneous delivery in digital and physical for NGP. Just to clarify that, all games that appear physically will be made available digitally,” said House. He added, “Not necessarily all games have to be made available physically. And having the option of a digital-only method affords more creative risk-taking, and that’s because you don’t have that in-built risk of physical inventory.”
For those who follow Sony, you’d be aware of the dismal failure that was the PSP Go. Debuting at an insanely high price (just a hair below a full PS3) whilst offering little in the way of improvements, the PSP Go was never going to be a phenomenal success. It was particularly hampered by the lack of compatibility with its current-gen brethren, doing away with the UMD drive in favor of a fully digital distribution model. This annoyed PSP customers to no end because their current collection of games could not be migrated onto the new platform (other than through nefarious means). Looking at the NGP there’s still no way to get UMD games onto it, but since most people are already aware that their current UMD titles won’t make the format transition, Sony has avoided doing the same amount of damage to their next generation handheld as they did to the PSP Go.
Failure teaches you where you went wrong and where you should be heading in order to avoid making the same mistakes again. Many successful products have been built on the backs of dismal failures; just look at satellite phones and radio for example. Sometimes it takes a risk taker to pave the way forward for those who will profit from the endeavor, and hopefully that risk taker gets some of the kudos down the line. Digital distribution is one of those areas where the path has already been beaten, and even some of the pioneers are continuing to profit from it.
As I’ve said previously, sequels are always a tricky thing to get right. They will inevitably be compared to their predecessors, and should they not be a wholesale improvement on the experience that came before them you’re guaranteed to cop some serious flak. Still, due to their almost guaranteed market potential, any original game that enjoys a modicum of success is almost certain to get a sequel, or at least a spiritual successor. Dragon Age 2 is one such game, and since I thoroughly enjoyed Dragon Age: Origins, despite its flaws, it didn’t take much for me to open my wallet yet again for a Bioware RPG. Over 2 weeks and 30+ hours of playtime I managed to conclude the story of Dragon Age 2 and I’m still trying to figure out where I stand with this game, as are many who’ve done the same.
Dragon Age 2 differs quite significantly from its predecessor. Whilst Origins was your typical pick-your-own-adventure style RPG, Dragon Age 2 instead takes the Mass Effect style of gameplay, giving you the choice of a few character classes which you play through the game with. This drew the ire of many RPG fans, as the depth of character development you could undertake in Origins was quite significant and the switch to a Mass Effect style of gameplay was seen as a dumbing down of Bioware’s standards. Since I’m not usually one to replay a game (unless it’s really, really good) I didn’t take that much issue with it, but I can understand where the complaints are coming from.
You play this game as Hawke, a man/woman who fled the land of Ferelden during the blight that took place during Origins to seek safety in the land of Kirkwall, a one-time slave nation. The game follows the trials and tribulations of Hawke attempting to regain the notoriety his family once had and his rise to the position of “Champion of Kirkwall”, told in retrospect by one of his companions, Varric. There’s a constant sense of foreboding in how the story is recounted, so you always have the sense that something big is going to happen. However, unlike its predecessor, which had a large, overarching plot that drove you from one quest to the next, Dragon Age 2 instead sort of drifts between plot points, with each chapter having a different (and completely unrelated) goal. This is where Origins shone, as the entire game was leading up to that one point at the end, whereas Dragon Age 2 switches between no fewer than 3 different goals, none of which build towards the final conclusion.
Satisfyingly though, combat in Dragon Age 2 is really quite enjoyable. I lamented back in my Origins review that the combat, whilst feeling decidedly epic at many points in the game, was quite a bug-ridden affair, with my warrior constantly getting stuck on hit boxes and abilities that failed to work as advertised. Dragon Age 2 still has many of the trademark skills of its predecessor, however they all work as expected and the attributes and talents system has been completely revamped. The end result was that my warrior in Dragon Age 2 became a wrecking ball of devastation on par with the blood mages in Origins. He could decimate entire swaths of enemies with 2 abilities, and towards the end he had practically unlimited stamina, allowing all his abilities to be used to their fullest potential. This was probably the best part of Dragon Age 2 for a Mass Effect fan like myself, and whilst I wasn’t able to play it like Mass Effect (i.e. without pausing the game to micro my team mates) it was still thoroughly satisfying.
Thankfully the default behavior of your team mates has also been vastly improved. In Origins, should you neglect to look at the tactics screen, you’d be running with a party that had almost no idea what to do apart from auto-attacking everything that you attacked. Whilst I had to make a few minor adjustments to the default settings, I don’t think I spent any more than 15 minutes configuring my party’s behavior before they were adequately fulfilling the roles I had chosen for them. Sure, they’re still not able to position themselves automatically, but apart from that they were far more capable than their counterparts in Origins, something I was very thankful for.
Many other parts of Dragon Age 2 have also been streamlined or revamped to take some of the grind out of the game. Crafting has been redone so that instead of having to level through it in order to get better potions/runes/poisons, you simply find the recipes around the world. To make them, however, you have to find resources, which aren’t depleted when you use them. For someone like me who really doesn’t have an interest in leveling tradeskills in a single player game this was a welcome change, and something that made me far more interested in hunting down ingredients and recipes. Old school RPGers will probably say that this takes away from the value of crafted items since you don’t have to do a lot for them, but in the end I’d rather not waste another 10 hours in the game just to upgrade my armor or weapon a little more.
Loot in the game is almost too plentiful, with my character often having multiple sets of armor after a single dungeon run. Initially it all seems kind of pointless, since there’s little to spend gold on and you’ll end up with well over 100 gold by the time the first chapter ends. However the final few chapters took their toll on my gold reserves, thanks to many difficult encounters and full sets of armor that were just begging for rune upgrades. This also extends to your companions who, unlike in Origins where you could dress them as you pleased, will also require upgrades, which can be found at vendors all over Kirkwall and its surrounding regions. In some fights these small additional upgrades can be the difference between winning and losing, as I painfully found out several times over.
The relationship Hawke develops with his companions is one of the better aspects of Dragon Age 2. Unlike Origins, where romancing someone was a game of playing their friendship right with gifts and certain dialog options, Dragon Age 2 simplifies the idea considerably, since the conversation wheel alerts you to the romantic option. Gifts are still around (and form part of the romance should you pursue it) but there’s no longer a myriad of things you can lavish on your potential lover. As the above screenshot shows, I had a real soft spot for Merrill, and thankfully the relationship didn’t end after a single session of bonking (unlike with Isabella). Whilst I didn’t feel as emotionally involved with the characters as I did with, say, Heavy Rain, I still genuinely cared for them, especially Merrill. It seemed some of them cared for me as well, with Isabella leaving me and then returning later on in the game.
There are however some extremely visible issues with Dragon Age 2 that need to be pointed out, not least of which is the repeating environments and asset reuse seen throughout the entirety of the game. Every cave is the same cave, save for doors being open or closed or the layout being run backwards, and all slaver hideouts, mansions and underground passageways are completely identical. Additionally many in-game items are direct copies from Origins, which would be fine if they referenced some lore about it, but many are just straight up ported from the game’s predecessor. This made the game somewhat predictable in parts, since all the encounters happened in the same places, diminishing the replay value significantly.
There are also a couple of issues with difficulty pacing within the game. Whilst there were many times I’d make it through by the skin of my teeth, the fights were always controllable and I never felt like I was fighting against a brick wall. The end of Act 2, however, brought one fight (I’ll avoid the spoilers for now) that was, in essence, impossible. Now I’d usually chalk this up to me going through the game too quickly and not taking the time to level, but all the other fights felt like they had mechanics that punished you for getting things wrong; the fight in question punished you regardless, stretching the encounter out to abysmal lengths. Talking with my friends revealed that a critical dialog option wasn’t available to me, putting me in the rather unenviable position of having to use the dev console to get past this roadblock. Thankfully this only happened once (the other time was due to me attempting a fight before I was capable of completing it) but it still felt highly out of place when I hadn’t been struggling with the game up to that point.
An over-arching plot was something Origins did magnificently and something Dragon Age 2 simply fails to deliver. Whilst I can understand the reasoning behind it (Origins took place over 2 years, Dragon Age 2 takes 10), the fact that there’s really no goal for Hawke in Kirkwall means that during the various side quests you get the feeling you’re just doing them for the sake of it, rather than building towards an ultimate end. Plus, amidst the cacophony of other things you’re doing in the game, the main plot line can seem like nothing more than a footnote until one character tells you “Hey, this could take a while, are you sure you’ve done everything?”. With some parts of Dragon Age 2’s story being really satisfying, this lack of direction really detracted from the experience.
Despite these faults, however, I really did enjoy my time with Dragon Age 2. Whilst I would sigh at the repetitive dungeons and lament the fact I had little direction apart from the issue du jour, the engrossing combat and the reduction in miscellaneous crap made me forget all my troubles almost instantly. I found the conversations a lot more interesting now that my character actually had a voice, even if the choice options didn’t always align with what I thought they were. Initially I was going to say this game didn’t stand up to its predecessor due to the problems that came from the rushed development cycle, but in all honesty Dragon Age 2 is a much better game overall. Given the same amount of time to develop Dragon Age 2 as they had for Origins, I’m sure it would have come out as a game that delighted fans of the IP and newcomers alike. It’s good enough that I’m considering a second playthrough and possibly ponying up for the DLC, something I rarely do even for my most cherished of games.
Dragon Age 2 is available right now on PC, XBox 360 and Playstation 3 for $69, $108 and $108 respectively. Game was played entirely on Hard difficulty with approximately 31 hours of play time and my character reaching level 21 by the end.
Mercury is a strange little beast of a planet. It’s the closest planet to our sun and manages to whip around it in just under 88 days. Its “days” are 59 Earth days long, and whilst it’s not tidally locked to our parent star (like the moon is to us, always showing the same face to the Earth) it is in a 3:2 spin-orbit resonance. This has led to some interesting phenomena when we’ve sent probes to image it: the only probe ever to visit it, Mariner 10, managed to image just 45% of the planet’s surface over its three encounters with the tortured little planet. That all changed a few years ago when MESSENGER (MErcury Surface, Space ENvironment, GEochemistry, and Ranging) made its first approach to Mercury in January 2008 and sent back images of the as-yet-unseen side of the planet. Ever since then MESSENGER has been on a long trajectory that will eventually bring it into orbit around Mercury, where it will begin its year-long mission of observations.
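That 3:2 resonance falls straight out of the two periods mentioned above. A quick back-of-the-envelope check, using standard textbook figures for the sidereal periods (roughly 87.97 and 58.65 Earth days, my own assumed values rather than anything quoted in this post):

```python
# Mercury's spin-orbit resonance, from its two sidereal periods
# (standard textbook figures, in Earth days).
ORBITAL_PERIOD = 87.97   # one trip around the Sun
ROTATION_PERIOD = 58.65  # one rotation about its axis

# Ratio of orbit to rotation: ~1.5, i.e. a 3:2 resonance
# (three rotations for every two orbits).
resonance = ORBITAL_PERIOD / ROTATION_PERIOD
print(f"orbit/rotation = {resonance:.3f}")

# A solar day (noon to noon) combines both motions: because the planet
# also sweeps around the Sun as it spins, one solar day works out to
# 1 / (1/rotation - 1/orbit), about 176 Earth days, which is two full
# Mercury years between one sunrise and the next.
solar_day = 1 / (1 / ROTATION_PERIOD - 1 / ORBITAL_PERIOD)
print(f"solar day = {solar_day:.0f} Earth days")
```

That two-years-per-solar-day quirk is why Mariner 10’s flybys kept catching the same hemisphere in sunlight, leaving 55% of the surface unimaged.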
It just so happens that that day is today.
MESSENGER has been in space for an extremely long time, almost 7 years. You might be wondering why it has taken the craft so long to reach Mercury, and the answer requires understanding a little about orbital mechanics. You see, as a heavenly body, in this case a satellite, moves closer to another body it will tend to speed up. This is down to the conservation of angular momentum, the same principle that governs the increase in speed when you bring your arms in closer whilst you’re spinning. Thus for a satellite launched from Earth to orbit Mercury it has to shed all that extra speed so it can match up with the planet, otherwise it would just whiz right past. Since doing this with a rocket is rather expensive (the fuel required would be phenomenal) NASA instead opts to shed velocity via a complicated set of maneuvers between planets, each of which removes a portion of the satellite’s velocity. This is cheap fuel-wise but means the spacecraft has to endure many years in space before it reaches its destination.
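The "shed all that extra speed" problem can be put in rough numbers with the vis-viva equation, v² = μ(2/r − 1/a). Here’s a sketch, assuming circular planetary orbits, a simple Hohmann transfer, and standard values for the Sun’s gravitational parameter and mean orbital radii (all figures are my own assumptions, not from the post or the actual mission trajectory):

```python
import math

MU_SUN = 1.32712440018e20  # Sun's gravitational parameter, m^3/s^2
AU = 1.495978707e11        # astronomical unit, m
R_EARTH = 1.000 * AU       # Earth's mean orbital radius
R_MERCURY = 0.387 * AU     # Mercury's mean orbital radius

def vis_viva(r, a):
    """Orbital speed at distance r on an orbit with semi-major axis a."""
    return math.sqrt(MU_SUN * (2 / r - 1 / a))

# Speed of a circular orbit at Mercury's distance (~48 km/s):
# roughly what the probe must match to stay with the planet.
v_circular = vis_viva(R_MERCURY, R_MERCURY)

# Speed at the inner end of a minimum-energy Hohmann transfer from
# Earth (~58 km/s): falling sunward has made the craft *faster*.
a_transfer = (R_EARTH + R_MERCURY) / 2
v_arrival = vis_viva(R_MERCURY, a_transfer)

excess = (v_arrival - v_circular) / 1000
print(f"arrive at {v_arrival/1000:.1f} km/s, "
      f"need {v_circular/1000:.1f} km/s, shed ~{excess:.1f} km/s")
```

Nearly 10 km/s of velocity to dump is an enormous amount of delta-v to carry as fuel, which is why MESSENGER spent years using flybys of Earth, Venus and Mercury itself to bleed off speed instead.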
As I write this, MESSENGER is making its final preparations to insert itself into orbit around Mercury. MESSENGER hopes to demystify the diminutive planet by providing high-resolution imaging of the surface (there’s still 5% we haven’t seen yet), doing chemical analysis to determine the planet’s makeup and attempting to figure out why Mercury has a magnetic field. Probably the most interesting part of MESSENGER’s mission will be that last part, as our current theories of planet formation point to Mercury being much like our moon, with a solid core and no magnetic field to speak of. The presence of one there suggests that part of Mercury’s core is still molten, and raises a number of questions over how planets and natural satellites like our moon form. MESSENGER will also be the first ever artificial satellite of Mercury, something that still eludes many of the other planets in our solar system.
This is the kind of science that NASA really excels at, the stuff that just hasn’t been done before. It’s really amazing to see NASA flex their engineering muscle, designing systems that survive in the most unforgiving environment we know for decades and still function as expected. The next year will be filled with all kinds of awesome discoveries about our tortured little cousin Mercury, and I for one can’t wait to see how the analysis of its magnetic field changes the way we model planet formation in the future.