It’s pretty well known that the communities surrounding the MOBA genre, whether it be the original DOTA or newer incarnations such as DOTA2, League of Legends or Heroes of Newerth, are spectacularly hostile. Indeed, in the beginning, when DOTA was just a custom map, many people relied on third-party ban services, many of which used user-generated lists to filter out bad players. Of course, these lists being user generated led to a lovely hate cycle where people would simply ban whoever they felt like. Once you were past that barrier it didn’t get much better, with any request for help or misunderstanding of certain mechanics usually earning you a place on those lists. It was for this reason that many of us just stopped playing the original DOTA; the community around it was horrifically toxic.
There was hope that the newer entries into the MOBA scene would help to alleviate this somewhat as a fresh platform would give the community a chance to reinvent itself. Unfortunately, at least in my experience on HoN (I only played 3~4 games of LoL), the same toxic community sprouted once again and I found myself wondering why I was bothering. DOTA2 started out the same way with my first few games being marred by similar experiences but there was enough to the game that kept me coming back and something strange started to happen: the people I was playing with were becoming infinitely better. Not just in terms of skill but in terms of being productive players, those with an active interest in helping everyone out and giving solid criticism on improving their play.
Initially most of that was due to me moving up the skill brackets, but there was still a noticeable amount of toxicity even at the highest skill levels. What really made the difference, however, was the introduction of communication bans, a soft ban mechanism that prevents a player from communicating directly with their team, limiting them to canned responses and map pings. Whilst the first week or two were marred with issues surrounding the system (although I do bet a few “issues” were people thinking they were in the right for abusing everyone), soon after the quality of my in-game experience improved dramatically. It’s even got to the point where I’ve had people apologize for losing their cool when it’s pointed out to them, something that has just never happened to me before in an online game.
It was then interesting to read about Microsoft’s new reputation system that they’ll be introducing with the Xbox One. Essentially there are three levels of reputation: “good players”, which comprise most of the gaming community; “needs attention”, a kind of warning zone that tells you that you’re not the saint your mother says you are; and finally “avoid me”, which is pretty self explanatory. It’s driven by an underlying score that centers on community feedback, so a group of jerks can’t instantly drop you to avoid me, nor can you simply avoid a game for a couple of months and have it reset on you. Additionally there’s a kind of credibility score attached to each player, so those who report well are given more weight than those who report anyone and everyone who looks at them the wrong way.
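To make the idea of credibility-weighted feedback concrete, here’s a minimal sketch of how such a score could work. Everything here is my own invention for illustration; Microsoft hasn’t published their actual model, so the function names, weights and tier thresholds are all assumptions.

```python
# Hypothetical sketch of a credibility-weighted reputation score.
# A report is a pair (reporter_accuracy, severity): a reporter whose past
# reports were usually upheld (accuracy near 1.0) moves the score far more
# than someone who reports anyone who looks at them the wrong way.

def reputation_score(reports, base=100.0):
    """Aggregate weighted reports into a single score (lower is worse)."""
    score = base
    for reporter_accuracy, severity in reports:
        score -= severity * reporter_accuracy
    return score

def reputation_tier(score):
    """Map a raw score onto the three tiers described above.
    The cutoffs here are arbitrary illustrative values."""
    if score >= 70:
        return "good player"
    if score >= 40:
        return "needs attention"
    return "avoid me"
```

The key property is the one the article describes: a pile-on from a group of low-credibility jerks barely dents the score, while a handful of reports from reliable reporters moves it quickly.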
Considering my experience with the similar system in DOTA2 I have pretty high hopes for the Xbox One’s reputation system going a fair way to improving the online experience on Xbox Live. Sure it won’t be perfect, no system ever is, but you’d be surprised how quickly people will change their behavior when they get hit with something that marks them as a negative impact on the community. There will always be those who enjoy nothing more than making everyone else’s online life miserable, but at least they’ll quickly descend into an area where they can play with like-minded individuals. That “avoid me” hell, akin to low priority in DOTA2, is a place that no one likes to be in for long and many are happy to pay the price of being a nice person in order to get out of it.
In the days before ubiquitous high-speed Internet, games that were only available when you were online were few and far between, with the precious few usually being MMORPGs. As time went on, however, and the world became more reliably connected, game developers sought to take advantage of this by creating much more involved online experiences. This also led to the development of some of the most insane forms of DRM that have ever existed, schemes where the game will constantly phone home in order to verify that the player is allowed to continue playing. The advent of cheap and ubiquitous Internet access, then, has been both a blessing and a curse to us gamers, and it may be much more of the latter for one particular type of game.
Way back when an Internet connection was considered something of a luxury, the idea of integrating any kind of online experience was something of a pipe dream. There was still usually some form of multiplayer, but that would usually be reserved for the hallowed times of LAN parties. Thus the focus of the game was squarely on the single player experience, as that would be the main attraction for potential gamers. This is not to say that before broadband arrived there was some kind of golden age of single player games (some of my favourite games of all time are less than 5 years old) but there definitely was more of a focus on the single player experience back then.
Today it’s much more common to see games with online components that are critical to the overall experience. For the most part this is usually some form of persistent multiplayer, which has been shown to be one of the most successful ways to keep players engaged with the game (and hence the brand) long after the single player experience has faded from memory. We can squarely lay the blame for this behaviour at big titles like Call of Duty and Battlefield, as most multiplayer systems are seeking to emulate the success those games enjoyed. However the biggest blow to single player games has come from something else: the requirement to go online just to be able to play at all.
Now I’m not specifically referring to always-on DRM, although that is in the same category, more the requirement now for many games to go online at least once before they let you play. For many of us this check comes in the form of a login to Steam before we’re able to play our games, and for others it’s built directly into the game, usually via a phone home to ensure that the key is still valid. Whilst there is usually an offline mode available, I’ve had quite a few issues trying to get it to work (and heard many similar stories), even when I still had an Internet connection with which to put them into said mode. For modern games, then, the idea that something is truly single player, a game that can be installed and played without the need for any external resources, is dead in the water.
This became painfully obvious when Diablo III, a game considered by many (including myself) to be a primarily single player experience, came with all the problems that are evident in games like MMORPGs. The idea that a single player experience required maintenance enraged many players, and whilst I can understand the reasons behind it I also share their frustration because it calls into question just how long these games will continue to exist in the future. Whilst Blizzard does an amazing job of keeping old titles running (I believe the old Battle.net for Diablo 1 is still up and running), many companies won’t care to keep the infrastructure going once all the profit has been squeezed out of a title. Some do give the courtesy of patching the games to function in standalone mode before that happens, but it’s unfortunately not common.
It’s strange to consider, then, that the true single player games of the Internet dark ages might live on forever whilst their progeny may not be usable a couple of years down the line. There’s a valid argument for companies not wanting to support things that simply cost them money and are only used by a handful of people, but that raises the question of why the game was developed with such heavy reliance on those features in the first place. Unfortunately it doesn’t look like this trend will be reversed any time soon, and our salvation in many cases will come from the dirty pirates who crack these systems for us at no cost. This cannot be relied on, however, and it should really fall to the game developers to have an exit strategy for games that they no longer want to support, should they want to keep the loyalty of their long time customers.
I’m not a terribly picky consumer. I mean there are particular shops and sellers I’ll favor for particular products (I get nearly all of my PC equipment from PC Case Gear, for instance) but if I’m looking for a one-off item I’ll usually go wherever I can find the best price from a reputable seller. If I don’t get a recommendation from a friend this usually has me shopping through sellers on eBay or through price aggregation sites, and for the most part I’ve never been led wrong with this. My most recent experience, one that involves the Australian retailer Kogan, wasn’t a particularly bad experience, but I feel that there are some things people need to know about them before they buy something through this online-only retailer.
So the item I was looking for was a Canon 60D to upgrade my ageing 400D that’s served me well for the past 5 years. I did the usual snoop through eBay and some other sites and found it could be had for around $900, shipping included. After doing a bit more searching I found it available from Kogan for a paltry $849 (and it has since dropped another $20) and even when combined with the shipping it came out on top. The rest of the items I was looking at (namely a Canon EF 24-105 F/4L lens, Canon Speedlite 430EX II and a 32GB SD card) were also all available from there for a pretty good price. All up I think I got all the kit for about $150 less than I would have through eBay, which is pretty amazing considering that I’ve struggled to find cheaper prices before.
I hit a hurdle with them when they requested a land line phone number they could call to verify the credit card information used in the transaction. I have a land line number but it’s not hooked up to anything (the only phone I’ve got seems to be broken as it doesn’t ring when I call it) as it’s just used for the Internet connection. I offered to forward this to my mobile if they needed it, but they instead just called me on my mobile directly. This isn’t the first time I’ve heard of people being asked for land lines to verify things (I gave a reference for a friend and they insisted on being given one) so I don’t know if they can do some kind of back-end verification that the number belongs to me, but even if they can, the same tech should work for mobiles as well. Anyway it was a small snag and it was just unfortunate that it meant my order didn’t get processed until the following Monday, no big deal.
Now since I ordered everything together I expected it all to come as one package, but that’s not the case with Kogan. I received my 4 items in 4 separate deliveries through 2 different shipping companies. I was lucky that my wife was at home studying for exams, because at any other time there would have been no one there to pick up all these different items. This wouldn’t have been too bad if they had all arrived on the same day, but the delivery time from first item to last spanned just over a week and a half, with the last item arriving yesterday (I placed the order on 01/06/2012). Considering that I’ve ordered similar items from Hong Kong before, the 400D being one of them, and have managed to receive them all at the same time, I found this piecemeal mailing approach rather annoying as I bought all the items to be used together and it wasn’t until yesterday that I had the completed package.
Looking at Kogan’s website you’d be forgiven for thinking that all their products were Australian versions until you get to the fine print at the bottom of the page. I’m not going to blame Kogan for this, they’re quite clear about the fact that anything that doesn’t carry the Kogan name will come from their HK branch, but it certainly does give the impression to the contrary. I’d like to think of myself as an observant person and I didn’t pick up on the fact that it would be coming from HK until I saw where it was being delivered from. This isn’t a bad thing per se, just something you should be aware of when you’re comparing them to similar sellers on eBay and the like.
Realistically had they shipped everything in one lot, even if it was a little late, I don’t think I’d be feeling as sour about my Kogan experience as I do now. I bought the items figuring that shipping wouldn’t take more than a week as I had an upcoming trip that the camera was intended for. Thankfully the trip was cancelled so I wasn’t left with half of the items that I wanted to take with me, but it could have just as easily gone the other way. I can probably see myself going back there for single items, possibly an extra battery for said camera, but for anything else I think I’ll be going elsewhere. This isn’t to say that you should though, but do take these points into consideration before making your purchase.
UPDATE: You should read my latest post on Kogan here as they’ve really improved the whole experience since I wrote this almost a year ago.
If you’ll allow me to get a little hipster for a second you’ll be pleased to find out that I’ve been into the whole Multiplayer Online Battle Arena (MOBA) scene since it first found its roots way back in Warcraft 3. Back then it was just another custom map that I played along with all the other customs I enjoyed, mostly because I suffered from some extreme ladder anxiety. Since then I’ve played my way through all of the DOTA clones that came out (Heroes of Newerth, League of Legends and even that ill-fated experiment from GPG, Demigod) but none of them captured me quite as much as the seemingly official successor, DOTA 2, has.
Defense of the Ancients 2 should be familiar to anyone who played the original DOTA or one of the many games that followed it. In a team of 5 you compete as individual heroes, chosen from a wide selection who all have unique abilities and uses, pushing up one of three lanes with a bunch of NPC creeps at your side. The ultimate goal is the enemy’s Ancient, a very well defended building that will take the concerted effort of all team members to reach and, finally, destroy. There are of course many nuances to what would, on the surface, seem to be a simple game and it’s these subtleties which make the game so engrossing.
When compared to its predecessor, which was limited by the graphics engine of WarCraft 3, DOTA2 stands out as a definite improvement. It’s not a graphical marvel, much like the rest of the MOBA genre, instead favoring heavily stylized graphics much like Blizzard does for many of their games. The recent updates to DOTA2 have seen some significant improvements over the first few initial releases, both in terms of in-game graphics and the surrounding UI elements. Valve appears to be heavily committed to ensuring DOTA2’s success and the graphical improvements are just the tip of the iceberg in this regard.
Back in the old days of the original DOTA the worst aspect of it was finding a game and then hoping that no one would drop out prematurely. There were many 3rd party solutions to this problem, most of which were semi-effective but were open to abuse and misuse, but none of them could solve the problem of playing a game with similarly skilled players. DOTA2, like nearly every other MOBA title, brings in a matchmaking system that will pair you up with other players and also brings with it the ability to rejoin a game should your client crash or your connection drop out.
Unfortunately, since DOTA2 is still in beta, the matchmaking system is not yet entirely working as I believe it’s intended to. It does make the process of finding, joining and completing a game much more streamlined, but it is blissfully unaware of how skilled a potential player is. What this means is that games have a tendency to swing wildly in one team’s favour, and unlike other games where this leads to a quick demise (thus freeing you up to play again) DOTA is instead a drawn out process, and should you decide to leave prematurely you’ll be hit with a dreaded “abandoned” mark next to your record. This is not an insurmountable problem though and I’m sure that future revisions of DOTA2 will address this issue.
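The standard remedy for skill-unaware matchmaking is some form of Elo-style rating, where each result nudges both sides’ ratings toward their demonstrated strength and the matchmaker then pairs people with similar numbers. A minimal sketch of the update rule (the K-factor of 32 is a conventional chess value, not anything Valve has documented):

```python
# Elo-style rating update, sketched to show how a matchmaker can track
# skill: an upset win moves ratings a lot, an expected win barely at all.

def expected_win(rating_a, rating_b):
    """Probability that A beats B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

def update(rating_a, rating_b, a_won, k=32.0):
    """Return (new_rating_a, new_rating_b) after one game."""
    exp_a = expected_win(rating_a, rating_b)
    delta = k * ((1.0 if a_won else 0.0) - exp_a)
    return rating_a + delta, rating_b - delta
```

For example, two evenly rated players at 1500 each trade 16 points on a result, while a 1600 player beating a 1400 player gains far less, which is exactly the property that lets the system converge on stable skill brackets over time.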
The core gameplay of DOTA2 is for the most part unchanged from back in the days of the original DOTA. You still get your pick from a very wide selection of heroes (I believe most of the AllStars roster is in there), the items have the same names and you still go through each of the main game phases (laning, pushing, ganking) as the game progresses. There have been some improvements to take away some of the more esoteric aspects of DOTA2 and for the most part they’re quite welcome.
Gone are the days where crafting items required either in-depth knowledge of what made what or squinting at the recipe text; instead you can click on the ultimate item you want to craft and see what items go in to make it. Additionally there’s a list of suggested items for your hero which, whilst not being entirely appropriate for every situation, will help to ease players into the game as they learn some of the more intricate aspects of iteming a character correctly. It’s still rather easy to draw the ire of players who think they know everything there is to know about certain characters (I’ll touch more on the community later) but at least you won’t be completely useless if you stick to the item choices the game presents for you.
Knowing which hero to pick is just as important as knowing how to item them, and thankfully there are some improvements to the hero-picking system that should make doing so a little easier for everyone. Whilst hero picking has always made delineations between int/str/agi based heroes, you can now also filter for the kind of role the character fills, like support, ganker or initiator. For public games though it seems everyone wants to play a carry (mostly because they’re the most fun) and there’s little heed paid to good group composition, but this is not a fault of the game per se; there is potential there for sexing up the lesser played types so that pub compositions don’t end up as carry-on-carry battles.
It’s probably due to the years of play testing that the original DOTA received, but the heroes of DOTA2 are fairly well balanced with no outright broken or overpowered heroes dominating the metagame. There are of course heroes that appear to be broken in certain situations (I had the pleasure of seeing Outworld Destroyer kill my entire team in the space of 10 seconds) but in reality it’s the player behind that character making them appear broken. This bodes well for the eSports scene that Valve is fostering around DOTA2 and they’re going to need to keep up this level of commitment if they want a chance of dethroning the current king, League of Legends.
The eSports focused improvements in DOTA2 are setting the bar for new game developers who have their eye on developing an eSports scene for their current and future products. The main login screen has a list of the top 3 spectated games and with a single click you can jump in and watch them with a 2 minute delay. This can be done while you’re waiting to join a game yourself, and once your game is ready to play you’re just another click away from joining in on the action. It’s a fantastic way for both newcomers and veterans of the genre to get involved in the eSports scene, but that’s just the start of it.
Replays can be accessed directly from a player’s profile or downloaded from the Internet. Game casters can embed audio directly into the replay, allowing users to watch it in game with the caster’s commentary. They can also watch the caster’s view of the game, use a free camera or use the built-in smart camera that will automatically focus on wherever the most action is happening. It’s a vast improvement over how nearly all other games do their replays and Valve really has to be commended for the work they’ve done here.
For all the improvements however there’s one thing that DOTA2 can’t seem to get away from and that’s its elitist, almost poisonous community that is very hostile to new players. Whilst the screenshot above is a somewhat tongue-in-cheek example of the behavior that besets the DOTA2 community, it still holds true that whilst many concessions have been made to make the game more palatable for newcomers, the DOTA2 community still struggles with bringing new players into the fold. League of Legends on the other hand cracked this code very early on, and its subsequent success is a testament to how making the game more inviting for new users is the ultimate way to drive the game forward. I don’t have an answer as to how to fix this (and whilst I say LoL cracked the code I’m not 100% sure their solution is portable to DOTA2) and it will be very interesting to see how DOTA2 develops in the shadow of the current MOBA king.
DOTA2 managed to engage me in a way that only one other game has managed to do recently and I believe there’s something to that. Maybe it’s a bit of nostalgia, or possibly my inner eSports fan wanting to dive deep into another competitive scene, but DOTA2 has really upped the MOBA experience that I first got hooked on all those years ago and that all the other titles in this genre failed to rekindle. I’d tell you to go out and buy it now but it’s still currently in beta, so if you can get your hands on a key I’d definitely recommend doing so, and if you’re new to this kind of game just ignore the haters; you won’t have to deal with them for long.
Defense of the Ancients 2 is currently in beta on PC. Approximately 60 hours of total game play were undertaken prior to this review with a record of 32 wins to 36 losses.
I love me some Sony products, but I’m under no delusion that their user experience can be, how can I put this, fantastically crap sometimes. For the most part their products are technologically brilliant (both the PS3 and the DSC-HX5V that I have fit that category) but the user experience outside that usually leaves something to be desired. This isn’t for a lack of trying, however, as Sony has shown that they’re listening to their customers, albeit only after they’ve been nagged about it for years beforehand. After spinning up my PS3 again for the first time in a couple of months to start chipping away at my backlog of console games, I feel like Sony needs another round of nagging in order to improve the current user experience.
The contrast between Sony’s and Microsoft’s ways of doing consoles couldn’t be more stark. Microsoft focused heavily on the online component of the Xbox, and whilst there might be a cost barrier associated with accessing it, Xbox Live still remains one of the most active online gaming networks to date. Sony on the other hand left access free to all to begin with and has only recently begun experimenting with paid access (the jury is still out on how successful that’s been). One of the most notable differences though is the updating process, a major source of tension for PS3 owners worldwide.
As I sat down to play my copy of Uncharted 3: Drake’s Deception I was first greeted with the “A system update is required” message in the top right hand corner of my TV. Since I wasn’t really planning to go online with this one just yet I figured I could ignore that and just get to playing the game. Not so, unfortunately, as it had been so long since I last updated that Uncharted 3 required an update to be applied before I could play it. Fair enough, I thought, and 15 minutes later I was all updated and ready to go. Unfortunately the game itself also had an update, pushing back my game time by another 5 minutes or so. This might not seem like a lot of time (and I know, #firstworldproblems) but the time taken was almost enough for me not to bother at all, and this isn’t the first time it has happened either.
Nearly every time I go to play my PS3 there is yet another update that needs to be downloaded either for me to get online or to play the game that I’m interested in playing. My Xbox on the other hand rarely has updates, indeed I believe there’s been a grand total of 1 since the last time I used it. Both of these approaches have their advantages and disadvantages but Sony’s way of doing it seems to be directly at odds with the primary use case for their device, something which doesn’t necessarily have to be that way. In fact I think there’s a really easy way to reduce that time-to-play lag to zero and it’s nothing radical at all.
Do the updates while the PS3 is turned off or not in use.
Right now the downloading of updates is a manual process, requiring you to go in and agree to the terms and conditions before it will start the downloads. Now I can understand why some people wouldn’t want automatic updating (and that’s perfectly valid) so there will have to be an option to turn it off. Otherwise it should be relatively simple to periodically boot the system into a low power mode and download the latest patches for both system and games that have been played on it. If such a low power mode isn’t possible then scheduling a full system boot at a certain time to perform the same actions would be sufficient. Then you can either have the user choose to automatically install them or keep the process as is from there on, significantly reducing the time-to-play lag.
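The scheme described above is simple enough to sketch in a few lines: respect the opt-out, download everything pending, and only install if the user opted in to that too. All class and function names here are invented for illustration; this is obviously not Sony’s firmware API.

```python
# Illustrative sketch of the background-update idea: wake periodically,
# download pending patches for the system and recently played games, and
# install them only if the user has opted in to automatic installs.

class Patch:
    """A pending system or game patch (hypothetical stand-in object)."""
    def __init__(self, name):
        self.name = name
        self.downloaded = False
        self.installed = False

    def download(self):
        self.downloaded = True

    def install(self):
        self.installed = True

def background_update(pending_patches, auto_updates_enabled, auto_install=False):
    """Download every pending patch; install only if the user opted in."""
    if not auto_updates_enabled:
        return []  # respect users who turned automatic updating off
    for patch in pending_patches:
        patch.download()
        if auto_install:
            patch.install()
    return pending_patches
```

Run on a schedule from a low-power mode (or a timed full boot), this turns the current twenty-minute wait into at most a single install prompt, which is the whole point of the suggestion.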
I have no doubt that this is a common complaint amongst many PS3 users, especially since it’s become the target of Internet satire. Implementing a change like this would go a long way to making the PS3 user base a lot happier, especially for those of us who don’t use it regularly. There’s also a myriad of other things Sony could do as well but considering how long it took them to implement XMB access in games I figure it’s best to work on the most common issue first before we get caught up in issue paralysis. I doubt this blog post will inspire Sony to make the change but I’m hopeful that if enough people start asking for it then one day we might see it done.
Adobe had also been quite stalwart in their support for Flash too, refusing to back down on their stance that they were “the way” to do rich content on the Internet. Word came recently however that they were stopping development on the mobile version of Flash:
Graphics software giant Adobe announced plans for layoffs yesterday ahead of a major restructuring. The company intends to cut approximately 750 members of its workforce and said that it would refocus its digital media business. It wasn’t immediately obvious how this streamlining effort would impact Adobe’s product line, but a report that was published late last night indicates that the company will gut its mobile Flash player strategy.
Adobe is reportedly going to stop developing new mobile ports of its Flash player browser plugin. Instead, the company’s mobile Flash development efforts will focus on AIR and tools for deploying Flash content as native applications. The move marks a significant change in direction for Adobe, which previously sought to deliver uniform support for Flash across desktop and mobile browsers.
Now the mobile version of Flash had always been something of a bastard child, originally featuring a much more cut down feature set than its fully fledged cousin. More recent versions brought them closer together but the experience was never quite as good especially with the lack of PC level grunt on mobile devices. Adobe’s mobile strategy now is focused on making Adobe AIR applications run natively on all major smart phone platforms, giving Flash developers a future when it comes to building mobile applications. It’s an interesting gamble, one that signals a fundamental shift in the way Adobe views the web.
Arguably the writing has been on the wall for this decision for quite some time. Back at the start of this year Adobe released Wallaby, a framework that gives advertisement developers the ability to convert Flash ads into HTML5. Indeed even back then I said that Wallaby was the first signal that Adobe thought HTML5 was the way of the future and was going to start transitioning towards it as its platform of the future. I made the point then that whilst Flash might eventually disappear Adobe wouldn’t, as they have a history of developing some of the best tools for non-technical users to create content for the web. Indeed there are already prototypes of such tools available, so it’s clear that Adobe is looking towards a HTML5 future.
The one place that Flash still dominates, without any clear competitors, is in online video. Their share of the market is somewhere around 75% (that’s from back in February so I’d hazard a guess that it’s lower now) with the decline being driven by mobile devices that lack support for Flash video. HTML5’s alternative is unfortunately still up in the air as the standards body struggles to find an implementation that can be open and unencumbered by patents yet still able to support things like Digital Rights Management. It’s this lack of standardization that will see Flash around for a good while yet, as until there’s an agreed upon standard that meets all those criteria Flash will remain the default choice for online video.
So it looks like the war that I initially believed Adobe would win has instead seen Adobe pursuing a HTML5 future. It’s probably for the best, as they will then be providing some of the best tools on the market whilst still supporting open standards, something that’s to the benefit of all users of the Internet. Hopefully that will also mean better performing web sites, as Flash had a nasty reputation for bringing even some of the most powerful PCs to their knees with poorly coded Flash ads. The next few years will be crucial to Adobe’s long term prospects but I’m sure they have the ability to make it through to the other end.
Whilst I might be an unapologetic Sony fan boy, even I can’t hide from their rather troubled past when it comes to customer relations. Of course everyone will remember their latest security incident, which saw millions of PSN accounts breached, but they’ve also had other fun incidents involving auto-installing root kits as copy protection and suing people into silence. Of course every corporation has its share of missteps, but Sony seems to have somewhat of a habit of getting themselves into hot water on a semi-regular basis. This week brings us another chapter in the saga that is the people vs the Sony corporation, but it’s not as bad as it first seems.
Last week saw Sony update their PSN agreement which happens with nearly every system update that the PlayStation 3 receives. However this time around there was a particular clause that wasn’t in there previously, specifically one that could prevent class action lawsuits:
Sony has been hit with a number of class-action lawsuits since the launch of the PlayStation 3, mostly due to the decision to retroactively remove Linux support from the console and losing the data of users due to questionable security practices. Sony has another solution to this problem beyond beefing up security (and it’s not retaining the features you paid for): if you accept the next mandatory system update, you sign away your ability to take part in a class-action lawsuit. The only option left for consumers if they agree is binding individual arbitration.
ANY DISPUTE RESOLUTION PROCEEDINGS, WHETHER IN ARBITRATION OR COURT, WILL BE CONDUCTED ONLY ON AN INDIVIDUAL BASIS AND NOT IN A CLASS OR REPRESENTATIVE ACTION OR AS A NAMED OR UNNAMED MEMBER IN A CLASS, CONSOLIDATED, REPRESENTATIVE OR PRIVATE ATTORNEY GENERAL LEGAL ACTION, UNLESS BOTH YOU AND THE SONY ENTITY WITH WHICH YOU HAVE A DISPUTE SPECIFICALLY AGREE TO DO SO IN WRITING FOLLOWING INITIATION OF THE ARBITRATION. THIS PROVISION DOES NOT PRECLUDE YOUR PARTICIPATION AS A MEMBER IN A CLASS ACTION FILED ON OR BEFORE AUGUST 20, 2011.
Accompanying that particular section is a clause that allows you to opt out of it, but to do so you have to send a snail mail letter to what I assume to be Sony’s legal department in Los Angeles. On the surface this appears to rule out any further class action suits that Sony might face in the future, at least in the majority of cases where people simply click through without reading the fine print. Digging through a couple of articles (and one insightful Hacker News poster) on it, however, I don’t think it’s all it’s cracked up to be; in fact it might have been wholly unnecessary for Sony to do in the first place.
The clause explicitly excludes small claims, which can be for up to thousands of dollars. Now I’ve never been involved in any class action suits myself, but the ones I’ve watched unfold online usually end with all affected parties receiving extremely small payoffs, on the order of tens or hundreds of dollars. If you take the Sony hacking case as an example, the typical out of pocket expenditure for a victim of identity theft was approximately $422 (in 2006), well below the small claims threshold. Considering that Sony already provided identity fraud insurance for everyone affected by the PSN hack, it seems like a moot point anyway.
Indeed the arbitration clause seems to be neither here nor there for Sony either, with the new clause binding both parties to the arbitrator’s decision, rendering them unable to contest it in a higher court. The arbitration can also occur anywhere in the USA, so people won’t have to travel to Sony in order to have their case heard. The clause also doesn’t affect residents of Europe or Australia, further limiting its reach. All in all it seems to tackle a very narrow band of potential cases, enough so that it barely seems necessary for Sony to have put it in at all.
Honestly, I feel it’s more that, given their track record, Sony has to be extremely careful with anything they do that could be construed as being against their consumers. The arbitration clause, whilst looking a lot like a storm in a teacup, just adds fuel to the ever burning flamewar that revolves around Sony being out to screw everyone over. Hopefully they take this as a cue to rework their PR strategy so that these kinds of incidents can be avoided in the future, as I don’t think their public image can take many more beatings like this.
The perception in the tech community, at least up until recently, was that Google simply didn’t understand social the way Twitter and Facebook do. The figures support this view too, with Facebook fast approaching 1 billion users and Twitter not even blinking an eye when Buzz came on the scene. Still, they’ve had some mild success with their other social products, so whilst they might not have been the dominant social platform I believe they get social quite well; they’re just suffering from the superstar effect that makes any place other than first look a lot like last. Google+ then represents something of a reinvention of their previous attempts, with a novel approach to modelling social interactions, and it seems to be catching on.
It’s only been 2 weeks since Google+ became available to the wider public and it’s already managed to attract an amazing 10 million users. Those users have also shared over 1 billion articles in the short time that G+ has been available. For comparison Buzz, which I can’t seem to find accurate user information on, shared 9 million articles in 2 days, a far cry from the success that G+ has been enjoying. What these numbers mean is that Google is definitely doing something right with the new platform and the users are responding in kind. However we’re still deep in the honeymoon period for Google+ and whilst their initial offering is definitely a massive step in the right direction we’ll have to wait and see if this phenomenal growth can continue.
That’s not to say the G+ platform doesn’t have the potential to do so, far from it. Right now the G+ platform stands alone in its own ecosystem with only a tenuous link to the outside world via the +1 button (which ShareThis has yet to implement, and I don’t want to install yet another button to get it). Arguably much of the success of G+’s rival platforms comes from their APIs, and with the initial user traction problem out of the way G+ is poised to grab an even larger section of the market once they release their own. I believe the API will be critical to the success of G+, and not just because that’s what their competitors did.
Google+, for me at least, feels like it would be the best front end to all my social activities on the web. Whilst there are many other services out there that have attempted to be the portal to online social networking, none of them have managed to capture my attention in quite the same way as G+ has. The circles feature of G+ is also very conducive to aggregation, as I could easily put all my LinkedIn contacts in Colleagues, Twitter in Following and Facebook friends in, well, the obvious place. Then my G+ stream would become the magical single pane of glass I’d go to for all my social shenanigans, and those who weren’t on G+ would still be connected to me through their network of choice.
That last point is key, as whilst G+’s growth is impressive it’s still really only hitting a very specific niche, mostly tech enthusiasts and early adopters. That’s not a small market by any stretch of the imagination, but since less than 20% of my social circle has made their way onto G+ from Facebook, the ability to communicate across platforms will be one of the drivers of growth for this platform. Whilst I’d love G+ to become the dominant platform it’s still 740 million users short of hitting that goal, and Facebook has a 7 year lead on them. It’s not impossible, especially with the kind of resources and smarts Google has to throw at the problem, but it’s not a problem that can be solved by technology alone.
Google+ is definitely on track to be a serious contender to Facebook, but it’s still very early days for the service. What’s ahead of Google is a long, uphill battle against an incumbent that’s managed to take down several competitors already and has established itself as the de-facto social network. Unlike their other social experiments before it, Google+ has the most potential to bring about change in the online social networking ecosystem, and with a wildly successful 2 weeks under their belt Google is poised to become a serious competitor, if not the one to beat.
It’s no secret that I’m not a believer in the iPad (or any tablet for that matter) as the herald of a new era in the world of media. Whilst I now have to admit that Apple has managed to take a product that’s already been done and popularise it to the point of mainstream adoption, I still remain wholly unconvinced that this new platform will change the way the media giants operate. Thus far all experiments with launching on this platform haven’t done well, but this could easily be due to them not working well in their traditional forms either. Then along comes The Daily, the brainchild of media giant Rupert Murdoch, which will be almost wholly confined to the iPad. With $30 million spent on research and development and a budget of $500,000 a day you’d think that this publication would have a real chance at beginning the media revolution, but I’m still not convinced.
You see whilst I might be coming around to the idea that this whole tablet craze might actually have something to it (I’m really taking a shine to the Motorola Xoom) the media industry has an absolutely terrible track record when it comes to adopting new forms of media. Whilst a new platform might be extremely popular if it conflicts with their way of doing business they are more likely to fight it than they are to try and innovate with it. Heck many of the traditional media outlets are still struggling to make their subscription based model work on the Internet and that hasn’t enjoyed the success they thought it would. Why then would the same model work for the iPad? From what I can see it doesn’t.
But don’t take my word for it (since I’m a biased source on this subject), take it from the many other people who are underwhelmed by Murdoch’s latest offering. From the videos and initial user reports it seems like The Daily is much like its print cousins, delivering news the day after it happens. They have managed to blend in a lot of social media elements (like Twitter streams and Facebook sharing) but the integration appears to be very weak, with the Twitter streams being half a day old and the link sharing giving only a small part of the article. In an age where social media thrives on the latest information, being even a day behind the news¹ means you’re way behind what everyone is interested in. There’s still a place for good journalism, however I don’t believe it’s on the iPad, at least not in the form that has been presented to us thus far.
One good thing to come out of this though is the addition to the iOS SDK that allows app developers to make use of the subscription framework that The Daily uses. It’s not a major change to the SDK, but it does give other publications and apps the ability to deliver additional paid content to an iOS device without having to prompt the user or send them through some weird web workflow.
Moreover, it seems that people are interested in crafting their own news feeds based around their mediums of choice. Twitter is arguably the medium for breaking news, with blogs coming in a close second and traditional media sources serving as verification once the story has broken. This is one of the core principles of the Internet in action: no matter how hard you try, time has shown that free access to a service is wildly more successful than a walled garden with a ticket price. Of course it’s still very early days for The Daily and the next few months will be crucial in judging the viability of the publication. Right now it doesn’t look good for them, but since they’re already $30 million in the hole I figure Murdoch is watching the reaction to his new publication closely, and if he’s smart there’ll be some radical changes coming soon.
¹Yes yes, it’s quite obvious that I’m usually several days (or weeks) behind when it comes to reporting stuff. This isn’t a news blog though, and being in the midst of media storms isn’t my thing, so you can keep your “how ironic” comments to yourselves.
There have been very few times in my online life when I’ve felt the need to go completely anonymous in order to voice my opinion or partake in an activity. Mostly that’s because I’ve got quite a bit invested in my online identity, and with that comes a certain amount of pride which I hope to carry with me during my online activities. I think the only times I can remember trying to be anonymous were when I wanted to pull a prank on someone or when I was voicing a controversial or against-the-groupthink opinion. Still, I recognise the need for a medium such as the Internet to facilitate completely anonymous communication, especially when it enables such great things as Wikileaks.
I remember back in the early days of the Internet I spent the vast majority of my time there under a pseudonym, purely because that was the way it was done back then. Indeed, sharing personal information across the wire seemed like a bit of a faux pas, as you couldn’t trust the people on the other end not to use it for nefarious purposes. Over time however I saw services begin to crop up that chipped away at this idea, encouraging their users to divulge some sort of personal information in order to get something in return. Blogs were a great example of this, with many of the blogging starlets being those who shared interesting stories about their lives, like Tucker Max or Outpost Nine. Still, for the majority there was a layer of anonymity between the writer and the reader, with many choosing not to reveal details that could identify them personally, keeping their online and offline presences happily separate.
A few years later we saw the beginnings of the current social Internet revolution. These services are based around the idea of mimicking the interactions we would have in everyday life, usually attempting to augment them as well. In order to facilitate such an idea any of the anonymity granted by the Internet has to be stripped away so that offline relationships can be replicated online. Such information also forms the basis of the revenue streams for those who provide these services, usually at no cost to the end user. In essence you’re trading your online anonymity (and by extension privacy) for the use of a service, effectively turning it into a currency.
Interestingly enough, your privacy doesn’t have a fixed cost; it’s quite relative to who you are. Heavy users of social networking tools are in essence costing the company providing the service more money than those who don’t use it as much. From a pure metric standpoint you could boil this down to bandwidth, storage space and potential incidents raised that need to be fixed by a member of your team. However those heavy users are also likely to have more personal data on your website, making them far more valuable than someone else. If you take the example of a celebrity on Twitter (as much as it pains me to say it, like Bieber and Lady Gaga), they are probably the biggest cost to you on a per user basis, but they’re also the most valuable. In essence one unit of their privacy currency is worth oodles more than someone like me.
Still the use of these services does not preclude you from going anonymous when you need to. If I really wanted to hide my tracks I could go to an Internet cafe in another city, encrypt my connection and pipe it through TOR and start blasting out information through all sorts of means without it ever being traced back to me. All the information about me online then would be less than useless, save for the fact that anyone attempting to trace me would figure out that I knew a thing or two about IT. Realistically even in this time of sharing almost too much information with the world there are still very few barriers to hiding yourself completely should the need arise.
I will admit though that the traditional means of being anonymous, which were usually an innate part of the service, have faded away. The Web 2.0 revolution’s focus on user generated content has meant that there are literally untold masses of information available, something which hasn’t gone unnoticed by the Internet giants:
“There was five exabytes [five billion gigabytes] of information created between the dawn of civilization through 2003,” he said. “But that much information is now created every two days, and the pace is increasing… People aren’t ready for the technology revolution that’s going to happen to them.
“If I look at enough of your messaging and your location, and use artificial intelligence, we can predict where you are going to go,” Schmidt said, somewhat unnervingly.
“Show us 14 photos of yourself and we can identify who you are. You think you don’t have 14 photos of yourself on the internet? You’ve got Facebook photos!”
For those who enjoyed the anonymous online life this means that, like it or not, there’s probably information on you out there on the Internet. Whilst we’re still a long way from being able to make sense of this data avalanche, the ever rapid advancement in computing technology means that one day we will. Peeling back the veil of anonymity will then be easier for those seeking to do so, but on the flip side that just encourages those who value their online anonymity to find better ways to fight back. In essence we have an arms race whose outcome I can’t fathom, but history has shown that a dedicated minority can put up one hell of a fight when cornered.
I guess I take an engineering perspective on online anonymity: it’s a tool to be used for certain problems. When the time comes that you need to do something online without it coming back to bite you, there are options available. I’m quite happy to trade some personal information for the use of a service I deem valuable, especially when most of it is a matter of public record anyway. In the end, whilst we might be seeing the end of our traditional views of online privacy and anonymity, the tales of its death are greatly exaggerated and it will remain a fundamental feature of the medium for as long as it functions.