If there’s one thing that turns an otherwise professional-looking document into a piece of horrifying garbage, it’s clip art. Back when computer graphics were still a nascent field, one populated by people with little artistic flair, clip art was the go-to source for images to convey a message. Today, however, with clip art’s failure to modernize in any way (mostly due to the users who desperately cling to its disgustingly iconic style), it’s become a trademark of documents that have had little to no thought put into them. Microsoft has been aware of this for some time, drastically reducing the amount of clip art present in Office 2010 and moving the entire library online in Office 2013. Now that library no longer contains any clip art at all; it simply points to Bing Images.
As someone who’s had to re-enable access to clip art more times than he’d have liked, I’m glad Microsoft has made this move. Whilst it won’t see everyone become a graphic designer overnight, it will force them to think long and hard about the images they’re putting into their documents. The limited set of images provided as part of clip art usually meant people would shoehorn multiple images together to convey what they were after, rather than attempting to create something in Visio or simply searching the Internet. Opening it up to the Bing Image search engine, which by default filters to images with the appropriate Creative Commons licensing, is obviously done in the hope that more people will use the service, although whether they will or not remains to be seen.
However, what’s really interesting about this is what it says about where Microsoft is looking to take its Office line of products in the near term. Most people wouldn’t know it, but Microsoft has been heavily investing in developing Office into a much more modern set of documentation tools, retaining their trademark backwards compatibility whilst making it far easier to create documents that are clean, professional and, above all, usable. The reason most people wouldn’t know about it is that their latest product, Sway, isn’t yet part of the traditional Office suite, but with Microsoft’s push to get everyone onto Office 365 I can’t see that being the case for long.
Sway is essentially a replacement for PowerPoint, yet another Microsoft product that’s been derided for its gaudy design principles and gross overuse in certain situations. However, instead of focusing just on slides and text, it’s designed to be far more interactive and interoperable, able to gather data from numerous different sources and present it in a format far more pleasing than any PowerPoint presentation I’ve seen. Unfortunately it’s still in closed beta for the time being, so I can’t give you my impressions of it (I’ve been on the waiting list for some time now), but suffice to say that if Sway is the future of Microsoft’s Office products then the ugly history of clip art might end up being just a bad memory.
It’s just more evidence that the Microsoft of today is nothing like the one of the past. Microsoft is still a behemoth of a company, one that’s more beholden to its users than it’d like to admit, but we’re finally starting to see some real innovation from them rather than their old strategy of embrace, extend, extinguish. Whether its users will embrace the new way of doing things or cling to the old will be the crux of Microsoft’s strategy going forward, but either way it’s an exciting time if you’re a Microsoft junkie like myself.
I’m not exactly what you’d call a fashionista, the ebbs and flows of what’s current often pass me by, but I do have my own style which I usually refresh on a yearly basis. More recently this has tended towards my work attire, mostly because I spend a great deal more time in it than I did previously. However, the act of shopping for clothes is one I like to avoid, as I find it tiresome, especially when trying to find the right sizes to fit my not-so-normal dimensions. Thus I’ve recently turned towards custom services and tailoring to get what I want in sizes that fit me but, if I’m honest, the online world still seems to be light years behind what I can get from the more traditional fashion outlets.
For instance, one of the most frustrating pieces of clothing for me to buy is business shirts. They usually fall short in one of my three key categories (length, sleeve length and fit in the midsection), so I figured getting some custom made would be a great way to go. I decided to lash out on a couple of shirts from two online retailers, Original Stitch and Shirts My Way, to see if I could get something that would tick all three categories. I was also going to review them against each other to see which retailer provided the better fit and would thus become my de facto supplier of shirts for the foreseeable future. However, upon receiving both shirts I was greeted with the unfortunate reality: they both sucked.
They got some things right, like the neck size and overall shirt length, however both seemed to be made for someone weighing about 40kg more than I do, the midsection being like a tent. Both also had ridiculously billowy sleeves, making my arms appear twice as wide as they should be. I kind of expected something like this from Original Stitch, since their measurements aren’t exactly comprehensive, but Shirts My Way suffered from the same issues even though I followed their guidelines exactly. Compared to the things I’ve had fitted or tailored in the past I was extremely disappointed, as I was expecting service as good or better.
The problem could be partially solved by technology, as 3D scanning could provide extremely accurate sizing that online stores could then incorporate to ensure you get the right fit the first time around. In fact I’d argue that there should be some kind of open standard for this, allowing the various companies to develop their own solutions that would be interoperable between different clothing retailers. That is something of a pipe dream, I know, but I can’t be the only person who has had this kind of frustration trying to get the right fit from online retailers.
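To make the idea concrete, here’s a rough sketch of what such a portable measurement profile might look like. The schema name and every field below are pure invention on my part, since no such industry standard actually exists:

```python
import json

# Purely illustrative: "open-fit" and all fields are hypothetical,
# invented for this sketch; no such standard exists today.
profile = {
    "schema": "open-fit/0.1",
    "source": "3d-scan",
    "units": "cm",
    "measurements": {
        "neck": 41.0,
        "chest": 102.0,
        "waist": 88.0,
        "sleeve": 66.0,
        "shirt_length": 79.0,
    },
}

def serialize(profile: dict) -> str:
    """One JSON document that any participating retailer could ingest,
    instead of each store asking for its own ad-hoc set of measurements."""
    return json.dumps(profile, indent=2)
```

The point isn’t the format itself but the interoperability: scan yourself once, then hand the same profile to Original Stitch, Shirts My Way or anyone else and get the same fit back.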
I guess for now I should just stick with the tried and true methods for getting the clothing that I want as the online experience, whilst infinitely more convenient, ultimately delivers a lacklustre product. I’m hopeful that change is coming although it’s going to take time for it to become widespread and I’m sure that there won’t be any standards across the industry for a long time after that. Maybe one day I’ll be able to order the right fits from the comfort of my own home but, unfortunately, that day is not today.
It’s pretty well known that the communities surrounding the MOBA genre, whether it be the original DOTA or the newer incarnations such as DOTA2, League of Legends or Heroes of Newerth, are spectacularly hostile. Indeed, in the beginning, when DOTA was just a custom map, many people relied on third-party ban services, many of which used user-generated lists to filter out bad players. Of course, being user generated, these led to a lovely hate cycle where people would simply ban whoever they felt like. Once past that barrier it didn’t get much better, with any request for help or misunderstanding of certain mechanics usually earning you a place on those lists. It was for this reason that many of us just stopped playing the original DOTA; the community around it was horrifically toxic.
There was hope that the newer entries into the MOBA scene would alleviate this somewhat, as a fresh platform would give the community a chance to reinvent itself. Unfortunately, at least in my experience with HoN (I only played three or four games of LoL), the same toxic community sprouted once again and I found myself wondering why I was bothering. DOTA2 started out the same way, with my first few games marred by similar experiences, but there was enough to the game to keep me coming back, and then something strange started to happen: the people I was playing with were becoming markedly better. Not just in terms of skill but in terms of being productive players, those with an active interest in helping everyone out and giving solid criticism on improving their play.
Initially most of that was due to me moving up the skill brackets; however, there was still a noticeable amount of toxicity even at the highest levels. What really made the change was the introduction of communication bans, a soft ban mechanism that prevents a player from communicating directly with their team, limiting them to canned responses and map pings. Whilst the first week or two was marred with issues surrounding the system (although I’d bet a few of those “issues” were people thinking they were in the right for abusing everyone), soon after the quality of my in-game experience improved dramatically. It’s even got to the point where I’ve had people apologize for losing their cool when it’s pointed out to them, something which has never happened to me before in an online game.
It was then interesting to read about the new reputation system Microsoft will be introducing with the Xbox One. Essentially there are three levels of reputation: “good players”, which comprises most of the gaming community; “needs attention”, a kind of warning zone that tells you you’re not the saint your mother says you are; and finally “avoid me”, which is pretty self explanatory. It’s driven by an underlying score centred on community feedback, so a group of jerks can’t instantly drop you to “avoid me”, nor can you simply avoid the game for a couple of months and have it reset. Additionally there’s a kind of credibility score attached to each player, so those who report well are given more weight than those who report anyone and everyone who looks at them the wrong way.
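Based purely on that description, a credibility-weighted reputation score could work something like the sketch below. All the names, numbers and thresholds here are my own guesses, not anything Microsoft has published:

```python
from dataclasses import dataclass

# Hypothetical sketch of a community-feedback reputation score.
# Every value and threshold is an assumption for illustration only.

@dataclass
class Player:
    reputation: float = 100.0   # new players start in the "good player" band
    credibility: float = 1.0    # how much weight this player's reports carry

def apply_report(target: Player, reporter: Player, severity: float = 1.0) -> None:
    """A report lowers the target's reputation, scaled by the reporter's
    credibility so serial reporters count for less."""
    target.reputation -= severity * reporter.credibility

def adjust_credibility(reporter: Player, report_upheld: bool) -> None:
    """Reports that get upheld raise credibility; frivolous ones lower it,
    down to a floor so no one's reports become completely worthless."""
    delta = 0.1 if report_upheld else -0.2
    reporter.credibility = max(0.1, reporter.credibility + delta)

def band(player: Player) -> str:
    """Map the underlying score onto the three visible reputation levels."""
    if player.reputation >= 70:
        return "good player"
    if player.reputation >= 40:
        return "needs attention"
    return "avoid me"
```

The key property is that a lone jerk with trashed credibility can barely move your score, while a pattern of upheld reports drags you steadily down through the bands.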
Considering my experience with the similar system in DOTA2, I have pretty high hopes that the Xbox One’s reputation system will go a fair way towards improving the online experience on Xbox Live. Sure, it won’t be perfect, no system ever is, but you’d be surprised how quickly people change their behaviour when they get hit with something marking them as a negative influence on the community. There will always be those who enjoy nothing more than making everyone else’s online life miserable, but at least they’ll quickly descend into an area where they can play with like minded individuals. That “avoid me” hell, akin to low priority in DOTA2, is a place no one likes to be in for long, and many are happy to pay the price of being a nice person to get out of it.
In the days before ubiquitous high speed Internet, games that were only available online were few and far between, with the precious few usually being MMORPGs. As time went on, and the world became more reliably connected, game developers sought to take advantage of this by creating much more involved online experiences. This also led to the development of some of the most insane forms of DRM that have ever existed, schemes where the game constantly phones home to verify that the player is allowed to continue playing. The advent of cheap and ubiquitous Internet access has thus been both a blessing and a curse to us gamers, and it may be much more of the latter for one particular type of game.
Way back when an Internet connection was considered something of a luxury, the idea of integrating any kind of online experience was something of a pipe dream. There was still usually some form of multiplayer, but that was reserved for the hallowed times of LAN parties. Thus the focus of a game was squarely on the single player experience, as that was the main attraction for potential gamers. This is not to say that before broadband arrived there was some kind of golden age of single player games (some of my favourite games of all time are less than 5 years old), but there was definitely more of a focus on the single player experience back then.
Today it’s much more common to see games with online components that are critical to the overall experience. For the most part this is some form of persistent multiplayer, which has shown itself to be one of the most successful ways of keeping players engaged with the game (and hence the brand) long after the single player experience has faded from memory. We can squarely lay the blame for this behaviour at big titles like Call of Duty and Battlefield, as most multiplayer systems seek to emulate the success those games enjoyed. However, the biggest blow to single player games has come from something else: the requirement to go online just to be able to play at all.
Now, I’m not specifically referring to always-on DRM, although that’s in the same category, but rather the requirement for many games to go online at least once before they’ll let you play. For many of us this check comes in the form of a login to Steam before we’re able to play our games; for others it’s built directly into the game, usually via a phone home to ensure the key is still valid. Whilst there’s usually an offline mode available, I’ve had quite a few issues trying to get it to work (and heard many similar stories), even when I still had an Internet connection with which to put them into said mode. For modern games, then, the idea that something is truly single player, a game that can be installed and played without the need for any external resources, is dead in the water.
This became painfully obvious when Diablo III, a game considered by many (including myself) to be a primarily single player experience, came with all the problems evident in MMORPGs. The idea that a single player experience requires maintenance enraged many players, and whilst I can understand the reasons behind it I also share their frustration, because it calls into question just how long these games will continue to exist. Whilst Blizzard does an amazing job of keeping old titles running (I believe the old Battle.net for Diablo 1 is still up and running), many companies won’t care to keep the infrastructure running once all the profit has been squeezed out of a title. Some do give the courtesy of patching the game to function in standalone mode before that happens, but unfortunately it’s not common.
It’s strange to consider, then, that the true single player games of the Internet dark ages might live on forever whilst their progeny may not be usable a couple of years down the line. There’s a valid argument for companies not wanting to support things that simply cost them money and are only used by a handful of people, but it raises the question of why the game was developed with such heavy reliance on those features in the first place. Unfortunately it doesn’t look like this trend will be reversed any time soon, and our salvation in many cases will come from the dirty pirates who crack these systems for us at no cost. That can’t be relied upon, however, and it should really fall to game developers to have an exit strategy for games they no longer want to support, should they want to keep the loyalty of their long time customers.
I’m not a terribly picky consumer. There are particular shops and sellers I’ll favour for particular products (I get nearly all of my PC equipment from PC Case Gear, for instance), but if I’m looking for a one-off item I’ll usually go wherever I can find the best price from a reputable seller. If I don’t get a recommendation from a friend this usually has me shopping through sellers on eBay or through price aggregation sites, and for the most part I’ve never been led wrong by this. My most recent experience, one involving the Australian retailer Kogan, wasn’t a particularly bad one, but I feel there are some things people need to know about them before they buy something through this online-only retailer.
So the item I was looking for was a Canon 60D, to upgrade the ageing 400D that’s served me well for the past 5 years. I did the usual snoop through eBay and some other sites and found it could be had for around $900, shipping included. After a bit more searching I found it available from Kogan for a paltry $849 (it has since dropped another $20), and even with shipping it came out on top. The rest of the items I was after (namely a Canon EF 24-105mm f/4L lens, a Canon Speedlite 430EX II and a 32GB SD card) were also available there for a pretty good price. All up I think I got the kit for about $150 less than I would have through eBay, which is pretty amazing considering I’ve struggled to find cheaper prices before.
I hit a hurdle when they requested a landline phone number they could call to verify the credit card information used in the transaction. I have a landline number but it’s not hooked up to anything (the only phone I’ve got seems to be broken, as it doesn’t ring when I call it); it’s just used for the Internet connection. I offered to forward it to my mobile if they needed it, but instead they just called my mobile directly. This isn’t the first time I’ve heard of people being asked for landlines to verify things (I gave a reference for a friend and they insisted on being given one), so I don’t know if they can do some kind of back-end verification that the number belongs to me, but even if they can the same tech should work for mobiles as well. Anyway, it was a small snag, and it was just unfortunate that it meant my order didn’t get processed until the following Monday. No big deal.
Now, since I ordered everything together I expected it all to come as one package, but that’s not the case with Kogan. I received my 4 items in 4 separate deliveries through 2 different shipping companies. I was lucky that my wife was at home studying for exams, because at any other time there would have been no one there to receive all these different items. This wouldn’t have been too bad if they’d all arrived on the same day, but the delivery time from first item to last spanned just over a week and a half, with the last arriving yesterday (I placed the order on 01/06/2012). Considering I’ve ordered similar items from Hong Kong before, the 400D being one of them, and managed to receive them all at the same time, I found this piecemeal mailing approach rather annoying, as I bought the items to be used together and it wasn’t until yesterday that I had the complete package.
Looking at Kogan’s website you’d be forgiven for thinking that all their products were Australian versions, until you get to the fine print at the bottom of the page. I’m not going to blame Kogan for this, as they’re quite clear that anything not carrying the Kogan name will come from their HK branch, but it certainly gives the impression to the contrary. I’d like to think of myself as an observant person and I didn’t pick up on the fact it would be coming from HK until I saw where it was being delivered from. This isn’t a bad thing per se, just something to be aware of when comparing them to similar sellers on eBay and the like.
Realistically, had they shipped everything in one lot, even if it was a little late, I don’t think I’d feel as sour about my Kogan experience as I do now. I bought the items figuring shipping wouldn’t take more than a week, as I had an upcoming trip the camera was intended for. Thankfully the trip was cancelled, so I wasn’t left with only half the items I wanted to take with me, but it could easily have gone the other way. I can see myself going back for single items, possibly an extra battery for said camera, but for anything else I think I’ll go elsewhere. That’s not to say you should too, but do take these points into consideration before making your purchase.
UPDATE: You should read my latest post on Kogan here as they’ve really improved the whole experience since I wrote this almost a year ago.
If you’ll allow me to get a little hipster for a second: I’ve been into the whole Multiplayer Online Battle Arena (MOBA) scene since it first found its roots way back in Warcraft 3. Back then it was just another custom map that I played along with all the other customs I enjoyed, mostly because I suffered from some extreme ladder anxiety. Since then I’ve played my way through all of the DOTA clones that came out (Heroes of Newerth, League of Legends and even that ill fated experiment from GPG, Demigod), but none of them captured me quite as much as the seemingly official successor, DOTA 2, has.
Defense of the Ancients 2 should be familiar to anyone who played the original DOTA or one of the many games that followed it. In a team of 5 you each play a single hero, chosen from a wide selection who all have unique abilities and uses, pushing up one of three lanes with a bunch of NPC creeps at your side. The ultimate goal is the enemy’s Ancient, a very well defended building that will take the concerted effort of all team members to reach and, finally, destroy. There are of course many nuances to what would, on the surface, seem to be a simple game, and it’s these subtleties which make it so engrossing.
Compared to its predecessor, which was limited by the graphics engine of Warcraft 3, DOTA2 stands out as a definite improvement. It’s not a graphical marvel, much like the rest of the MOBA genre, instead favouring heavily stylized graphics much like Blizzard does for many of their games. The recent updates to DOTA2 have seen some significant improvements over the initial releases, both in the in-game graphics and the surrounding UI elements. Valve appears heavily committed to ensuring DOTA2’s success, and the graphical improvements are just the tip of the iceberg in this regard.
Back in the old days of the original DOTA, the worst aspect of it was finding a game and then hoping no one would drop out prematurely. There were many third-party solutions to this problem, most of which were semi-effective but open to abuse and misuse, but none of them could solve the problem of playing with similarly skilled players. DOTA2, like nearly every other MOBA title, brings in a matchmaking system that pairs you up with other players, and also brings with it the ability to rejoin a game should your client crash or your connection drop out.
Unfortunately, since DOTA2 is still in beta, the matchmaking system is not yet working entirely as I believe it’s intended to. It does make the process of finding, joining and completing a game much more streamlined, but it seems blissfully unaware of how skilled a potential player is. What this means is that games have a tendency to swing wildly in one team’s favour, and unlike other games where this leads to a quick demise (thus freeing you up to play again), DOTA is instead a drawn out process, and should you decide to leave prematurely you’ll be hit with a dreaded “abandoned” mark on your record. This is not an insurmountable problem though, and I’m sure future revisions of DOTA2 will address it.
The core gameplay of DOTA2 is for the most part unchanged from the days of the original DOTA. You still pick from a very wide selection of heroes (I believe most of the AllStars roster is in there), the items have the same names and you still go through each of the main game phases (laning, pushing, ganking) as the game progresses. There have been some improvements that take away some of the more esoteric aspects of the original, and for the most part they’re quite welcome.
Gone are the days when crafting items required either in-depth knowledge of what made what or squinting at the recipe text; instead you can click on the ultimate item you want and see which items go into making it. Additionally there’s a list of suggested items for your hero which, whilst not entirely appropriate for every situation, helps ease players into the game as they learn some of the more intricate aspects of itemizing a character correctly. It’s still rather easy to draw the ire of players who think they know everything there is to know about certain characters (I’ll touch more on the community later), but at least you won’t be completely useless if you stick to the item choices the game presents to you.
Knowing which hero to pick is just as important as knowing how to item them, and thankfully there are some improvements to the hero picking system that should make doing so a little easier for everyone. Whilst hero picking has always delineated between int/str/agi based heroes, you can now also filter by the kind of role a character fills, like support, ganker or initiator. In public games, though, it seems everyone wants to play a carry (mostly because they’re the most fun) and little heed is paid to good group composition. This is not a fault of the game per se, but there is potential for sexing up the lesser played types so that pub compositions don’t end up as carry-on-carry battles.
It’s probably due to the years of play testing the original DOTA received, but the heroes of DOTA2 are fairly well balanced, with no outright broken or overpowered heroes dominating the metagame. There are of course heroes that appear broken in certain situations (I had the pleasure of seeing an Outworld Destroyer kill my entire team in the space of 10 seconds), but in reality it’s the player behind the character making them appear that way. This bodes well for the eSports scene Valve is fostering around DOTA2, and they’re going to need to keep up this level of commitment if they want a chance of dethroning the current king, League of Legends.
The eSports focused improvements in DOTA2 set the bar for developers who have their eye on building an eSports scene around their current and future products. The main login screen lists the top 3 spectated games, and with a single click you can jump in and watch them on a 2 minute delay. This can be done while you’re waiting to join a game yourself, and once your game is ready you’re just another click away from joining the action. It’s a fantastic way for both newcomers and veterans of the genre to get involved in the eSports scene, but that’s just the start of it.
Replays can be accessed directly from a player’s profile or downloaded from the Internet. Game casters can embed audio directly into the replay, allowing users to watch it in game with the caster’s commentary. They can also watch the caster’s view of the game, use a free camera, or use the built-in smart camera that automatically focuses on wherever the most action is happening. It’s a vast improvement over how nearly all other games do replays, and Valve really has to be commended for the work they’ve done here.
For all the improvements, however, there’s one thing DOTA2 can’t seem to get away from, and that’s its elitist, almost poisonous community that is very hostile to new players. Whilst the screenshot above is a somewhat tongue-in-cheek example of the behaviour that besets the DOTA2 community, it still holds true that, despite the many concessions made to make the game more palatable for newcomers, the community still struggles to bring new players into the fold. League of Legends, on the other hand, cracked this code very early on, and its subsequent success is a testament to how making the game more inviting for new users is the ultimate way to drive it forward. I don’t have an answer for how to fix this (and whilst I say LoL cracked the code, I’m not 100% sure their solution is portable to DOTA2) and it will be very interesting to see how DOTA2 develops in the shadow of the current MOBA king.
DOTA2 managed to engage me in a way that only one other game has recently, and I believe there’s something to that. Maybe it’s a bit of nostalgia, or possibly my inner eSports fan wanting to dive deep into another competitive scene, but DOTA2 has really upped the MOBA experience I first got hooked on all those years ago and failed to rekindle with all the other titles in this genre. I’d tell you to go out and buy it now, but it’s still in beta, so if you can get your hands on a key I’d definitely recommend doing so, and if you’re new to this kind of game just ignore the haters; you won’t have to deal with them for long.
Defense of the Ancients 2 is currently in beta on PC. Approximately 60 hours of total game play were undertaken prior to this review with a record of 32 wins to 36 losses.
I love me some Sony products, but I’m under no delusion that their user experience can be, how can I put this, fantastically crap sometimes. For the most part their products are technologically brilliant (both the PS3 and the DSC-HX5V I own fit that category), but the user experience around them usually leaves something to be desired. This isn’t for a lack of trying, however, as Sony has shown they’re listening to their customers, albeit only after years of nagging beforehand. After spinning up my PS3 for the first time in a couple of months to start chipping away at my backlog of console games, I feel Sony needs another round of nagging to improve the current user experience.
The contrast between Sony’s and Microsoft’s ways of doing consoles couldn’t be more stark. Microsoft focused heavily on the online component of the Xbox, and whilst there’s a cost barrier to accessing it, Xbox Live remains the most active online gaming network to date. Sony, on the other hand, left access free to all to begin with and has only recently begun experimenting with paid access (the jury is still out on how successful that’s been). One of the most notable differences, though, is the updating process, a major source of tension for PS3 owners worldwide.
As I sat down to play my copy of Uncharted 3: Drake’s Deception I was first greeted with the “A system update is required” message in the top right hand corner of my TV. Since I wasn’t planning to go online with this one just yet, I figured I could ignore it and get to playing the game. Not so, unfortunately, as it had been so long since I last updated that Uncharted 3 required the update to be applied before I could play it. Fair enough, I thought, and 15 minutes later I was all updated and ready to go. Unfortunately the game itself also had an update, pushing back my game time by another 5 minutes or so. This might not seem like a lot of time (and I know, #firstworldproblems), but it was almost enough for me not to bother at all, and this isn’t the first time it has happened.
Nearly every time I go to play my PS3 there’s yet another update that needs downloading, either for me to get online or to play the game I’m interested in. My Xbox, on the other hand, rarely has updates; indeed I believe there’s been a grand total of one since the last time I used it. Both approaches have their advantages and disadvantages, but Sony’s seems directly at odds with the primary use case for the device, and it doesn’t have to be that way. In fact I think there’s a really easy way to reduce that time-to-play lag to zero, and it’s nothing radical at all.
Do the updates while the PS3 is turned off or not in use.
Right now downloading updates is a manual process, requiring you to agree to the terms and conditions before the download will start. I can understand why some people wouldn’t want automatic updating (and that’s perfectly valid), so there would have to be an option to turn it off. Otherwise it should be relatively simple to periodically boot the system into a low power mode and download the latest patches for both the system and any games that have been played on it. If such a low power mode isn’t possible, then scheduling a full system boot at a certain time to perform the same actions would suffice. From there you could either have the updates install automatically or keep the process as it is, either way significantly reducing the time-to-play lag.
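In pseudo-code terms the idea is dead simple. The sketch below invents a catalogue, version numbers and function names purely for illustration; it bears no relation to Sony’s actual firmware internals:

```python
# Illustrative only: the catalogue and versions are invented to show
# the shape of the background-update idea, nothing more.
INSTALLED = {"system": "4.00", "Uncharted 3": "1.02"}
AVAILABLE = {"system": "4.11", "Uncharted 3": "1.03", "Journey": "1.00"}

def pending_updates(recently_played):
    """The system software plus any recently played game with a newer
    version available (naive string comparison is fine for this sketch)."""
    candidates = ["system"] + recently_played
    return [t for t in candidates
            if AVAILABLE.get(t, "0") > INSTALLED.get(t, "0")]

def background_update_pass(recently_played, auto_install=False):
    """Run once per scheduled low-power wake: fetch every pending patch
    so the next play session starts with no time-to-play lag."""
    fetched = []
    for title in pending_updates(recently_played):
        # the download would happen here; with auto-install on, apply it too
        if auto_install:
            INSTALLED[title] = AVAILABLE[title]
        fetched.append(title)
    return fetched
```

With `auto_install` off, patches would simply sit staged on disk for the user to confirm at next boot, which keeps the opt-out crowd happy while still killing the download wait.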
I have no doubt this is a common complaint amongst PS3 users, especially since it’s become the target of Internet satire. Implementing a change like this would go a long way towards making the PS3 user base a lot happier, especially those of us who don’t use it regularly. There’s a myriad of other things Sony could do as well, but considering how long it took them to implement XMB access in games I figure it’s best to tackle the most common issue before getting caught up trying to fix everything at once. I doubt this blog post will inspire Sony to make the change, but I’m hopeful that if enough people start asking for it we might one day see it done.
Adobe had also been quite stalwart in their support for Flash, refusing to back down from their stance that it was “the way” to do rich content on the Internet. Word came recently, however, that they were stopping development on the mobile version of Flash:
Graphics software giant Adobe announced plans for layoffs yesterday ahead of a major restructuring. The company intends to cut approximately 750 members of its workforce and said that it would refocus its digital media business. It wasn’t immediately obvious how this streamlining effort would impact Adobe’s product line, but a report that was published late last night indicates that the company will gut its mobile Flash player strategy.
Adobe is reportedly going to stop developing new mobile ports of its Flash player browser plugin. Instead, the company’s mobile Flash development efforts will focus on AIR and tools for deploying Flash content as native applications. The move marks a significant change in direction for Adobe, which previously sought to deliver uniform support for Flash across desktop and mobile browsers.
Now the mobile version of Flash had always been something of a bastard child, originally shipping with a much more cut down feature set than its fully-fledged desktop cousin. More recent versions brought them closer together, but the experience was never quite as good, especially given the lack of PC-level grunt on mobile devices. Adobe’s mobile strategy now focuses on making Adobe AIR applications run natively on all major smartphone platforms, giving Flash developers a future when it comes to building mobile applications. It’s an interesting gamble, one that signals a fundamental shift in the way Adobe views the web.
Arguably the writing has been on the wall for this decision for quite some time. Back at the start of this year Adobe released Wallaby, a framework that lets advertisement developers convert Flash ads into HTML5. Indeed even back then I said Wallaby was the first signal that Adobe thought HTML5 was the way of the future and would start transitioning towards it as their platform of choice. I made the point then that whilst Flash might eventually disappear, Adobe wouldn’t, as they have a history of developing some of the best tools for non-technical users to create content for the web. Indeed prototypes of such tools are already available, so it’s clear that Adobe is looking towards a HTML5 future.
The one place Flash still dominates, without any clear competitors, is online video. Flash’s share of that market is somewhere around 75% (a figure from back in February, so I’d hazard a guess it’s lower now) with the decline being driven by mobile devices that lack support for Flash video. HTML5’s alternative is unfortunately still up in the air as the standards body struggles to find an implementation that is open, unencumbered by patents and yet still able to support things like Digital Rights Management. It’s this lack of standardization that will see Flash around for a good while yet; until there’s an agreed-upon standard that meets all those criteria, Flash will remain the default choice for online video.
So the war I initially believed Adobe would win has instead seen them pursuing a HTML5 future. It’s probably for the best, as they can then provide some of the best tools on the market whilst still supporting open standards, something that benefits all users of the Internet. Hopefully that will also mean better performing websites, as Flash had a nasty reputation for bringing even some of the most powerful PCs to their knees with poorly coded ads. The next few years will be crucial to Adobe’s long term prospects, but I’m sure they have the ability to make it through to the other end.
Whilst I might be an unapologetic Sony fanboy, even I can’t hide from their rather troubled history of customer relations. Everyone will remember their latest security incident, which saw millions of PSN accounts breached, but they’ve also had other fun incidents involving auto-installing rootkits as copy protection and suing people into silence. Of course every corporation has its share of missteps, but Sony seems to have a habit of getting themselves into hot water on a semi-regular basis. This week brings us another chapter in the saga that is the people vs the Sony corporation, but it’s not as bad as it first seems.
Last week Sony updated their PSN agreement, something that happens with nearly every system update the PlayStation 3 receives. This time around, however, there was a clause that wasn’t there previously, specifically one that could prevent class action lawsuits:
Sony has been hit with a number of class-action lawsuits since the launch of the PlayStation 3, mostly due to the decision to retroactively remove Linux support from the console and losing the data of users due to questionable security practices. Sony has another solution to this problem beyond beefing up security (and it’s not retaining the features you paid for): if you accept the next mandatory system update, you sign away your ability to take part in a class-action lawsuit. The only option left for consumers if they agree is binding individual arbitration.
ANY DISPUTE RESOLUTION PROCEEDINGS, WHETHER IN ARBITRATION OR COURT, WILL BE CONDUCTED ONLY ON AN INDIVIDUAL BASIS AND NOT IN A CLASS OR REPRESENTATIVE ACTION OR AS A NAMED OR UNNAMED MEMBER IN A CLASS, CONSOLIDATED, REPRESENTATIVE OR PRIVATE ATTORNEY GENERAL LEGAL ACTION, UNLESS BOTH YOU AND THE SONY ENTITY WITH WHICH YOU HAVE A DISPUTE SPECIFICALLY AGREE TO DO SO IN WRITING FOLLOWING INITIATION OF THE ARBITRATION. THIS PROVISION DOES NOT PRECLUDE YOUR PARTICIPATION AS A MEMBER IN A CLASS ACTION FILED ON OR BEFORE AUGUST 20, 2011.
Accompanying that section is a clause that allows you to opt out of it, but only by sending a snail mail letter to what I assume is Sony’s legal department in Los Angeles. On the surface this appears to rule out any class action suits Sony might face in the future, at least in the majority of cases where people simply click through without reading the fine print. Digging through a couple of articles (and one insightful Hacker News poster) on it, however, I don’t think it’s all it’s cracked up to be; in fact it might have been wholly unnecessary for Sony to do in the first place.
The clause explicitly excludes small claims, which can run up to thousands of dollars. Now I’ve never been involved in a class action suit myself, but the ones I’ve watched unfold online usually end with the affected parties receiving extremely small payoffs, on the order of tens or hundreds of dollars. Taking the Sony hacking case as an example, the typical out of pocket expense for a victim of identity theft was approximately $422 (in 2006), much lower than the small claims threshold. Considering Sony already provided identity fraud insurance for everyone affected by the PSN hack, it seems like a moot point anyway.
Indeed the arbitration clause seems to be neither here nor there for Sony either, with the new clause binding both parties to the arbitrator’s decision and rendering them unable to contest it in a higher court. The arbitration can also occur anywhere in the USA, so people won’t have to travel to Sony in order to have their case heard. The clause also doesn’t affect residents of Europe or Australia, further limiting its reach. All in all it tackles a very narrow band of potential cases, enough that it barely seems necessary for Sony to have put it in at all.
Honestly I feel it’s more that, given their track record, Sony has to be extremely careful with anything they do that could be construed as anti-consumer. The arbitration clause, whilst looking a lot like a storm in a teacup, just adds fuel to the ever-burning flamewar that casts Sony as out to screw everyone over. Hopefully they take this as a cue to rework their PR strategies so that these kinds of incidents can be avoided in the future, as I don’t think their public image can take many more beatings like this.
The perception in the tech community, at least until recently, was that Google simply didn’t understand social the way Twitter and Facebook do. The figures support this view too, with Facebook fast approaching 1 billion users and Twitter not even blinking an eye when Buzz came on the scene. Still, they’ve had some mild success with their other social products, so whilst they might not have built the dominant social platform I believe they get social quite well; they’re just suffering from the superstar effect that makes any place other than first look a lot like last. Google+ then represents something of a reinvention of their previous attempts, with a novel approach to modelling social interactions, and it seems to be catching on.
It’s only been 2 weeks since Google+ became available to the wider public and it’s already managed to attract an amazing 10 million users. Those users have also shared over 1 billion articles in the short time that G+ has been available. For comparison Buzz, which I can’t seem to find accurate user figures for, shared 9 million articles in 2 days, a far cry from the success G+ has been enjoying. These numbers mean Google is definitely doing something right with the new platform and users are responding in kind. However we’re still deep in the honeymoon period for Google+, and whilst the initial offering is definitely a massive step in the right direction we’ll have to wait and see if this phenomenal growth can continue.
That’s not to say the G+ platform doesn’t have the potential to do so, far from it. Right now G+ stands alone in its own ecosystem, with only a tenuous link to the outside world via the +1 button (which ShareThis has yet to implement, and I don’t want to install yet another button to get it). Arguably much of the success of G+’s rival platforms comes from their APIs, and with the initial user traction problem out of the way G+ is poised to grab an even larger share of the market once its API is released. I believe the API will be critical to the success of G+, and not just because that’s what their competitors did.
Google+, for me at least, feels like it could be the best front end to all my social activities on the web. Whilst many other services have attempted to be the portal to online social networking, none have captured my attention in quite the same way G+ has. The Circles feature is also very conducive to aggregation: I could easily put all my LinkedIn contacts in Colleagues, Twitter in Following and Facebook friends in, well, the obvious place. My G+ stream would then become the magical single pane of glass I’d go to for all my social shenanigans, and those who weren’t on G+ would still be connected to me through their network of choice.
That last point is key, as whilst G+’s growth is impressive it’s still only really hitting a very specific niche, mostly tech enthusiasts and early adopters. That’s not a small market by any stretch of the imagination, but since less than 20% of my social circle has made their way onto G+ from Facebook, the ability to communicate across platforms will be one of the drivers of growth for this platform. Whilst I’d love G+ to become the dominant platform it’s still 740 million users short of that goal, and Facebook has a 7 year head start. It’s not impossible, especially with the kind of resources and smarts Google has to throw at the problem, but it’s not a problem that can be solved by technology alone.
Google+ is definitely on track to be a serious contender to Facebook, but it’s still very early days for the service. What’s ahead of Google is a long, uphill battle against an incumbent that has already taken down several competitors and established itself as the de facto social network. Unlike their social experiments before it, Google+ has real potential to bring about change in the online social networking ecosystem, and with a wildly successful 2 weeks under their belt Google is poised to become a serious competitor, if not the one to beat.