One of the peeves I had with the official Twitter client on Windows Phone 7, something I didn’t mention in my review of the platform, was that among the other things it’s sub-par at (it really is the poor bastard child of its iOS/Android cousins) it couldn’t display images in-line. To actually see any image you had to tap the tweet and then the thumbnail, which usually loaded the entire full-sized image, something that’s unnecessary on smaller screens. The official apps on other platforms were quite capable of loading appropriately sized images in the feed, which was a far better experience, especially considering it worked for pretty much all of the image sharing services.
Everyone knows there’s no love lost between Instagram and me but that doesn’t mean I don’t follow people who use it. As far back as I can remember their integration with the mobile apps has left something to be desired, especially if you wanted to view the full sized image, which usually redirected you to their atrocious web view. Testing it for this post showed that they’ve vastly improved that experience, which is great, especially considering I’m still on Windows Phone 7 which was never able to preview Instagram anyway, but it seems that this improvement may have come as part of a bigger play from Instagram trying to claw back their users from Twitter.
Reports are coming in fast that Instagram has disabled their Twitter card integration, which stops Twitter from being able to display the images directly in the feed like it has been doing since day 1. Whilst I don’t seem to be experiencing the issue that everyone is reporting (as you can see from the devastatingly cute picture above) there are many people complaining about this, and Instagram has stated that disabling this integration is part of their larger strategy to provide a better experience through their own platform. Part of that was improving the mobile web experience I mentioned earlier.
It’s an interesting move because for those of us who’ve been following both Twitter and Instagram for a while the similarities are startling. Twitter has been around for some 6 years and spent the vast majority of that time being extraordinarily open with its platform, encouraging developers far and wide to come and build on it. Instagram, whilst not as wide open as Twitter was, did similar things, integrating their product tightly with Twitter’s ecosystem whilst encouraging others to develop on it. Withdrawing from Twitter in favour of their own platform is akin to what Twitter did to potential client app developers, essentially signalling to everyone that it’s their way or the highway.
The cycle is eerily similar: both companies started out as small-time players with pretty dedicated fan bases (although Instagram grew like a weed in comparison to Twitter’s slow ride to the hockey stick) and then, after getting big, they began withdrawing all the things that made them great. Arguably much of Instagram’s growth came from its easy integration with Twitter, where many of the early adopters already had large followings, and without that I don’t believe they would’ve experienced the massive growth they did. Disabling this functionality seems like they’re shooting themselves in the foot with the intention of attempting some form of monetization eventually (that’s the only reason I can think of for trying to drive users back to the native platform) but I said the same thing about Twitter when they pulled that developer stunt, and they seem to be doing fine.
It probably shouldn’t be surprising that this is what happens when start-ups hit the big time, because at that point they have to start thinking seriously about where they’re going. For a giant site like Instagram that is yet to turn a profit from the service it provides it’s inevitable that they’d have to start fundamentally changing the way they do business, and this is most likely just the first step in wider sweeping changes. I’m still wondering how Facebook is going to turn a profit from this investment, as they’re $1 billion in the hole and there are no signs of them making that back any time soon.
Google+ has only been around for a mere 2 months yet I already feel like writing about it is old hat. In the short time that the social networking service has been around it’s had a positive debut to the early adopter market, seen wild user growth and even had to tackle some hard issues like its user name policy and user engagement. I said very early on that Google had a major battle on their hands when they decided to launch another volley at another Silicon Valley giant, but early indicators were pointing towards them at least being a highly successful niche product, if only for the fact that they were simply “Facebook that wasn’t Facebook”.
One of the things that was always lacking from the service was an API on the same level as its competitors’. Facebook and Twitter both have exceptional APIs that allow services to deeply integrate with them and, at least in the case of Twitter, are responsible in large part for their success. Google was adamant that an API was on the way and just under a week ago they delivered on their promise, releasing an API for Google+:
Developers have been waiting since late June for Google to release their API to the public. Well, today is that day. Just a few minutes ago Chris Chabot, from Google+ Developer Relations, announced that the Google+ API is now available to the public. The potential for this is huge, and will likely set Google+ on a more direct path towards social networking greatness. We should see an explosion of new applications and websites emerge in the Google+ community as developers innovate and make useful tools from the available API. The Google+ API at present provides read-only access to public data posted on Google+, and most of the Google+ API follows a RESTful design, which means that you must use standard HTTP techniques to get and manipulate resources.
Like all their APIs the Google+ one is very well documented and even the majority of their client libraries have been updated to include it. Looking over the documentation it appears there are really only two kinds of data available to developers at this point in time: public profiles (People) and public activities. Supporting these APIs is the OAuth framework, which lets users authorize external applications to access their data on Google+. In essence this is a read-only API for things that were already publicly accessible, which really only serves to eliminate the need to screen-scrape the same data.
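Since the API is RESTful, pulling someone’s public activities boils down to an HTTP GET against a documented endpoint. Here’s a minimal sketch of building such a request in Python; the user ID and API key below are placeholders, not real credentials, and the endpoint path reflects the v1 documentation at launch:

```python
import urllib.parse

# Base endpoint for the read-only Google+ REST API (v1, as documented at launch).
BASE = "https://www.googleapis.com/plus/v1"

def public_activities_url(user_id, api_key, max_results=10):
    """Build the URL for fetching a user's public activity stream."""
    query = urllib.parse.urlencode({"key": api_key, "maxResults": max_results})
    return "{0}/people/{1}/activities/public?{2}".format(BASE, user_id, query)

# Placeholder values; a real call would then be a simple authenticated GET
# (e.g. via urllib.request) returning JSON.
url = public_activities_url("USER_ID", "YOUR_API_KEY")
print(url)
```

The response is plain JSON, which is why existing client libraries could be updated so quickly: there’s no custom protocol to speak, just HTTP and standard serialisation.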
I’ll be honest, I’m disappointed in this API. Whilst there are some useful things you can do with this data (like syndicating Google+ posts to other services and reader clients) the things that I believe Google+ would be great at aren’t possible until applications can be given write access to my stream. Now this might just be my particular use case, since I usually use Twitter for my brief broadcasts (which is auto-syndicated to Facebook) and this blog for longer prose (which is auto-shared to Twitter), so my preferred method of integration would be to have Twitter post to my Google+ feed. As it is right now my Google+ account is a ghost town compared to my other social networks, simply because of the lack of automated syndication.
Of course I understand that this isn’t the final API, but even as a first attempt it feels a little weak.
Whilst I won’t go as far as to say that Google+ is dying there is data to suggest that the early adopter buzz is starting to wind down. Anecdotally my feed seems to mirror this trend, with the average time between posts there being days rather than the minutes it is on my other social networks. The API would be the catalyst required to bring that activity back up to those initial levels but I don’t think it’s capable of doing so in its current form. I’m sure that Google won’t be a slouch when it comes to releasing new APIs, but they’re going to have to be quick about it if they want to stem the slide into inactivity.
I really want to use Google+, I really do; it’s just the lack of interoperability that keeps all my data out of it. I’m sure in the next couple of months we’ll see the release of a more complete API that will let me use the service as I, and many others I feel, use our other social networking services.
Technological innovations, you know those things that are supposed to make our lives easier, usually end up becoming the bane of our existence not too long after they’ve lost their novelty. I can’t tell you how many times people have said that they’ve lost control of their email inbox or how they’re constantly distracted by people trying to contact them over the phone, damning the technology for allowing people to interrupt whatever the heck it was they were doing. What amuses me though is I use many of the same technologies that they do yet I don’t feel the same level of pressure that they do, leading me to wonder what the heck they’re complaining about.
Now I’m not saying that email, IM, Twitter et al. are not distracting; indeed our techno-centric culture is increasingly skewed towards being a distracted one by a veritable tsunami of communications tools. I myself struggled with Twitter not too long ago when I attempted to use it the “proper” way over a weekend, seeing my productivity hit the floor as I struggled to strike a balance between my level of engagement and the amount of work I got done. However I soon realised that using said service in the proper way meant that I just ended up as distracted as everyone else, with almost zero benefit to me other than the small bit of self-satisfaction that I was totally doing this social media thing right for a change.
In essence I feel that the reason people get so distracted by these tools is that they feel obligated to respond to them immediately, rather than at a time which suits them best. Thus the tool which is meant to help your productivity becomes a burden, interrupting you at the worst possible time and breaking you out of the flow of the work you were doing. If you find yourself in this position you need to set up strict rules for interacting with that particular technology that suit you rather than everyone else. How you go about this is left as an exercise for the reader, but the most effective tool (I’ve found, at least) is to only check your email/Twitter/whatever at certain times during the day and ignore it at all other times.
The retort I usually get for advocating this kind of stance is “What if something important happens in the interim?”. Thinking really hard about it I can’t think of anything really important that’s come to me via email, IM or Twitter that didn’t first reach me through some more direct means (like my phone). If you’re relying on these distinctly one-way platforms, with no way to verify that the person has actually received your message, then the message you’re sending can’t really be all that important and can wait a few hours for a response. If it can’t then use some more direct means of communicating, otherwise you’re just forcing people into the same technological hell that you yourself feel trapped in, continuing a vicious cycle that just doesn’t need to exist.
However sometimes people are just looking for a scapegoat for their situation and it’s far easier to blame a faceless technology than it is to look internally and work out why they’re so distracted. I can kind of sort of understand people getting caught up with communications clients, especially when it’s part of your job, but when you think something like RSS is too distracting (you know, where you choose to subscribe to a site because you’re interested in it) then the problem isn’t the technology, it’s your inability to recognize that you’re wasting time. I get literally hundreds of items in my RSS reader every day but do I read them all? Heck no; at most I’ll skim the titles and if I recognize a story I’ve already read then I won’t go back and read it again.
Just seems like common sense to me.
It’s also not helped by the fact that many of us now carry our distractions with us. My phone has all the distraction capability of a modern PC and if it weren’t for my strict rules about only checking things at certain times I’m sure I’d be in the same distraction hell as everyone else. Of course even though the platform may be different the same rules apply; it’s the feeling of obligation that drives us to distraction when realistically the obligation doesn’t exist, and we’re just slotting into a social norm that ends up wreaking havoc.
Thus all I’m advocating is taking back control of the technology rather than letting it control us. All of these distractions are tools to be used to our advantage, and the second they stop being helpful we need to step back and question whether we should change the way we use them. Otherwise we just end up serving the tools we meant to use, blaming them for problems we in fact caused ourselves.
The perception in the tech community, at least up until recently, was that Google simply didn’t understand social the way Twitter and Facebook do. The figures support this view too, with Facebook fast approaching 1 billion users and Twitter not even blinking an eye when Buzz came on the scene. Still they’ve had some mild success with their other social products, so whilst they might not have had the dominant social platform I believe they get social quite well; they’re just suffering from the superstar effect that makes any place other than first look a lot like last. Google+ then represents something of a reinvention of their previous attempts with a novel approach to modelling social interactions, and it seems to be catching on.
It’s only been 2 weeks since Google+ became available to the wider public and it’s already managed to attract an amazing 10 million users. Those users have also already shared over 1 billion articles in the short time that G+ has been available. For comparison Buzz, which I can’t seem to find accurate user information on, shared an impressive 9 million articles in 2 days, a far cry from the success that G+ has been enjoying. What these numbers mean is that Google is definitely doing something right with the new platform and users are responding in kind. However we’re still deep in the honeymoon period for Google+ and whilst their initial offering is definitely a massive step in the right direction we’ll have to wait and see if this phenomenal growth can continue.
That’s not to say the G+ platform doesn’t have the potential to do so, far from it. Right now the G+ platform stands alone in its own ecosystem with only a tenuous link to the outside world via the +1 button (which ShareThis has yet to implement and I don’t want to install yet another button to get it). Arguably much of the success of G+’s rival platforms comes from their APIs and, with the initial user traction problem out of the way, G+ is poised to grab an even larger section of the market once they release their API. I believe the API will be critical to the success of G+ and not just because that’s what their competitors did.
Google+, for me at least, feels like it would be the best front end to all my social activities on the web. Whilst there are many other services out there that have been attempting to be the portal to online social networking none of them have managed to capture my attention in quite the same way as G+ has done. The circles feature of G+ is also very conducive to aggregation as I could easily put all my LinkedIn contacts in Colleagues, Twitter in Following and Facebook friends in well, the obvious place. Then my G+ stream would become the magical single pane of glass I’d go to for all my social shenanigans and those who weren’t on G+ would still be connected to me through their network of choice.
That last point is key as whilst G+’s growth is impressive it’s still really only hitting a very specific niche, mostly tech enthusiasts and early adopters. That’s not a small market by any stretch of the imagination but, since less than 20% of my social circle has made their way onto G+ from Facebook, the ability to communicate across platforms will be one of the drivers of growth for this platform. Whilst I’d love G+ to become the dominant platform it’s still 740 million users short of hitting that goal and Facebook has a 7 year head start. It’s not impossible, especially with the kind of resources and smarts Google has to throw at the problem, but it’s not a problem that can be solved by technology alone.
Google+ is definitely on track to be a serious contender to Facebook but it’s still very early days for the service. What’s ahead of Google is a long, uphill battle against an incumbent that’s managed to take down several competitors already and has established itself as the de facto social network. Unlike their other social experiments before it, Google+ has the most potential to bring about change in the online social networking ecosystem and, with a wildly successful 2 weeks under their belt, Google is poised to become a serious competitor, if not the one to beat.
If there’s one thing that the search giant Google doesn’t seem to be able to get right it’s social networking. This isn’t for lack of trying however: Google Latitude is one of the most popular location-based social networking applications out there and Orkut, their first social network, is still going strong with over 100 million users. However Orkut is still a far cry from what Facebook has become and Buzz has come nowhere near touching Twitter as a platform, even with the advantage of being right up in every Gmail user’s face. Google isn’t one to take things like this lightly and rumors have been swirling for a long time that they were prepping a new product that would directly compete with the social networking starlets.
Today they announced Google+.
In essence it’s yet another social network, but it seems to combine aspects from all the hot start-up ideas of the past couple of years (group messaging, video chat, social recommendations, filtered photos) with a user experience that feels distinctly non-Googlesque. Whilst the product isn’t available for people to use right now you can put your name and email address in here to get added sometime in the future. The screenshots I’ve been able to get my hands on have definitely piqued my interest in the product, not least because of some of the features.
The first concept that I like, and one that had been talked about extensively prior to the announcement, was the Circles feature. Basically it lets you create groups of people out of your greater social network for sharing things like pictures and status updates. It’s a different paradigm to that of groups within Facebook since they’re only visible to you. It’s a great way of getting around that whole limited profile thing you have to laboriously set up within Facebook to make sure that you don’t inadvertently share something to people you didn’t want to. Grouping people up by interests is great too since I’m sure that not everyone is interested in the same things that I am.
The media sharing aspect sounds interesting too, with Google saying it will be heavily integrated with mobile. In essence every picture or movie you take can be automatically uploaded to Google+, although it remains hidden until you choose to share it. Their image editor apparently includes Instagram-like photo filters for those of us who think that makes them some kind of artist, which is great but I feel is only there because that whole filtered photo thing is so hot right now. Google+ also has what they call Hangouts, basically video chat rooms that up to 10 friends can join. Hopefully that product doesn’t strictly require video to work, as it would be great to get an upgrade to Google Talk.
However after looking at what Google+ has to offer I started thinking about what I’d be using it for. I’d love to start using it in place of Facebook but unfortunately my entire social network is already on there and apart from the technically curious among them I can’t see any of them bothering to make the transition across to Google+. This means for Google+ to be any use to me it will need to have some pretty heavy duty integration with Facebook (and probably Twitter) in order for me to use it for any length of time. Google has been mum on the details of how deep the integration with existing social networks will go so we’ll just have to wait and see how they tackle this issue.
Like any new Google product it’s always interesting to see what kinds of innovations they bring to the table. Whilst nothing revolutionary in itself Google+ does show that Google is taking the whole social idea very seriously now and is looking to capitalize on many current trends in order to draw people to its platform. Whether or not this will lead to Google+ becoming a successful social network to rival that of Facebook and Twitter remains to be seen but I’ve already put my hand up to be one of the first to try out their latest offering, and I know I’m not alone in that regard (since the page refused to load twice when I tried to sign up).
A company is always reliant on its customers; they’re the sole reason it continues to exist. For small companies customers are even more critical, as losing one is far more likely to cause problems than when a larger company loses one of theirs. Many recent start-ups have hinged on their early adopters not only being closely tied to the product, forming a shadow PR department, but also on many of them being hobbyist developers, providing additional value to the platform at little to no cost. Probably the most successful example of this is Twitter, whose openness with their API fostered the creation of many features (retweets, @ replies, # tags) that the company itself had never envisioned. It seems however that they think the community has gone far enough, and they’re willing to take it from here.
It was about two weeks ago when Twitter updated their terms of service and guidelines for using their API. The most telling part about this was the section that focused on Twitter clients where they explicitly stated that developers should no longer focus on making new clients, and should focus on other verticals:
The gist of what Sarver said is this: Twitter won’t be asking anyone to shut down just as long as they stick within the required API limits. New apps can be built but it doesn’t recommend doing so as it’s ‘not good long term business’. When asked why it wasn’t good long term business, Sarver said because “that is the core area we’re investing in. There are much bigger, better opportunities within the ecosystem”
Sarver insists this isn’t Twitter putting the hammer down on developers but rather just “trying to be as transparent as possible and give the guidance that partners and developers have been asking for.”
To be honest with you they do have a point. If you take a look at the usage breakdown by client type you’ll notice that 43% of Twitter’s usage comes from non-official apps, and diving into that shows that the vast majority of unofficial clients don’t drive much traffic, with 4 apps claiming the lion’s share of it. A developer looking to create a new client would be running up against a heavy bit of inertia trying to differentiate themselves from the pack of “Other Apps” that make up the 24% of Twitter’s unofficial app usage, but that doesn’t mean someone might not be capable of actually doing it. Hell, the official client wasn’t even developed by Twitter in the first place; they just bought the most popular one and made it free for everyone to use.
Twitter isn’t alone in annoying its loyal developer following. HTC recently debuted one of their new handsets, the Thunderbolt. Like many HTC devices it’s expected that there will be a healthy hacking scene around the new device, usually centered on the xda-developers board. That site has proved invaluable to the HTC brand, and I know I stuck with my HTC branded phones for much longer than I would have otherwise thanks to the hard work those guys put in. However this particular handset is by far one of the most locked down on the market, requiring all ROMs to be signed with a secret key. Sure they’ve come up against similar things in the past but this latest offering seems to be a step above what they normally put in, signalling a shot across the bow of those who would seek to run custom firmware on their new HTC.
In both cases these companies had solid core products that the community was able to extend upon which provided immense amounts of value that came at zero cost to them. Whilst I can’t attribute all the success to the community it’s safe to say that the staggering growth that these companies experienced was catalyzed by the community they created. To suddenly push aside those who helped you reach the success you achieved seems rather arrogant but unfortunately it’s probably to be expected. Twitter is simply trying to grab back some of the control of their platform so they can monetize it since they’re still struggling to make decent revenues despite their huge user base. HTC is more than likely facing pressure from carriers to make their handsets more secure, even if that comes at the cost of annoying their loyal developer community.
Still, in both these situations I feel like there would have been a better way to achieve the goals they sought without poisoning the well that once sustained them. Twitter could easily pull a Facebook maneuver and make all advertising come through them directly, which they could do via their own in-house system or by simply buying a company like Ad.ly. HTC’s problem is a little more complex but I still can’t understand why the usual line of “if you unlock/flash/hack it, your warranty’s void” wasn’t enough for them. I’m not about to say that these moves signal the downfall of either company but it’s definitely not doing them any favors.
Betas are a tricky thing to get right. Realistically when you’re testing a beta product you’ve got a solid foundation of base functionality that you think is ready for prime time, but you want to see how it’ll fare in the wild as there’s no way for you to catch all the bugs in the lab. Thus you’d want your product to get into the hands of as many users as you possibly could, as that gives you the best chance of catching anything before you go prime time. Many companies now release beta versions of upcoming software for free to the general public in order to do this, and for many of them it’s proven to work quite well. However more recently I’ve seen beta testing used as a way to promote a product rather than test it, and the main way they do that is through artificial scarcity.
Rewind back to the yonder days of 2004 and you’ll find me happily slogging away at my various exploits when a darkness forms on the horizon: World of Warcraft. Having seen many of the gameplay videos and demos I was enamoured with the game long before it hit the retail shelves. You can then imagine my elation when I found out there was a competition for a treasured few closed beta invitations; not 10 minutes later I had entered. As it turns out I got in and promptly spent the next fortnight playing my way through the game and revelling in the new-found exclusivity that it had granted me. Being a closed beta tester was something rather special and I spoke nothing but praise to all my friends about this upcoming game.
Come back to the present day and we can draw parallels with the phenomenon that is #newtwitter. Starting out on the iPad as the official Twitter client, #newtwitter is an evolution in the service that Twitter is delivering, offering deeper integration with services that augment it and significantly jazzing up the interface. Initially it was only available to a select subset of the wider Twitter audience and, strangely enough, most of them appeared to be either influential Twitter users or those in the technology media. The reviews of the new Twitter client were nothing short of glowing and as the client has made its way around to more of the waiting public people have been more than eager to get their hands on it. Those carefully chosen beta testers at the start helped form a positive image that’s kept any negativity at bay, even with their recent security problems.
This is in complete contrast to the uproar that was felt when Facebook unveiled its new user interface at the end of last year. Unlike the previous two examples the new Facebook interface was turned on all at once for every single user that visited the site. Immediately following this millions of users cried out in protest, despising the new design and the amount of information that was being presented to them. Instead of the new Facebook being something cool to be in on it proved to be enough of an annoyance to a group of people to cause a stir about it, rather than sing its praises.
The difference lies in the idea of artificial scarcity. You see, there really wasn’t anything stopping Blizzard or Twitter from releasing their new product onto the wider world all at once as Facebook did; however, the staged approach was advantageous to them for numerous reasons. For both it allowed them to get a good idea of how their product would work in the wild and catch any major issues before release. Additionally the exclusivity granted to those few souls who got the new product early put them on a pedestal, something to be envied by those who were doing without. Thus the product that was already desirable becomes even more so because not everyone can have it. Doing a gradual release also ensures that that air of exclusivity remains long after the product reaches the wider world, as can be seen with #newtwitter.
I say all this because honestly, it works. As soon as I heard about #newtwitter I wanted in on it (mostly because it would be great blog fodder) and the fact that I couldn’t do anything to get it just made me want it all the more. I’ve also got quite a few applications on my phone that I signed up for simply because of the mystery and exclusivity they had, although I admit the fascination didn’t last long for them. Still the idea of a scarce product seems to work well even in the digital age where such restrictions are wholly artificial. Just like when, say, someone posts a teaser screenshot on Facebook sans URL to an upcoming web application.
I’m sure most of you knew what I was up to anyway 😉
There were so many times when I was coding up early versions of Lobaco that I didn’t give any thought to security. Mostly it was because the features I was developing weren’t really capable of divulging anything that wasn’t already public, so I happily kept on coding, leaving the tightening up of security for another day. Afterwards I started using some of the built-in authentication services available with Windows Communication Foundation but I realised that whilst it was easy to use with the Silverlight client it wasn’t really designed for anything that wasn’t Windows based. After spending a good month off from programming what would be the last version of Geon I decided that I would have to build my own services from the ground up, and with that my own security model.
You’d think with security being such a big aspect of any service that contains personal information about users that there would be dozens of articles about it. Well, there are, but none of them were particularly helpful and I spent a good couple of days researching various authentication schemes. Finally I stumbled upon this post by Tim Greenfield, who laid out the basics of what has now become the authentication system for Lobaco. Additionally he made the obvious (but oh so often missed) point that when you’re sending any kind of user name and password over the Internet you should make sure it’s done securely using encryption. Whilst that was a pain in the ass to implement it did mean that I could feel confident about my system’s security and could focus on developing more features.
However, when it comes down to the crunch, new features will often beat security in terms of priority. There were so many times I wanted to just go and build a couple of new features without adding any security to them. The end result was that whilst I got them done they had to be fully reworked later to ensure that they were secure. Since I wasn’t really working under any deadline this wasn’t too much of a problem, but when new features trump security all the way to release you run the risk of releasing code into the wild that could prove devastating to your users.
No example of this has been more prominent than the recent security issues that have plagued the popular micro-blogging service Twitter. Both of them come hot on the heels of the new Twitter website released recently, which enables quite a bit more functionality and with it the potential to open up holes for exploitation. The first was intriguing as it basically allowed someone to force the user’s browser to execute arbitrary JavaScript. Due to Twitter’s character limit the impact this could have was minimised, but it didn’t take long before malicious attackers got hold of it and used it for various nefarious purposes. This was a classic example of something that could have easily been avoided had they sanitised user input, rather than checking for malicious behaviour and coding against it.
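The fix for that class of exploit boils down to escaping everything a user submits before it’s rendered, rather than trying to enumerate "bad" input. A minimal sketch of the idea in Python (obviously not Twitter’s actual code):

```python
import html

def render_tweet(text):
    # Escape on output: <, >, & and quotes become harmless entities,
    # so injected markup is displayed as text instead of executed.
    return '<p class="tweet">{}</p>'.format(html.escape(text))

safe = render_tweet('<script>alert("pwned")</script>')
```

The blacklist approach fails because attackers only need one encoding or tag you forgot about; escaping everything fails safe by default.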
The second one was a bit more ugly as it had the potential to do some quite nasty things to a user’s account. It used the session details that Twitter stores in your browser to send messages via your account. Like the other Twitter exploit it relied on the user’s typical behaviour of following links posted by the people they follow. This exploit cannot be squarely blamed on Twitter either, as the use of link shortening services that hide the actual link behind a short URL makes it that much harder for normal users to distinguish the malicious from the mundane. Still, Twitter should have expected such session jacking (I know I have) and built in countermeasures to stop attackers from doing it.
Any large public system will attract those looking to exploit it for nefarious means; that’s part and parcel of doing business on the web. The key then is to build your systems with the expectation that they will be exploited, rather than waiting for an incident to arise. As a developer I can empathise that writing code that’s resistant to every kind of attack is next to impossible, but there are so many things that can be done to ensure that the casual hackers steer clear. Twitter is undergoing a significant amount of change with a vision to scale themselves up for the big time, right up there with Google and Facebook. Inevitably this will mean they’ll continue to have security concerns as they work to scale themselves out, and hopefully these last two exploits have shown them that security is something they should consider more closely than they have in the past.
Enabling your users to interact with your application through the use of open APIs has been a staple of the open web since its inception over a decade ago. Even before that, the notion of letting people modify your product helped to create vast communities of people dedicated to either improving the user experience or creating features that the original creators overlooked. I can remember my first experience with this vividly, creating vast levels in the Duke Nukem 3D level creator and showing them off to my friends. Some of these community-developed products can even become the killer feature of the original application, and whilst this is a boon for the application it can pose some issues for the developer.
Probably the earliest example of this I can think of would have to be World of Warcraft. The client has a pretty comprehensive API available that enabled people to create modifications to do all sorts of wonderful things, from the more mundane inventory managers to boss timer mods that helped keep a raid coordinated. After a while many mods became must-haves for any regular player, and for anyone who wanted to join in the 40-person raids they became critical to achieving success. Over the years many of these staple mods got replaced by Blizzard’s very own implementations of them, ensuring that anyone who was able to play the game was guaranteed to have them. Whilst most of the creators weren’t enthused that all their hard work was being usurped by their corporate overlords, many took it as a challenge to create even more interesting and useful mods, ensuring their user base stayed loyal.
More recently this issue has come to light with Twitter, who are arguably popular due to the countless hours of work done by third parties. Their incredibly open API has meant that anything they were able to do others could do too, often to the point of doing it better than Twitter itself. In fact it’s at the point where only a quarter of their traffic is actually on their main site; the other three quarters comes through their API. This shows that whilst they’ve built an incredibly useful and desirable service they’re far from the best providers of it, with their large ecosystem of applications filling in the areas where it falls down. More recently, however, Twitter has begun incorporating features into its product that used to be provided by third parties, and the developer community hasn’t been too happy about it.
The two most recent bits of technology that Twitter has integrated have been the new Tweet button (previously provided by TweetMeme) and their new link shortening service t.co, a function previously handled by dozens of others. The latter wasn’t unique to Twitter at all, and whilst many of the newcomers to the link shortening space made their name on Twitter’s platform, many of them report that it’s no longer their primary source of traffic. The t.co shortener is then really about Twitter taking control of the platform that they developed, and possibly using the extra data they can gather from it as leverage in brokering advertising and partnership deals. The Tweet button, however, is a little bit more interesting.
Way back when, news aggregator sites were all the rage. From Digg to Del.icio.us to Reddit there were all manner of different sites designed around the central idea of sharing online content with others. Whilst the methods of story aggregation differed from service to service, most of them ended up implementing some kind of “Add this story to X” button that could be put on your website. This served two purposes: it helped readers show a little love to the article by giving it some attention on another site, and it gave content to the other site to link to, with little involvement from the user. The TweetMeme button then represented a way to drive Twitter adoption further and at the same time gather even more data on their users than they previously had. Twitter, for what it’s worth, said they licensed some of the technology from TweetMeme for their button, but they have still in essence killed off one of the platform’s popular services, and that’s begun to draw the ire of some developers.
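Mechanically these share buttons are dead simple: the button is just a link back to the aggregator with the current page’s URL and title packed into the query string, so the reader never has to copy anything themselves. A hypothetical sketch of how such a link gets built (the endpoint shown is Twitter’s web intent URL, but any aggregator’s share button works the same way):

```python
from urllib.parse import urlencode

def tweet_button_href(page_url, title):
    """Build the link a 'Tweet this' button points at, with the page's
    address and title URL-encoded into the query string."""
    query = urlencode({"url": page_url, "text": title})
    return "https://twitter.com/intent/tweet?" + query

href = tweet_button_href("http://example.com/post", "My latest post")
```

That pre-filled link is also exactly why these buttons are such a rich data source: every click tells the service which page was shared and from where.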
The issue many developers take with Twitter building these services into their main product is that it puts a chilling effect on products based on Twitter’s ecosystem. Previously, if you had built something that augmented their service, chances were you could build yourself quite the web property. Unlike other companies, which would acquire these innovators’ companies in order to integrate their technology, Twitter has instead taken to developing the same products themselves, in direct competition with those innovators. The reason behind this is simple: Twitter doesn’t have the cash available to do acquisitions like the big guys do. They’re kind of stuck between a rock and a hard place, as whilst they need to encourage innovation on their platform they can’t let it go on forever, lest they become irrelevant beyond delivering an underlying service. Realistically the best option for them is to start generating some cash in order to acquire innovators’ technology rather than out-competing them, but they’re still too cash poor for this to be viable.
In the end, if you build your product around someone else’s service you’re really putting yourself at their mercy. The chill Twitter is putting on their developers probably won’t hurt them in the long run, provided they don’t keep copying others’ solutions to their problems, but their fledgling advertising-based business model is at odds with all the value-add developers. Twitter is quite capable of doing some impressive innovation on their own (see #newtwitter) but their in-house development is nothing compared to the hordes of third parties who’ve been doing their part to improve the ecosystem. I’m interested to see what direction they go with this, especially since I’m working on what could be classed as a competing service.
Although I’m hoping people don’t see it that way 😛