Posts Tagged ‘api’

VMware Targets OpenStack with vSphere 6.

Despite the massive inroads that other virtualization providers have made into the market, VMware still stands out as the king of the enterprise space. Part of this is due to the maturity of their toolset, which is able to accommodate a wide variety of guests and configurations, but they’ve also got the largest catalogue of value-adds, which helps vastly in driving adoption of their hypervisor. Still, the asking price for any of their products has become something of a sore point for many, and their proprietary platform has caused consternation for those looking to leverage public cloud services. With the latest release of their vSphere product VMware is looking to remedy at least the latter issue, embracing OpenStack compatibility for one of their distributions.


The list of improvements coming with this new release is a long one (and I won’t bother repeating them all here) but suffice it to say that most of them were expected and in line with what we’ve gotten previously. Configuration maximums have gone up for pretty much every aspect, feature limitations have been extended and there’s a handful of new features that will enable vSphere-based clusters to do things that were previously impossible. In my mind the key improvements VMware has made in this release come down to Virtual SAN 6, Long Distance vMotion and, of course, support for OpenStack via the VMware Integrated OpenStack release.

Virtual SAN always felt like a bit of an also-ran when it first came out due to the rather stringent requirements it had around its deployment. I remember investigating it as part of a deployment I was doing at the time, only to be horrified that I’d have to deploy a vSphere instance at every site where I wanted to use it. The subsequent releases have shifted the product’s focus significantly and it now presents a viable option for those looking to bring software-defined datacenter principles to their environment. The improvements that come in 6 are most certainly cloud focused, with things like Fault Domains and All Flash configurations. I’ll be very interested to see how the enterprise reacts to this offering, especially for greenfield deployments.

Long Distance vMotion might sound like a minor feature but, as someone who’s worked in numerous large, disparate organisations, the flexibility it brings is phenomenal. Right now the biggest issue most organisations face when maintaining two sites (typically for DR purposes) is getting workloads between them, often requiring a lengthy outage process. With Long Distance vMotion, making both sites active and simply vMotioning workloads between them is a vastly superior solution, providing many of the benefits of SRM without the required investment and configuration.

The coup here though is, of course, the OpenStack compatibility through VMware’s integrated distribution. OpenStack is notorious for being a right pain in the ass to get running properly, even if you already have staff with experience of the product set. VMware’s solution is to provide a pre-canned build which exposes all the resources in a VMware cloud through the OpenStack APIs for developers to utilize. Considering that OpenStack’s lack of good management tools has, in my mind, been one of the biggest challenges to its adoption, this solution from VMware could be the kick in the pants it needs to see some healthy adoption rates.
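If you haven’t touched the OpenStack APIs before, the appeal is that everything is plain REST: authenticate, get a token, then query away. Purely as a rough sketch (the endpoint, tenant ID and token below are placeholders, not anything specific to VMware’s distribution), listing a cloud’s instances through the Nova compute API looks something like this:

```csharp
using System;
using System.Net;

class NovaListServers
{
    static void Main()
    {
        // Placeholder values: a real Keystone token and tenant ID would come
        // from authenticating against the cloud's identity service first.
        const string authToken = "YOUR_KEYSTONE_TOKEN";
        const string baseUrl = "https://openstack.example.com:8774/v2/YOUR_TENANT_ID";

        using (var client = new WebClient())
        {
            // OpenStack services authenticate each request via the X-Auth-Token header.
            client.Headers.Add("X-Auth-Token", authToken);

            // GET /servers returns a JSON document listing the tenant's instances.
            string json = client.DownloadString(baseUrl + "/servers");
            Console.WriteLine(json);
        }
    }
}
```

The point being that tooling written against those standard APIs should work just the same against a VMware Integrated OpenStack deployment.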

It’s good to see VMware jumping on the hybrid cloud idea, as I’ve long been of the mind that that’s the solution going forward. Cloud infrastructure is great and all but there are often requirements it simply can’t meet due to its commodity nature. Going hybrid with OpenStack as the intermediary layer will allow enterprises to take advantage of these APIs whilst still leveraging their investment in core infrastructure, utilizing the cloud on an as-needed basis. Of course that’s the nirvana state, but it seems to get closer to realisation with every new release, so here’s hoping VMware will be the catalyst to finally see it succeed.

IBM’s Watson has an API, and It’s Answering Questions.

In a world where Siri can book you a restaurant and Google Now can tell you when you should head for the gate at the airport, it can feel like the AI future that many sci-fi fantasies envisioned is already here. Indeed to some extent it is: many aspects of our lives are now farmed out to clouds of servers that make decisions for us, but those machines still lack a fundamental understanding of, well, anything. They’re what are called expert systems, algorithms trained on data to make decisions in a narrow problem space. The AI future we’re heading towards is going to be far more than that, one where those systems actually understand data and can make far better decisions based on it. One of the first steps towards this is IBM’s Watson, and its creators have done something amazing with it.


Whilst currently only open to partner developers, IBM has created an API for Watson, allowing you to pose it a question and receive an answer. There’s not a lot of information around what data sets it currently understands (the example is in the form of a Jeopardy! question) but their solution documents reference a Watson Content Store which, presumably, has several pre-canned training sets to get companies started with developing solutions. Indeed some of the applications that IBM’s partner agencies have already developed suggest that Watson is quite capable of digesting large swaths of information and providing valuable insights in a relatively short timeframe.
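The interaction itself appears to be as simple as POSTing a natural-language question and getting back a ranked list of candidate answers. The endpoint and payload shape below are my guesses rather than IBM’s published contract (the real details sit behind the partner programme), so treat this as an illustrative sketch only:

```csharp
using System;
using System.Net;

class WatsonQuestionSketch
{
    static void Main()
    {
        // Hypothetical endpoint and payload: the real URL, schema and
        // credentials sit behind IBM's partner programme.
        const string endpoint = "https://watson.example.ibm.com/v1/question";
        const string question =
            "{\"question\": {\"questionText\": " +
            "\"What disease presents with a rash and a high fever?\"}}";

        using (var client = new WebClient())
        {
            client.Headers[HttpRequestHeader.ContentType] = "application/json";
            client.Credentials = new NetworkCredential("partner-id", "partner-secret");

            // A QA-style service returns a ranked list of candidate answers,
            // each with a confidence score and pointers to supporting evidence.
            string response = client.UploadString(endpoint, question);
            Console.WriteLine(response);
        }
    }
}
```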

I’m sure many of my IT savvy readers are seeing the parallels between Watson and a lot of the marketing material that surrounds anything with the buzzword “Big Data”. Indeed the concept of operation is much the same: take big chunks of data, throw them into a system and then hope that something comes out the other end. However Watson’s API suggests something far more accessible, dealing in native human language and providing evidence to back up the answers it gives you. Compare this to Big Data tools, which often require you to either learn a certain type of language or create convoluted reports, and I think Watson has the ability to find widespread use while Big Data keeps its buzzword status.

For me the big applications for something like this are in places where curating domain-specific knowledge is a long, time-consuming task. Medicine and law both spring to mind as there are reams of information available to power a Watson-based system, and those fields could most certainly benefit from having easier access to those vast treasure troves. It’s pretty easy to imagine a lawyer looking for all precedents set against a certain law, or a doctor asking for all diseases matching a list of symptoms, both queries answered with all the evidence to boot.

Of course it remains to be seen if Watson is up to the task as, whilst its prowess on Jeopardy! was nothing short of amazing, I’ve yet to see any of its other applications in use. The partner applications do look very interesting, and should hopefully be the proving grounds that Watson needs, but until it starts seeing widespread use all we really have to go on is the result of a single API call. Still I think it has great potential and hopefully it won’t be too long before the wider public can get access to some of Watson’s computing genius.

OAuth2 and C# Desktop Apps Don’t Seem To Get Along.

I hadn’t been in Visual Studio for a while, mostly because I had given up on all of my side projects due to the amount of time they soaked up versus my desire to do better game reviews on here, which requires me to spend more time actually playing the games. I had come up with an idea for a game a while back and was really enjoying developing the concept in my head, so I figured it would be good to code up a small application to get my head back in the game before I tackled something a little more difficult. One particular idea I had was a Soundcloud downloader/library manager as, whilst there are other tools that do this job, they’re a little cumbersome and I figured it couldn’t be too difficult to whip it up in a day’s worth of coding.

How wrong I was.

The Soundcloud API has a good amount of documentation and, from what I could tell, I would be able to get my stream using it. However since this wasn’t something that was publicly available I’d have to authenticate to the API first through their OAuth2 interface, something I had done with other sites, so I wasn’t too concerned that it would be a barrier. Of course the big difference between those other projects and this one was that this application was going to be a desktop app, so I figured I was either going to have to do some trickery to get the token or manually step through the process in order to get authenticated.

After having a quick Google around it looked like the OAuth library I had used previously, DotNetOpenAuth, would probably fit the bill, and it didn’t take me long to find a couple of examples that looked like they’d do the job. Even better, I found an article that showed an example of the exact problem I was trying to solve, albeit using a different library to the one I was using. Great, I thought, I’ll just marry up the examples and get myself going in no time, and after a little bit of working around I was getting what appeared to be an auth token back. Strangely though I couldn’t access any resources using it, either through my application or directly through my browser (as I had been able to do in the past). Busting open Fiddler showed that I was getting 401 (unauthorized) errors back, indicating that the token I was providing wasn’t valid.

After digging around and looking at various resources it appears that, whilst the OAuth API might still be online, it’s not the preferred way of accessing anything and, as far as I can tell, is mostly deprecated. No worries, I thought, I’ll just hit up the OAuth2 API instead, figuring that it should be relatively simple to authenticate against since DotNetOpenAuth now natively supports it. Try as I might to find a working example, I simply could not get it to work with Soundcloud’s API, not even using the sample application that DotNetOpenAuth provides. Trying to search for other, simpler examples left me empty-handed, especially when I searched for a desktop application workflow.

I’m willing to admit that I probably missed something here but honestly the amount of code and complexity that appears to be required to handle the OAuth2 authentication process, even when you’re using a library, seems rather ludicrous. Apparently WinRT has it pretty easy, but those are web pages masquerading as applications which can take advantage of the web auth workflow, something I was able to make work quite easily in the past. If someone knows of a better library or has an example of the OAuth2 process working with a desktop application in C# then I’d love to see it, because I simply couldn’t figure out how to do it, at least not after half a day of frustration.
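For reference, the desktop flow I was chasing boils down to three steps: open the provider’s authorize URL in the user’s browser, catch the redirect on a local port to grab the authorization code, then swap that code for an access token. Here’s a bare-bones sketch of that shape (no library, placeholder endpoints and credentials, and none of the error handling you’d actually want):

```csharp
using System;
using System.Diagnostics;
using System.Net;

class OAuth2DesktopSketch
{
    // Placeholder endpoints and credentials: substitute the provider's real
    // authorize/token URLs and the client ID/secret for your registered app.
    const string AuthorizeUrl = "https://example.com/connect";
    const string TokenUrl = "https://api.example.com/oauth2/token";
    const string ClientId = "YOUR_CLIENT_ID";
    const string ClientSecret = "YOUR_CLIENT_SECRET";
    const string RedirectUri = "http://localhost:8910/callback/";

    static void Main()
    {
        // 1. Send the user to the authorize page in their default browser.
        Process.Start(AuthorizeUrl +
            "?client_id=" + ClientId +
            "&redirect_uri=" + Uri.EscapeDataString(RedirectUri) +
            "&response_type=code");

        // 2. Listen locally for the redirect carrying the authorization code.
        var listener = new HttpListener();
        listener.Prefixes.Add(RedirectUri);
        listener.Start();
        var context = listener.GetContext(); // Blocks until the browser redirects back.
        string code = context.Request.QueryString["code"];
        context.Response.Close();
        listener.Stop();

        // 3. Exchange the authorization code for an access token.
        using (var client = new WebClient())
        {
            client.Headers[HttpRequestHeader.ContentType] = "application/x-www-form-urlencoded";
            string body = "client_id=" + ClientId +
                          "&client_secret=" + ClientSecret +
                          "&grant_type=authorization_code" +
                          "&redirect_uri=" + Uri.EscapeDataString(RedirectUri) +
                          "&code=" + code;
            string tokenJson = client.UploadString(TokenUrl, body);
            Console.WriteLine(tokenJson); // Contains the access_token for subsequent API calls.
        }
    }
}
```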

The Curiously Limiting Twist for Google Glass Applications.

There’s little doubt that Google’s Project Glass is going to be a disruptive technology, although whether that comes from revolutionizing the way we interface with technology or more from the social implications remains to be seen. Considering that the device has been limited to the technically elite and the few that got in on the #ifihadglass competition (disappointingly restricted to US citizens only) we still don’t have much to go on as to how Glass will function as an everyday technology. Sure we’ve got lots of impressions of it, but the device is still very much in the nascent stages of adoption and third party development on the platform is only just starting to occur.


We do have a much better idea of what is actually behind Google Glass though, thanks to the device reaching more people outside the walls of the Googleplex. From what I’ve read it’s comparable to a mid-range smartphone in terms of features, with 16GB of storage, a 5MP camera capable of taking 720p video and a big enough battery to get you through the day with typical usage. This was pretty much expected given Glass’ size and recent development schedule, but what’s really interesting isn’t so much the hardware that’s powering everything, it’s the terms on which Google is letting you interface with it.

Third party applications, which make use of the Mirror API, are forbidden from inserting ads into their applications. Not only that, they are also forbidden from sending API data, which can be anything from feature usage to device information like location, to third party advertisers. This does not preclude Google from doing the same, indeed the language hinges on the term third party, however it does firmly put the kibosh on any application that attempts to recoup development costs through the use of ads or on-selling user data. Whether or not you’ll be able to recoup costs by using Google’s AdSense platform remains to be seen, but it does seem that Google wants total control of the platform and any revenue generated on it from day one, which may or may not be a bad thing, depending on how you view Google.

What got me though was the strict limitation of Glass only talking to web applications. Whilst this still allows Glass to be extended in many ways that we’re only beginning to think of, it drastically limits the potential of the platform. For instance my idea of pairing it with a MYO to create a gesture interface (for us anti-social types who’d rather not speak at it constantly) is essentially impossible thanks to this limitation, even though the hardware is perfectly capable of syncing with Bluetooth devices. Theoretically it’d still be possible to accomplish some of that whilst using a web app, but it’d be very cumbersome and not at all what I had envisioned when I first thought of pairing the two together.

Of course that’s just a current limitation set by Google and, with exploits already winding their way around the Internet, it’s not unreasonable to expect that such functionality could be unlocked should you want it. There’s also the real possibility that this limitation is only temporary and that once Glass hits general availability later this year it’ll become a much more open platform. Honestly I hope Google does open up Glass to native applications as, whilst Glass has enormous potential in its current form, the limitations put a hard upper barrier on what can be accomplished, something which competitors could rapidly capitalize on.

Google aren’t a company to ignore the demands of developers and consumers at large though, so should native apps become the missing “killer app” for the platform I can’t imagine they’d hold off on enabling them for long. Still the current limitations are a little worrying and I hope they’re only an artefact of Glass being in its nascent form. Time will tell if this is the case, and the day of reckoning will come later this year when Glass finally becomes generally available.

I’ll probably still pick one up regardless, however.

What’s With This “Start Open, Get Big, Fuck Everyone Off” Thing Startups Are Doing?

One of the peeves I had with the official Twitter client on Windows Phone 7, something I didn’t mention in my review of the platform, was that among the other things it’s sub-par at (it really is the poor bastard child of its iOS/Android cousins) it couldn’t display images in-line. To actually see any image you have to tap the tweet and then the thumbnail, which usually loads the entire large image, something that isn’t required on smaller screens. The official apps on other platforms were quite capable of loading appropriately sized images in the feed, which was a far better experience, especially considering it worked for pretty much all of the image sharing services.

Everyone knows there’s no love lost between Instagram and me, but that doesn’t mean I don’t follow people who use it. As far back as I can remember their integration in the mobile apps has left something to be desired, especially if you want to view the full-sized image, which usually redirected you to their atrocious web view. Testing it for this post showed that they’ve vastly improved that experience, which is great, especially considering I’m still on Windows Phone 7 which was never able to preview Instagram anyway, but it seems that this improvement may have come as part of a bigger play from Instagram to claw back their users from Twitter.

Reports are coming in that Instagram has disabled their Twitter card integration, which stops Twitter from being able to display their images directly in the feed like it has been doing since day one. Whilst I don’t seem to be experiencing the issue that everyone is reporting (as you can see from the devastatingly cute picture above) there are many people complaining about this, and Instagram has stated that disabling the integration is part of their larger strategy to provide a better experience through their own platform. Part of that was improving the mobile web experience, which I mentioned earlier.

It’s an interesting move because, for those of us who’ve been following both Twitter and Instagram for a while, the similarities are startling. Twitter has been around for some 6 years and spent the vast majority of that time being extraordinarily open with its platform, encouraging developers far and wide to come in and build on it. Instagram, whilst not as wide open as Twitter was, did similar things, making their product integrate tightly with Twitter’s ecosystem whilst encouraging others to develop on it. Withdrawing from Twitter in favour of their own platform is akin to what Twitter did to potential client app developers, essentially signalling to everyone that it’s their way or the highway.

The cycle is eerily similar: both companies started out as small time players with a pretty dedicated fan base (although Instagram grew like a weed in comparison to Twitter’s slow ride to the hockey stick) and then, after getting big, started withdrawing all the things that made them great. Arguably much of Instagram’s growth came from its easy integration with Twitter, where many of the early adopters already had large followings, and without that I don’t believe they would’ve experienced the massive growth they did. Disabling this functionality seems like shooting themselves in the foot with the intention of attempting some form of monetization eventually (that’s the only reason I can think of for trying to drive users back to the native platform), but I said the same thing about Twitter when they pulled that developer stunt, and they seem to be doing fine.

It probably shouldn’t be surprising that this is what happens when start-ups hit the big time, because at that point they have to start thinking seriously about where they’re going. For giant sites like Instagram that are yet to turn a profit from the service they provide it’s inevitable that they’d have to start fundamentally changing the way they do business, and this is most likely just the first step in wider sweeping changes. I’m still wondering how Facebook is going to turn a profit from its investment, as they’re $1 billion in the hole and there’s no sign of them making that back any time soon.

The Ghost Towns of Google+.

It’s hard to believe that we’re still in the first year of Google+ as it feels like the service has been around for so much longer. This is probably because of the many milestones it managed to pass in such a short period of time, owing to the fact that anyone with a Google account can just breeze on into the nascent social network. I personally remained positive about it, as the interface and user experience paradigms suited my geeky ways, but the lack of integration with other services, along with others failing to migrate onto the service, means that it barely sees any use, at least from me.

Still, I can’t generalize my experience up to a wider view of Google+, and not just because that’s bad science. Quite often I’ve found myself back on Google+, not to check my feed or post new content, but to see conversations that have been linked to by news articles or friends. Indeed Google+ seems to be quite active in these parts, with comment threads containing hundreds of users and multitudes of posts. Most often this is when popular bloggers or celebrities start said thread, so it’s very much like Twitter in that regard, although Google+ feels a whole lot more like one big conversation rather than Twitter’s one-to-many or infinitude of one-to-one chat sessions. For the most part this still seems to be heavily biased towards the technology scene, but that could just be my bias stepping in again.

Outside that though my feed is still completely barren, with the time between posts from users now stretching to weeks. Even those who swore off all other social networks in favour of Google+ have had to switch back, as only a small percentage of their friends had an active presence on their new platform of choice. This seems to be something of a trend, as user interactivity with the site is at an all-time low, even below that of struggling social network MySpace. Those figures don’t include mobile usage, but suffice it to say they’re indicative of the larger picture.

Personally I feel one of the biggest problems Google+ has is its lack of integration with other social network services and third-party product developers. Twitter’s success is arguably due to their no-holds-barred approach to integration and platform development. Whilst Google+ was able to get away without it in the beginning, the lack of integration hurts Google’s long term prospects significantly, as people are far less likely to use it as their primary social network. Indeed I can’t syndicate any of the content that I create onto their social network (and vice-versa), and this means that Google+ exists as a kind of siloed platform, never getting the same level of treatment as the other social networks do.

Realistically though it’s all about turning the ghost towns that are most people’s timelines into the vibrant source of conversation that many of the other social networks are. Right now Google+ doesn’t see much usage because of the content exclusivity and the effort required to manually syndicate content to it. Taking away that barrier would go a long way to at least making Google+ look like it’s getting more usage, and realistically that’s all that would be required for a lot of users to switch over to it as their main platform. Heck, I know I would.

Sortilio Update: It’s Just Better All Over.

So, like most products that a developer creates with one purpose in mind, my first iteration of Sortilio was pretty bare bones. Sure, if you had a small media collection that was named semi-coherently it worked fine (like it did for my test data) but past that it started to fall apart rather rapidly. Case in point: I let it loose on my own media collection, you know, for the purposes of eating my own dog food. It didn’t take long for it to fall flat on its face, querying The TVDB’s API so rapidly that the rate limiter kicked in almost instantaneously. There was also the issue of not being able to massage the data once it had done the automated matching portion, as even the best automated tools can still make mistakes. With that in mind I set about improving Sortilio and put the finishing touches on it yesterday.

Now the first update you’ll notice is the slightly changed main screen, with a new Options tab and two extra buttons down in the right hand corner. They all function pretty much as you’d expect: the Options tab has a few options for you to configure (only one of them, the extensions one, works currently), save will export the current selection to a file for use later and load will import said file back into Sortilio. The save/load functionality is quite handy if you’d like to manually go in there and sort out the data yourself, as it’s all plain XML that I’m sure anyone with half a coding mind about them would be able to figure out. I put it in mostly for debugging purposes (re-running the identification process is rather slow, more on that in a bit) but I can see it being quite useful, especially with larger collections.

As I mentioned earlier, whilst the automated matching does a pretty good job of getting things right there are times when it either doesn’t find anything or gets it completely wrong. To alleviate this I added the ability to double click a row to bring up the following screen:

Shown in this dialog is the series drop down, which allows you to select from the list of series that Sortilio has already downloaded. The list is populated from the cache that Sortilio creates from its queries to The TVDB, so if it managed to match one file in the series correctly it will have it cached already and you can just select it and hit update. Sortilio will then identify other files that had the same search term and ask if you’d like to update them as well (since it will have probably got them wrong too). Should the series you’re looking for not be available you can hit the search button, which brings up this dialog:

From here you can enter whatever term you want and hit search. This will query The TVDB and display the results in a list. Select the most appropriate one, hit OK, and the new series will be assigned to that file.

Under the hood things have gotten quite a bit better as well. The season string matching algorithm has been improved so that it identifies seasons better than it previously did. For instance, given a file named something like battlestar.galactica.2003.s01e20.avi, Sortilio would (wrongly) identify that as season 20 because of the 2003 before the series/episode identifier. It now prefers the right kind of identifiers and is a little better overall at getting it right, although I still think the way I’m going about it is slightly ass-backwards. Chalk that up to still figuring out how to best do string splitting based on a regex.
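The gist of the fix is to look for an explicit sXXeYY-style identifier before falling back to looser patterns, so a stray year can never win. This isn’t Sortilio’s actual code, just a minimal sketch of the idea:

```csharp
using System;
using System.Text.RegularExpressions;

class SeasonMatchSketch
{
    static void Main()
    {
        const string fileName = "battlestar.galactica.2003.s01e20.avi";

        // Prefer the explicit s01e20-style identifier; a bare four-digit
        // number like 2003 is almost always a year, not a season marker.
        var match = Regex.Match(fileName, @"[sS](?<season>\d{1,2})[eE](?<episode>\d{1,3})");

        if (match.Success)
        {
            Console.WriteLine("Season {0}, episode {1}",
                int.Parse(match.Groups["season"].Value),
                int.Parse(match.Groups["episode"].Value));
            // Prints: Season 1, episode 20
        }
    }
}
```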

Now on the surface, if you were to compare this version to the previous one, it would appear to run quite a bit slower. There’s a good reason for this and it all comes down to the rate limit on The TVDB API. After playing around with various values I found that the sweet spot was somewhere around a 2 second delay between searches. Without any series cached this means that every request incurs a 2 second penalty, significantly increasing the amount of time required to get the initial sort done. I’ve alleviated this somewhat by having Sortilio search its local cache first before attempting to head out to the API, but it’s still noticeably slower than it was originally. I’ve reached out to the guys behind The TVDB in the hopes that I can get an excerpt of their database to include within Sortilio, which would make the process lightning fast, but I’ve yet to hear back from them.
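The cache-first lookup is nothing fancy: check a local dictionary keyed on the search term and only hit the API, with the enforced delay, on a miss. Again, a rough sketch rather than Sortilio’s actual implementation:

```csharp
using System;
using System.Collections.Generic;
using System.Threading;

class ThrottledLookupSketch
{
    // Cache keyed on the search term so repeat lookups never touch the API.
    static readonly Dictionary<string, string> Cache =
        new Dictionary<string, string>(StringComparer.OrdinalIgnoreCase);

    static DateTime _lastApiCall = DateTime.MinValue;
    static readonly TimeSpan MinDelay = TimeSpan.FromSeconds(2);

    static string LookupSeries(string searchTerm)
    {
        string result;
        if (Cache.TryGetValue(searchTerm, out result))
            return result; // Cache hit: no API call, no delay.

        // Enforce the ~2 second gap between consecutive API calls.
        var elapsed = DateTime.UtcNow - _lastApiCall;
        if (elapsed < MinDelay)
            Thread.Sleep(MinDelay - elapsed);
        _lastApiCall = DateTime.UtcNow;

        result = QueryTvdb(searchTerm);
        Cache[searchTerm] = result;
        return result;
    }

    static string QueryTvdb(string searchTerm)
    {
        // Stand-in for the real series search against The TVDB's API.
        return "series-id-for-" + searchTerm;
    }

    static void Main()
    {
        Console.WriteLine(LookupSeries("battlestar galactica")); // Slow path, hits the API.
        Console.WriteLine(LookupSeries("battlestar galactica")); // Instant, from the cache.
    }
}
```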

So, as always, feel free to grab it, have a play and then send me any feedback you have. I’ve already got a list of improvements to make on this version but I’d definitely call it usable, and to prove a point I have indeed used it on my own media collection. It gets about 90% of the way there, with the last 10% needing manual intervention, either within Sortilio or outside of it, cleaning up after it has done its job. If you’ve used it and encountered problems please save the sort file and the debug log and send them to me at [email protected].

You can grab the latest version here.

[NOTE: There is no link currently because gmail barfed at the file attachment I sent myself to upload this morning. Follow me on Twitter to be notified of when it comes out!]

Google+ API is Here, But is it Enough?

Google+ has only been around for a mere 2 months, yet I already feel like writing about it is old hat. In the short time that the social networking service has been around it’s had a positive debut to the early adopter market, seen wild user growth and even had to tackle some hard issues like its user name policy and user engagement. I said very early on that Google had a major battle on their hands when they decided to launch another volley at another Silicon Valley giant, but early indicators pointed towards them at least being a highly successful niche product, if for no other reason than that they were simply “Facebook that wasn’t Facebook”.

One of the things that was always lacking from the service was an API on the same level as its competitors’. Facebook and Twitter both have exceptional APIs that allow services to integrate deeply with them and, at least in the case of Twitter, are responsible in large part for their success. Google was adamant that an API was on the way and just under a week ago they delivered on their promise, releasing an API for Google+:

Developers have been waiting since late June for Google to release their API to the public. Well, today is that day. Just a few minutes ago Chris Chabot, from Google+ Developer Relations, announced that the Google+ API is now available to the public. The potential for this is huge, and will likely set Google+ on a more direct path towards social networking greatness. We should see an explosion of new applications and websites emerge in the Google+ community as developers innovate and make useful tools from the available API. The Google+ API at present provides read-only access to public data posted on Google+, and most of the Google+ API follows a RESTful design, which means that you must use standard HTTP techniques to get and manipulate resources.

Like all their APIs, the Google+ one is very well documented and even the majority of their client libraries have been updated to include it. Looking over the documentation it appears there are really only two bits of information available to developers at this point in time: public profiles (People) and activities that are public. Supporting these APIs is the OAuth framework, so that users can authorize external applications to access their data on Google+. In essence this is a read-only API for things that were already publicly accessible, which really only serves to eliminate the need to screen-scrape the same data.
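To give you an idea of how thin it currently is, pulling down someone’s public profile amounts to a single REST call with an API key. A quick sketch (the profile ID and key here are placeholders; both would come from your own Google APIs console project):

```csharp
using System;
using System.Net;

class GooglePlusProfileSketch
{
    static void Main()
    {
        // Placeholder profile ID and API key.
        const string profileId = "1234567890";
        const string apiKey = "YOUR_API_KEY";

        using (var client = new WebClient())
        {
            // The people.get call returns the public profile as JSON.
            string json = client.DownloadString(
                "https://www.googleapis.com/plus/v1/people/" + profileId +
                "?key=" + apiKey);
            Console.WriteLine(json);
        }
    }
}
```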

I’ll be honest, I’m disappointed in this API. Whilst there are some useful things you can do with this data (like syndicating Google+ posts to other services and reader clients) the things that I believe Google+ would be great at aren’t possible until applications can be given write access to my stream. Now this might just be my particular use case, since I usually use Twitter for my brief broadcasts (which are auto-syndicated to Facebook) and this blog for longer prose (which is auto-shared to Twitter), so my preferred method of integration would be to have Twitter post stuff to my Google+ feed. As it is right now my Google+ account is a ghost town compared to my other social networks, simply because of the lack of automated syndication.

Of course I understand that this isn’t the final API, but even as a first attempt it feels a little weak.

Whilst I won’t go as far as to say that Google+ is dying, there is data to suggest that the early adopter buzz is starting to wind down. Anecdotally my feed seems to mirror this trend, with the average time between posts being days rather than the minutes it is on my other social networks. The API would be the catalyst required to bring that activity back up to those initial levels, but I don’t think it’s capable of doing so in its current form. I’m sure Google won’t be a slouch when it comes to releasing new APIs, but they’re going to have to be quick about it if they want to stem the flood of inactivity.

I really want to use Google+, I really do; it’s just the lack of interoperability that keeps all my data out of it. I’m sure in the next couple of months we’ll see the release of a more complete API that will enable me to use the service as I, and many others I feel, use our other social networking services.

Google+: 2 Weeks, 10 Million Users and Me Clawing at the Walls For an API.

The perception in the tech community, at least up until recently, was that Google simply didn’t understand social the way Twitter and Facebook do. The figures support this view too, with Facebook fast approaching 1 billion users and Twitter not even blinking an eye when Buzz came on the scene. Still, they’ve had some mild success with their other social products, so whilst they might not have had the dominant social platform I believe they get social quite well; they’re just suffering from the superstar effect that makes any place other than first look a lot like last. Google+ then represents something of a reinvention of their previous attempts, with a novel approach to modelling social interactions, and it seems to be catching on.

It’s only been 2 weeks since Google+ became available to the wider public and it’s already managed to attract an amazing 10 million users. Those users have also shared over 1 billion articles in the short time that G+ has been available. For comparison Buzz, which I can’t seem to find accurate user information on, shared an impressive 9 million articles in 2 days, a far cry from the success that G+ has been enjoying. What these numbers mean is that Google is definitely doing something right with the new platform and users are responding in kind. However we’re still deep in the honeymoon period for Google+ and, whilst their initial offering is definitely a massive step in the right direction, we’ll have to wait and see if this phenomenal growth can continue.

That’s not to say the G+ platform doesn’t have the potential to do so, far from it. Right now G+ stands alone in its own ecosystem, with only a tenuous link to the outside world via the +1 button (which ShareThis is still yet to implement, and I don’t want to install yet another button to get it). Arguably much of the success of G+’s rival platforms comes from their APIs, and with the initial user traction problem out of the way G+ is poised to grab an even larger section of the market once they release their API. I believe the API will be critical to the success of G+, and not just because that’s what their competitors did.

Google+, for me at least, feels like it would be the best front end to all my social activities on the web. Whilst there are many other services out there that have attempted to be the portal to online social networking, none of them have managed to capture my attention in quite the same way as G+ has. The circles feature is also very conducive to aggregation, as I could easily put all my LinkedIn contacts in Colleagues, Twitter in Following and Facebook friends in, well, the obvious place. Then my G+ stream would become the magical single pane of glass I’d go to for all my social shenanigans, and those who weren’t on G+ would still be connected to me through their network of choice.

That last point is key as, whilst G+’s growth is impressive, it’s still really only hitting a very specific niche, mostly tech enthusiasts and early adopters. That’s not a small market by any stretch of the imagination but, since less than 20% of my social circle has made their way onto G+ from Facebook, the ability to communicate across platforms will be one of the drivers of growth for this platform. Whilst I’d love G+ to become the dominant platform it’s still 740 million users short of that goal, and Facebook has a 7 year lead on them. It’s not impossible, especially with the kind of resources and smarts Google has to throw at the problem, but it’s not a problem that can be solved by technology alone.

Google+ is definitely on track to be a serious contender to Facebook, but it’s still very early days for the service. What’s ahead of Google is a long, uphill battle against an incumbent that’s managed to take down several competitors already and has established itself as the de-facto social network. Unlike their other social experiments before it, Google+ has the most potential to bring about change in the online social networking ecosystem and, with a wildly successful 2 weeks under their belt, Google is poised to become a serious competitor, if not the one to beat.

Resistance is Futile, Integration is Inevitable.

Enabling your users to interact with your application through the use of open APIs has been a staple of the open web since its inception over a decade ago. Before that, the notion of letting people modify your product helped to create vast communities of people dedicated to either improving the user experience or creating features that the original creators overlooked. I can remember my first experience with this vividly, creating vast levels in the Duke Nukem 3D level creator and showing them off to my friends. Some of these community-developed products can even become the killer feature of the original application itself and, whilst this is a boon for the application, it poses some issues for the developer.

Probably the earliest example of this I can think of would have to be World of Warcraft. The client has a pretty comprehensive API available that enables people to create modifications to do all sorts of wonderful things, from the more mundane inventory managers to boss timer mods that help keep a raid coordinated. After a while many mods became must-haves for any regular player, and for anyone who wanted to join in the 40 person raids they became critical to achieving success. Over the years many of these staple mods were replaced by Blizzard’s very own implementations, ensuring that anyone who was able to play the game was guaranteed to have them. Whilst most of the creators weren’t enthused that all their hard work was being usurped by their corporate overlords, many took it as a challenge to create even more interesting and useful mods, ensuring their user base stayed loyal.

More recently this issue has come to light with Twitter, who are arguably popular due to the countless hours of work done by third parties. Their incredibly open API has meant that anything they were able to do others could do too, even to the point of doing it better than them. In fact it’s at the point where only a quarter of their traffic is actually on their main site; the other three quarters comes from their API. This shows that, whilst they’ve built an incredibly useful and desirable service, they’re far from the best providers of it, with their large ecosystem of applications filling in the areas where it falls down. More recently however Twitter has begun incorporating features into its product that used to be provided by third parties, and the developer community hasn’t been too happy about it.

The two most recent bits of technology that Twitter has integrated are the new Tweet button (previously provided by TweetMeme) and their new link shortening service t.co, a job previously handled by dozens of others. The latter wasn’t unique to Twitter at all and, whilst many of the newcomers to the link shortening space made their name on Twitter’s platform, many of them report that it’s no longer their primary source of traffic. The t.co shortener is then really about Twitter taking control of the platform they developed, and possibly using the extra data they can gather from it as leverage in brokering advertising and partnership deals. The Tweet button, however, is a little bit more interesting.

Way back when, news aggregator sites were all the rage. From Digg to Del.icio.us to Reddit there were all manner of different sites designed around the central idea of sharing online content with others. Whilst the methods of story aggregation differed from service to service, most of them ended up implementing some kind of “Add this story to X” button that could be put on your website. This served two purposes: it helped readers show a little love to the article by giving it some attention on another site, and it gave the other site content to link to with little involvement from the user. The TweetMeme button then represented a way to drive Twitter adoption further and at the same time get even more data on their users than they previously had. Twitter, for what it’s worth, said they licensed some of the technology from TweetMeme for their button; however they have still in essence killed off one of their ecosystem’s popular services, and that’s begun to draw the ire of some developers.

The issue many developers take with Twitter building these services into their main product is that it puts a chilling effect on products based on Twitter’s ecosystem. Previously, if you had built something that augmented their service, chances were you could build yourself quite the web property. Unlike other companies, which would acquire these innovators’ companies in order to integrate their technology, Twitter has instead taken to developing the same products themselves, in direct competition with those innovators. The reason behind this is simple: Twitter doesn’t have the cash available to do acquisitions like the big guys do. They’re kind of stuck between a rock and a hard place as, whilst they need to encourage innovation using their platform, they can’t let it go on forever, lest they become irrelevant past delivering an underlying service. Realistically the best option for them is to start generating some cash in order to start acquiring innovators’ technology rather than out-competing them, but they’re still too cash-poor for that to be viable.

In the end, if you build your product around someone else’s service you’re really putting yourself at their mercy. The chill that Twitter is putting on their developers probably won’t hurt them in the long run, should they not continue to copy others’ solutions to their problems; however their fledgling advertising-based business model is at odds with all the value-add developers. Twitter is quite capable of doing some impressive innovation on their own (see #newtwitter) but their in-house development is nothing compared to the hordes of third parties who’ve been doing their part to improve the ecosystem. I’m interested to see what direction they go with this, especially since I’m working on what could be classed as a competing service.

Although I’m hoping people don’t see it that way 😛