Monthly Archives: April 2010

The Internet Filter: Drop it Like it’s Hot.

In all honesty I’m starting to get bored with bashing the Internet filter. I’ve attacked it from almost every angle and there’s no way the current idea that Conroy and his department have drummed up can be spun into something I could wholeheartedly endorse. I’ve been willing to put my partial support behind a filter that at the very least lets you opt out, but even then I’m only doing so because, apart from killing the legislation completely, it seems to be the only idea gaining any traction in parliament. It’s been almost 2 years since the Rudd government started talking about a filter, many months have passed since it was supposed to be implemented, and frankly I just keep hoping it will go away so I don’t have to think about it anymore.

It’s no secret that it’s not particularly popular policy, especially with our friendly Internet giants and overseas counterparts. This is particularly true of the technology community, who have polled overwhelmingly against the filter, to the tune of over 90%. There’s still been little study of what the wider Australian populace thinks about the policy, but what has been done shows that most people don’t want the government or ISPs to be in charge of what they or their children see on the Internet, and the majority are concerned that once the filter has been implemented it will be abused for political purposes.

But who am I kidding, if you’re reading this blog it’s pretty much guaranteed you’re in opposition to this filter as well and you already know all these facts. What has just recently come to pass is the admission by omission from the government that even they don’t believe this is popular policy, and that they’re pushing it onto the backburner so it doesn’t become an election issue:

KEVIN Rudd has put another election promise on the backburner with his controversial internet filtering legislation set to be shelved until after the next election. A spokeswoman for Communications Minister Stephen Conroy said yesterday the legislation would not be introduced in next month’s or the June sittings of parliament.

With parliament not sitting again until the last week of August, the laws are unlikely to be passed before the election.

Labor promised before the last election it would force internet service providers to block access to illegal content such as child pornography and X-rated images.

With Conroy spouting such fervent rhetoric against those who would oppose the scheme you’d think that he was damned sure this was what the Australian public wanted and would do anything to see it passed. Being held back until after the election tells us a couple of things. First, Rudd doesn’t believe that pushing this through (and thus following through on an election promise) will win him any favours, and you can be damned sure the tech crowd would vote against him in droves if he did. Secondly, the rhetoric Conroy constantly spews may not mirror his own views as consistently as we thought, since it wasn’t him but one of his spokespeople who made the announcement. Had his belief in the filter not been faltering you can be assured that he would have been the one talking about it, since up until now he’s been the only one talking to the press about it.

Broken election promises are nothing new, but when something like this, which started out as a proposal no one cared about after NetAlert failed and it was going to be opt-in (even that apparently wasn’t feasible), gets pushed back again and again, you start to question why it keeps happening. I’ve always been of the mind that the government is trying to let it die a slow and quiet death so that they can say they tried to do something, then ramble off a list of excuses to save face. Tragically it seems that we’re doomed to a constant cycle of delays and rhetorical battles between the government and the wider world, with no end in sight. If they would just hurry up and try to pass this thing we could hopefully see it shot down once and for all. It seems for now we will be denied that pleasure for at least another 5 months.

ProcrastinationOn: Apply Directly to the Forehead!

It was almost 9 months and 200 posts ago that I thrust my pre-alpha version of Geon into the world for everyone to see. Thanks to my innate shyness I didn’t go the whole hog and release it into the wild, and I’m still glad for that as the first version was, to put it lightly, a smoking pile of crap. Had any more than about 5 users got on it at once (the record stood at 2) my server would have fallen on its face trying to deliver all the content over my poor little 1Mbps connection. Discovering Silverlight was the saving grace, teaching me that I could use my client side programming skills to do what I wanted on the web without having to completely relearn everything, and the next few versions of Geon came along that much faster.

Right now I’m comfortable enough to let every reader of this blog know that there’s a new version of Geon up (the adventurous amongst you would’ve noticed a link to the new version in a previous post) and it comes along with a UI change I had been alluding to a while back. In essence the change was done to increase the readability of the information streams you’ve selected; prior to this you just had the one bar, which would scroll along madly if you dared to look at multiple locations at once or just so happened to add Twitter from anywhere that was mildly populated. In addition to the UI changes I have also made the switch to Silverlight 4, which added things like native scroll wheel support (I can’t tell you how happy that made me) and a slight performance improvement over Silverlight 3. Thankfully none of the breaking changes made in the transition affected Geon, so the upgrade was only a few clicks and a restart of Visual Studio away.
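For the fellow Silverlight tinkerers: the scroll wheel change really is as simple as it sounds. A rough sketch of what the native support looks like (MapCanvas and ZoomLevel are made-up names for illustration, not Geon’s actual code):

    using System.Windows.Controls;
    using System.Windows.Input;

    public partial class MapView : UserControl
    {
        private int ZoomLevel = 10; // illustrative zoom state

        public MapView()
        {
            InitializeComponent();
            // Silverlight 4 exposes MouseWheel directly on UIElement,
            // where Silverlight 3 needed JavaScript interop to get it.
            MapCanvas.MouseWheel += OnMapMouseWheel;
        }

        private void OnMapMouseWheel(object sender, MouseWheelEventArgs e)
        {
            // e.Delta is positive when the wheel scrolls up (zoom in)
            ZoomLevel += e.Delta > 0 ? 1 : -1;
            e.Handled = true; // stop the browser scrolling the page instead
        }
    }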

The new UI works similarly to the old one: you select your location first by clicking the location button on the left hand side and then clicking the spot on the map you want to see. Then you can add information feeds from the same bar in a similar way, and they’ll automatically attach themselves to the closest location circle on the map (see the sketch below). As of right now all the feeds available work apart from Facebook (you’ll get a pop up asking to connect with your Facebook account but no information will appear) because their geolocation support is still not fully implemented and I’m not keen to do a whole lot of mangling to get results that are more than likely irrelevant anyway¹. Once you’re done adding the streams hit the button up in the top left hand corner to see them in all their glory. Rows are locations and the columns are the feeds, all titled properly so you can tell what’s what.
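For the curious, the “closest location circle” logic is nothing fancy. A minimal sketch of the idea (LocationCircle and friends are made-up names, not Geon’s actual code; real map work would use geodesic distance, but a squared Euclidean comparison is enough to pick a winner):

    using System.Collections.Generic;
    using System.Windows;

    public class LocationCircle
    {
        public Point Center { get; set; } // map co-ordinates of the circle
        public string Name { get; set; }
    }

    public static class FeedPlacement
    {
        public static LocationCircle FindClosest(Point clicked, IEnumerable<LocationCircle> circles)
        {
            LocationCircle closest = null;
            double best = double.MaxValue;
            foreach (var c in circles)
            {
                double dx = c.Center.X - clicked.X;
                double dy = c.Center.Y - clicked.Y;
                double distSq = dx * dx + dy * dy; // no Math.Sqrt needed just to compare
                if (distSq < best)
                {
                    best = distSq;
                    closest = c;
                }
            }
            return closest; // null if no circles have been placed yet
        }
    }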

Having all that done, however, means I’m now out of options for procrastinating. You see, whilst this version included some new streams (videos and Wikipedia), a much better UI and a cleaner back end (mmmm JSON), most of the heavy lifting had already been done in previous versions. After getting the initial hard parts of the UI out of the way most of it could have been done inside of a week, although I casually programmed it over the course of a month or so. The next thing on the list is the real meat of Geon: the request system.

That pretty much means I have to start diving into something I’ve never coded before: web services. Whilst I can’t really say I’ve been avoiding this I haven’t been actively looking to do anything about it either, apart from the casual search for tutorials on how to build user authentication systems. I know I’m just being a big baby about this and I should just suck it up and do it, but it’s been so darn easy up until this point that I’d been wondering why no one had done it before. As it turns out the rudimentary parts that most netizens have come to expect are the most complex and tiresome, which is why it hasn’t been done (and also explains why some services don’t have logins at all).
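From all the tutorials I’ve been trawling, the one consistent piece of advice for the login side of things is to never store passwords directly, only a salted hash of them. Something along these lines is probably where I’ll end up on the server (a minimal sketch using the stock .NET crypto classes, nothing Geon-specific):

    using System.Security.Cryptography;

    public static class PasswordHasher
    {
        private const int Iterations = 10000; // deliberately slow the hash down
        private const int HashSize = 32;

        public static void Hash(string password, out byte[] salt, out byte[] hash)
        {
            salt = new byte[16];
            new RNGCryptoServiceProvider().GetBytes(salt); // random per-user salt
            // PBKDF2 via Rfc2898DeriveBytes, which ships with .NET
            hash = new Rfc2898DeriveBytes(password, salt, Iterations).GetBytes(HashSize);
        }

        public static bool Verify(string password, byte[] salt, byte[] expected)
        {
            byte[] actual = new Rfc2898DeriveBytes(password, salt, Iterations).GetBytes(HashSize);
            if (actual.Length != expected.Length) return false;
            bool match = true;
            for (int i = 0; i < actual.Length; i++)
                if (actual[i] != expected[i]) match = false; // check every byte
            return match;
        }
    }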

I’ve decided to suck it up and just start hammering away at it until I get the thing going. It’s much like when I first started out coding Geon and was using RSS feeds for everything: it was just the first way I found to do things. After fiddling around for a while and getting some advice from a real developer mate I found that, had I just taken the time to research it, the other formats would have made everything so much easier. I’m sure with an afternoon of searching under my belt I’ll be ready to tackle the big bad demon that is the client/server architecture of Geon.
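To give you an idea of how much friendlier the other formats are: pulling apart a JSON response in .NET takes little more than a data contract and a couple of lines, where my old RSS approach had me walking XML trees by hand. A quick sketch (the Tweet class and its field names are made up for illustration):

    using System.IO;
    using System.Runtime.Serialization;
    using System.Runtime.Serialization.Json;

    // Illustrative contract for a geo-tagged message, not a real API schema.
    [DataContract]
    public class Tweet
    {
        [DataMember(Name = "text")] public string Text { get; set; }
        [DataMember(Name = "from_user")] public string From { get; set; }
    }

    public static class JsonHelper
    {
        // DataContractJsonSerializer is available in both .NET and Silverlight,
        // so the same parsing code can run on the client and the server.
        public static T Parse<T>(Stream json)
        {
            var serializer = new DataContractJsonSerializer(typeof(T));
            return (T)serializer.ReadObject(json);
        }
    }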

¹I thought I should elaborate on this a little bit. There have been rumours of a geo-API from Facebook for a while now, but with their developer conference f8 over and done with I haven’t heard anything solid about its actual implementation. They’ve tweaked their privacy policy to allow the storage of geo information in Facebook, however the API as of right now remains unchanged. There are a lot of apps out there making use of geo data and Facebook but there’s no way to extract that data out of Facebook currently. You can kind of figure stuff out by finding a user’s hometown location, geo-coding that to co-ordinates, figuring out if that’s within your bounding box and then displaying messages from them if it’s within your area, HOWEVER that’s an incredibly messy way of doing it and honestly isn’t the kind of thing I was looking for. I’ll be integrating Facebook information when they finalize their geo-API but until then it won’t work.
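To illustrate just how roundabout that workaround is, here’s roughly what it would look like in code (everything here is hypothetical, none of it is real Facebook API code, and the geocoding step is the expensive, error-prone part I’m not willing to do):

    public class HometownFilter
    {
        public struct GeoPoint { public double Lat, Lon; }
        public struct BoundingBox { public double North, South, East, West; }

        public bool IsHometownInArea(string hometown, BoundingBox box)
        {
            if (string.IsNullOrEmpty(hometown)) return false; // many profiles omit it

            GeoPoint p = GeocodeHometown(hometown); // expensive external lookup

            // Simple containment test against the area being watched.
            return p.Lat >= box.South && p.Lat <= box.North
                && p.Lon >= box.West && p.Lon <= box.East;
        }

        private GeoPoint GeocodeHometown(string name)
        {
            // Placeholder: a real version would call out to a geocoding service.
            throw new System.NotImplementedException();
        }
    }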


X-37B, A Shuttle It Ain’t.

With the retirement of the Shuttle looming over our heads, even though it’s been moved back a couple of months (ARGH!), organisations with an interest in space have been looking for alternatives to ensure they still have access once the iconic craft roll back into their hangars for the last time. Whilst supply missions are more than aptly handled by the European ATV, Japanese HTV or Russian Progress, and the ferrying of people by the Russian Soyuz, it seems the military, who really didn’t get as much face time as they wanted when it came to the Shuttle, have gone ahead and developed their own purpose built craft, and boy does it ever look familiar:

That, my friends, is the X-37B, an orbital test prototype of the X-37 series of spacecraft. Don’t let the NASA badging on that plane fool you though: whilst the project was initially started by NASA it is now completely in the hands of the Department of Defense, with NASA retaining only a small informal involvement. Last week saw this craft successfully make its maiden flight into orbit, but not to the usual fanfare that a new craft attracts, and for good reason: everything about it is super secret.

About 10 years ago NASA began the X-37 project and invested quite a bit of cash into the development of the vehicle. Even back then the purpose of the craft was somewhat of a mystery, as its primary function would be the launch and retrieval of payloads in space. Realistically this capability was already covered by the Space Shuttle (and indeed this craft was going to be launched in the Shuttle’s payload bay until they figured out that would be a waste of money), and even that had been undermined by the fact that it’s cheaper to deorbit an old satellite and launch a new one than it is to bring the old one down for repairs and send it back up again. In 2004 the X-37 project was transferred to DARPA and became classified.

Usually that would mean the project would forever be surrounded in the mystery that accompanied its birth, but the acquisition by the Department of Defense clarified its purpose. The Shuttle owes its massive girth and plane-like design to the military’s involvement. Back then satellites were still expensive and the idea was that the Shuttle should be able to capture and retrieve broken military satellites (hence the large payload bay). Additionally there were some mission profiles which required the Shuttle to launch into a polar orbit, complete one orbit and then return to where it had launched from. Because of this the Shuttle had to have very large wings in order to be able to glide back to its original position, as the Earth would have rotated about 2,000km beneath it in the time it took to complete such a maneuver.

Looking at this diminutive cousin of the Shuttle you can see the characteristics of such mission profiles are still very prevalent, such as the large wings and payload bay. The differences begin when you look under the hood and find that it’s fully robotic, capable of completing almost every task without human intervention. It also carries a large solar array which allows it to stay in orbit for 270 days, an eternity when compared to the Shuttle’s measly 2 weeks. Additionally, unlike the Shuttle, which is in essence its own rocket (those 2 SRBs strapped to the side are just to get it started, most of the work is done by the 3 engine cluster on the back), the X-37 launches atop an Atlas V rocket. The engine you see on the back is used for maneuvering on orbit and nothing else.

Overall it’s a pretty nifty little ship, and really it should’ve been designed at the same time as the Shuttle. This craft serves the purpose of being a reusable transport to space designed to deliver and retrieve cargo, and the lack of a crew makes it that much more efficient at doing its job. Had such a craft been designed back then you can bet that the Shuttle would look nothing like it does today and, more importantly, it wouldn’t be the huge cash drain that it has been for NASA over the past decade. Still, there’s not much reason to dwell on that fact since it will soon be replaced by those upstarts in the private space sector, which, in my opinion, can’t happen soon enough. Hopefully, now that the military has its own craft for performing its super secret missions, they’ll keep their noses out of NASA’s business and with future craft we’ll avoid the whole design-by-committee debacle that was the Shuttle’s design process.

Self Referential Assholes.

There’s an unwritten rule that I adhere to on this blog that only a few people actually know about, but something you should all be aware of. For the most part, any article where I’m trying to express facts and not opinions will have links scattered to various corners of the web which support my points with evidence and original research. If you come across an article that’s rather link poor (which this article is shaping up to be) then it’s more than likely an opinion piece or original research. Either way you’re going to have to take it at face value, since I’m just some random person on the Internet, but then again so are all the people that I link to.

For a long time I avoided internally linking back to my own posts as I thought it looked kind of arrogant to reference myself as a point of information. However, since I’m rocketing towards a total of 300 posts on this blog, I’m finding there are many times when I’ve said something before, and taking another paragraph to re-explain a concept I’ve already covered in much more detail feels like I’m not doing the subject justice, so I’ll provide a link back to that post instead. It also helps that doing so provides a healthy bit of search engine optimization for the article in question, although it doesn’t seem to matter that much since my most popular articles so far have been about the iPad and the Internet filter.

If you frequent other blogs you’ll notice that this rule seems to apply to most of them, and even some popular news sites are beginning to cite references to sites other than their own. I’m a big fan of it, as I’ve spent many hours of fascinated clicking through links on articles that interest me to get that deep understanding the writer is trying to impart. This is coming from someone who, up until he found the wonders of RSS, didn’t read more than a couple of sites a day and now spends hours consuming vast amounts of media.

There are of course assholes out there who use this rule as a means to look like they’re being authoritative when in fact they’re really just after the SEO juice that this internal linking provides, for example this TechCrunch article I came across this morning (with links to prove my point):

Bloglines, the troubled RSS feed reader, has been down for the past 24 hours. The outage has even created buzz on Twitter (which goes to show some people still use it). When you visit Bloglines, the site has a message up that says it is down temporarily and will be “back shortly.” But with the site’s tumultuous history, you have to wonder how much longer Bloglines has before IAC will finally put it out of its misery. Bought by IAC in February 2005 for around $10 million, the site has been in jeopardy ever since the launch of Google Reader long ago, compounded by the shift from RSS to realtime news streams.

Applying my rule that a seemingly authoritative post would contain a healthy amount of links, you’d think this article was pretty up there in terms of supporting information. Couple that with the fact that TechCrunch is a pretty big site and you’d be forgiven for thinking this wasn’t just an opinion piece. But if you hover over all those links you’ll notice that nearly all of them point back to articles on TechCrunch, and if you actually follow them what you’ll find is an ever inward spiralling cascade of self referential links which are supposed to be supporting arguments for the points they make. They are, in essence, backing up their opinion with opinions they’ve expressed in the past.

Now I’ve usually got no problem with that, but when people are making claims about things, like for instance the death of RSS (really?), I’d like to see some links to actual, real evidence. Clicking through all their links revealed that someone there obviously has an agenda to push, as they’ve been bashing Bloglines for quite a while, yet if you look at the traffic of the site it doesn’t look like it’s going anywhere soon and realistically, if anything, the site’s traffic is growing. So much for that site going into the “deadpool” and the “death of RSS”.

Maybe it’s the skeptic in me, but when I see this kind of bullshit on the Internet I just can’t help but get enraged. It’s so easy to make sweeping allegations but it’s so much harder to back them up. I know I’ve done it before on this blog and I’m glad when someone calls me on it, because frankly no one should be allowed to get away with it. So if you see anyone trying to make a point where all they reference is their own material, think twice about what they’re saying before you believe it. Because if all they have to go on is themselves, then it all comes down to how much you trust that person.

And really, how much can you trust someone on the other side of an Internet connection?

HOWTO: Strangle Yourself With ITIL.

Looking back over my short 6 year career I’ve noticed that I’ve never really worked for any smaller organisations. The smallest team I ever worked in was a group of 20 people on the National Archive’s Digital Preservation Project, but even there I was technically part of the larger group of 200 or so people responsible for archiving things of interest for the Australian public. The largest was at Unisys, with my team consisting of around 30 people and the section I was a part of having well over 400 staff catering to all aspects of IT for a large government department. Working in such large environments has its benefits, such as lower amounts of responsibility and the opportunity to heavily specialize, but these are easily overshadowed by the drawback of needing that many people: managing the lot of them.

For the most part though I’d agree that it’s required. The last thing you need in a large environment is some cowboy system administrator making wide reaching changes which end up affecting thousands of people, or loose process definitions which end up confusing the heck out of anyone trying to get some work done. Once you reach a critical mass of users and staff, formalizing your procedures and policies starts to bring tangible benefits, and is the basis for the industry buzzterm Six Sigma. However it seems that some organisations are intent on strangling themselves to death with rigid frameworks, looking upon the horror they’ve created through rose coloured glasses.

Take for instance change management. At its heart it’s all about making sure you don’t go changing things that will inadvertently affect other things, and ensuring that all required stakeholders are informed of these decisions. When I was trained in ITIL I instantly recognised the benefit of such processes and was quite thrilled to see them accepted at the first real workplace I ever landed a job at. Once I had become familiar with its implementations though, my view of the glorious world of ITIL compliant processes was tainted with the harsh reality that for the most part it’s completely unachievable.

My current workplace is a glorious example of this. In what is supposed to be a relatively progressive workplace (they had 1.0 revisions of blade servers here, a risk very few took) the change process is still done manually. I.e. if I need to change something in the environment I need to fill out a change request (fine, that’s part of ITIL), print it out (hmmm, ok sure), take it around to all the service owners and get them to sign it (ummmm what?), get the Global Service Delivery manager and CIO to sign it (argh, why are they never in their office? Oh right, they have more important things to do) and then give it to the change manager (total time spent doing this: around 2 hours). The Change Advisory Board (which contains all the service delivery managers) then meets once a week to discuss the changes they’ve already signed, spending 30 minutes confirming that they actually signed them. Oh, and they might ask how the previous changes went. Maybe.

The above example demonstrates quite aptly how the vision of a process becomes horribly bastardized in the hands of those who don’t really understand it. Worse, even though the process is known to be horribly flawed, nothing has been done to change it in the many years it has been in place. It seems everyone is content to gripe about how bad it is, yet when improvements are made they amount to nothing more than rearranging the deck chairs on the Titanic, giving the impression something is being done when really it’s not (*cough*management theater*cough*). Couple that with the apparently abhorrent idea of scrapping the entire process and rebuilding it anew and you’ve got a recipe for one bad idea to exist for eternity, right up until the whole organisation collapses upon itself in a flurry of industry buzzwords and frameworks.

Many people who’ve come from smaller organisations and start ups often tell me this is a symptom of larger organisations, where bureaucracy reigns supreme. I can’t refute this position as I’ve never worked in such a situation, although I’m doing my darnedest to get there. Logically it holds true: the fewer people you have to consult to do something, the less time it will take to get it done, and the less likely it is that one of them will want modifications to your idea. After seeing so many organisations hang themselves on the ITIL/Six Sigma/Lean noose I’ve got to wonder if the frameworks themselves are flawed and the smaller organisations are immune to their tragedies simply because they haven’t tried to implement them.

Maybe I’m just sour because I’ve never really been in a position to change these processes. Ever since I started working I’ve seen ways that things could be improved, only to be told that they just wouldn’t work, no matter how I spun it. There’s the very real possibility that my view of the world is total crap and all my ideas would generally not work, but evidence is mounting that non-traditional approaches to business work, especially in our information rich Internet world. The time is fast approaching for me to put up or shut up, and hopefully my ideas will work out for the best.

Or I’ll fail miserably and come crawling back to the world of IT support, secretly crying in the corner of a server room somewhere ;)

Release Early, Release Often.

Whilst I’m no stranger to the business world I’m still a new player when it comes to developing usable products for a wide audience. My years of training as an engineer and short stint as a project manager gave me a decent amount of insight into designing products and services for a customer who’s shovelling requirements at you, but when it comes to designing something to requirements that are somewhat undefined, you can imagine I found myself initially dumbfounded. It’s one thing to have an idea in your head; bringing it kicking and screaming into the real world is another.

For the most part I began with an initial concept and started to flesh it out as best I could. The original idea behind Geon was (in my head) called “What’s Going On?”, whereby you could plonk down an area on a map and send a question to everyone running the application in that area. The people there could then, if they so wanted, respond via their phone client with some text, an image or a video. The main idea was to get people communicating, and secondary to that would be supplemental information from other sources. After socializing the idea a bit people seemed to think it would be an interesting service (although most declined to make serious comment until after they saw it in action) and the closest competitors looked to be throw-away applications that probably took the developers a couple of weeks to slap together. Things were looking good, so I started hacking away.

Behold the horror that was my first attempt, something I almost foolishly went ahead and tried to promote amongst my favourite tech sites. The first iteration was a horrible compilation of ASP.NET and various client libraries that I managed to scrounge from all over the Internet. For the most part it worked as intended, being able to pick up information from various sources depending on your location. The problem was, however, that it was ugly, unintuitive and relied rather heavily on my poor little web server to do all the heavy lifting. Additionally, after walking a blogger friend of mine through using it, he immediately suggested a couple of features that had just never crossed my mind and upon consideration would be absolutely essential in high information density areas. They were so good that even the latest incarnation of Geon incorporates his suggestions.

Looking back over all my experience in designing solutions I realised that I had always been spoiled by having the problem handed to me on a silver platter. When you’re working for a client it’s pretty easy to figure out what they need when they’re telling you at every turn what they want. Sure it might be a hassle to make sure they properly define their requirements, but at least you have a definitive information source on what will constitute a successful outcome. When you’re developing something where you’re not quite sure who your client will be, the game changes, and you find yourself looking around for answers to questions that might never have been asked before. Right now I find the majority of my answers through other people’s web services, hoping that emulating some of their characteristics will bring along with it some of their success.

At the core of all this is the software development philosophy of release early, release often. Whilst my product probably isn’t ready for prime time, the more I show it to people who will (hopefully) end up as my users the more insight I get into what I should and shouldn’t be doing with it. Even better was discussing it with some of my proper software engineering friends, who suggested different ways of doing some things which not only simplified my code (to the order of hundreds of lines, thanks Brett ;)) but also opened up services that up until now seemed baffling in the way they returned their data. I guess the lesson to take away from this is that the more you collaborate with others the better your end product will be, which is hard for someone who’s as protective of his creations as I am.

I know I harp on a lot about Geon on this blog (and I’m sure you guys are sick of hearing about it!) but it has been the source of many eye opening moments and it’s all too easy to get caught up in the excitement of sharing something I created with the world. I was never that creative (I can’t draw, I’m not a very sporty person and my music creation skills have been in hiding since my debut song Chad Rock (that’s an anagram of the real title, FYI) earned me unwanted infamy in my group of friends) and apart from this blog I’ve never really had any other creative outlets. I guess I just want to let the wider world know how exciting it is to create something, even if I sound like a hyperactive 2 year old with a new toy ;)

Plus the more I talk about it the more likely I am to work on it, since I feel guilty for being all talk and no action.

Management Theater.

I’ve only ever been in a managerial type position 3 times in my whole life, and 2 of those were at university. The first was for the most part a success, due mostly to a solid team of people with one star member who was able to complete work in minutes that took the rest of us days. The second was overall a success, but my role as a manager was completely and utterly useless and the project would have done much better had I just not bothered trying to manage my 3 team members at all. Whilst you’d think an experience like that might have turned me off management entirely, I still held aspirations of being a project manager some day, only to get into said position and leave it 6 months later. So whilst I may not have been anyone’s boss for an extended period of time I’ve had a taste of the managerial world, so I know when people are, how does the Internet put it, doing it wrong.

For the most part I’ve seen 2 types of managers in my time: those who rose from the ranks of their former colleagues to become the managers they are today, and those who were somehow born into management positions, either from an outside company or via qualifications. The first tend to have a good grounding in what it is like for their underlings and are usually pretty attentive to their wants and needs. However they also usually lack any formal managerial skills and tend to be too involved in day to day matters to make decent managers. The latter are usually better at being managers in the general sense (shielding their underlings from the workplace politics) but have more trouble interfacing with those they are supposed to lead. It then follows that these kinds of managers aren’t as liked as their risen-from-the-ranks counterparts (and they form the basis of the Pointy Haired Boss character in Dilbert). Overall neither one is inherently worse than the other; they’re just two sides of the same coin.

Despite how they came into their position of power, managers at all levels engage in what I like to call management theater. Much like its cousin security theater, which describes security measures undertaken to give the feeling of security without actually increasing it, management theater is the practice whereby a manager appears to be managing a group of people when realistically they’re not. The management function they’re meant to provide is in some way usurped from either below (i.e. underlings managing their own workloads and fighting their own political battles) or above (another manager doing their job for them). Whilst most won’t engage wholesale in this behaviour, many will in some way engage in acts that appear to be managerial when in fact they are anything but.

Take for instance a recent event where I work. The process was designed to give all the underlings, from the lowest ranks to just under executive management, a voice with which to communicate their concerns to the entire section. In essence it was a good idea but, as always, the implementation was extremely lacking. The whole event smacked of management theater as the managers spruiked the fact that the goals set out then would be implemented by management, giving the illusion that the underlings had some power over their current work situation. Here we are over 4 months later and I’ve yet to see one of the ideas actually gain any ground, or any reports from management about how all the wonderful ideas gained from the junket are changing the way we do our day to day work. The whole exercise was a pointless waste of everyone’s time, done as a management theater exercise to make it look like they wanted to do something about everyone’s grievances when in fact they never had any intention of following through.

I wish I could say that this kind of malarkey was limited to government agencies, but it was rampant in the private sector too. A great example of this was back in my days at Unisys, when we were canvassed for an opportunity to become Citrix administrators, with the juicy part being that we’d get sent on week long training for it. Seeing how much of a benefit this would be to both my current position and future career I put my hand up, along with 3 other people. The training was good and I was all geared up to take on some more work as a Citrix admin, but instead they hired 2 specialists to fill the role, neatly negating the need for the training I had just gone through.

The management theater performed in this case was all about the managers wanting to look good for our client, saying that when the new system was installed they’d have 4 able bodied people ready, willing and able to take control of it. However, with the project budget only big enough to cover the 2 specialists, and the system in use by less than a few hundred people, having a team of 6 dedicated to it was woefully inefficient, and thus we were never called on to do any Citrix administration duties. As time went on our skills in the area faded to the point of irrelevancy, and my manager scolded me for leaving after they had sunk so much cash into me, oblivious to the fact that I hadn’t used one bit of the training since I received it.

All these reasons have culminated in the realization that I probably won’t be happy until I’m working for the one person I can’t disagree with: myself. The last 6 months have seen me attempting to build an empire out of my own skills and for the most part I’m succeeding. Time will tell if I can leave the workaday world completely, but when I can easily lose a day working on my own projects I know I’m doing the right thing. I just hope it will be enough to keep the bills paid ;)

Was Being Social Always This Hard?

If there’s one defining feature of Web 2.0 it’s that the focus shifted from one way information delivery to user centered interactions. Primarily I’d attribute this to the fallout from the dot-com crash, which fostered an environment for innovators to rise from the ashes of the former Internet giants. Such companies didn’t have the built in following of the companies that preceded them, so their best bet for success was to focus on drawing users into their various services. Once it became the in thing to be big on the Internet we saw the explosion of user centric services we see today, and the current starlets of the Web 2.0 stage are of course the social networks.

Owing their success to people’s innate desire to belong and a non-obvious competition element (read: the friend/follower/whatever counts), the social networks started out as just that, a place for you to keep in touch with real world friends. However as their popularity grew they inevitably attracted the attention of big business who, after many years, no longer had the bitter after-taste of the dot-com crash in their mouths and saw a large and as yet untapped market. From there it wasn’t very long before the user centric service became yet another essential part of every marketing campaign known to man, save for the few who see the social web as a passing fad.

For the most part though this actually increases the allure of social networks for most people. The fact that there’s even a tenuous connection between you and the celebrity du jour or your favourite company, whether it be following them on Twitter or becoming a fan of them on Facebook, gives the Web 2.0 generation that same sense of belonging they craved when they first joined their social networks. This works well for the other side of the equation too (the celebrities, Internet starlets et al) as there is little disconnect between them and their fans, meaning they are much more able to command the attention of their audience. When your audience numbers aren’t big enough for you to command your own research and marketing teams, social networks become your lifeline to staying in touch with your audience and hopefully keeping them on as fans.

However after using the top tier of social networks for a couple of years I’ve started to notice an interesting yet puzzling trend. For the most part people will usually settle on their network of choice, which is largely centered around what their highest perceived value of said network is. For the vast majority the pervasiveness of Facebook amongst a wide demographic makes it the best for connecting with friends. Others crave the constant stream of consciousness that is Twitter, whilst some may just prefer to see videos from a select bunch of people, thus gravitating towards Youtube. For those on the other side of the equation, those looking to exploit the ability to capture an audience, it seems you can’t be as choosy with your social networks. You’ve basically got to be on all of them.

Readers of this blog may or may not know the many different ways I promote it, but I know how many of you come through my different channels. Taking a gander at my Google Analytics reveals that about 11% of you have this site bookmarked (or type the address in manually every time, you sadists!), 9% come from Facebook and a mere 2.5% come from links on Twitter. The vast majority of people who get here come through searches: just under 50%. But as many marketers will tell you, ignoring the long tail can be quite foolish, and that is the exact reason why I’m publishing myself through all of those mediums (hey, come on, I never made it much of a secret that I thought this blog would be my shot at Internet fame and fortune :P).

Many of the current generation of Internet starlets are in the exact same position. The place I see this being most prevalent is on Youtube, with every single big channel littered with links to follow them on Twitter and fan them on Facebook. They know that if they deliberately abstain from being available through these mediums they’re losing a potential audience. Never mind that the content they delivered is what made them popular in the first place; the fact that someone doesn’t participate in a certain social medium says more than “I can’t be bothered” to the social networking crowd. They would seem to take it as you not really caring about them, almost tantamount to ignoring them in public.

Partially this is what spurred my current conquest of aggregating a whole lot of information from across the Internet. Whilst I’m under no delusions that I will be the next big thing on the Internet, I’m finding more and more that the delivery of information doesn’t seem to matter as much as the people and places it is coming from. Whilst I’m still averse to calling my current project a social network (although I will admit I’m about to cave on that point), the high value information streams come from said networks. Thank the Web 2.0 gods for the mantra of being open and accessible, or I probably wouldn’t be working on the application at all.
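To give you a feel for just how open these services are: most of the high value streams are a single HTTP call away. Here’s a sketch of the sort of request an aggregator can make against Twitter’s public search API as it stands today (the URL format is from memory, so treat it as illustrative rather than gospel):

    using System;
    using System.Globalization;
    using System.Net;

    public class TwitterStream
    {
        public void FetchNearby(double lat, double lon, double radiusKm)
        {
            // geocode=lat,lon,radius restricts results to an area
            string url = string.Format(CultureInfo.InvariantCulture,
                "http://search.twitter.com/search.json?geocode={0},{1},{2}km",
                lat, lon, radiusKm);

            var client = new WebClient();
            client.DownloadStringCompleted += (s, e) =>
            {
                if (e.Error == null)
                    OnResult(e.Result); // hand the raw JSON to the parser
            };
            client.DownloadStringAsync(new Uri(url)); // async, Silverlight-friendly
        }

        private void OnResult(string json)
        {
            // parse and display the returned stream
        }
    }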

I guess what I’m really getting at here is that the segmentation of social networks would on the surface appear to be capturing different markets when in reality it’s just the same market duplicated several times over. Hats off to them for doing it though, as traditional industry couldn’t have fathomed capturing the same market 3 times over, but it feels like there’s so much duplication of effort for little benefit.

Maybe it’s the engineer in me seeing redundancy where it’s not needed that has led to this feeling of wasted effort, but every time I see those familiar icons on the side of a blog or whatever page to link up via various social networks I always feel a twinge inside. We live in an age where information is so accessible, yet we seem intent on erecting walled gardens everywhere that serve no purpose but to make dissemination of that information harder than it needs to be. Maybe I’m wrong and the simple act of providing an aggregate interface to all these services will change people’s view of such networks, but if that’s the case I’m one genius kid in a garage away from being overtaken as the aggregator to use on the Internet.

39 Years On, I Salute You Russia.

I must admit I don’t give the Russians enough credit for the work they do in space. If a Shuttle launches you can bet your bottom dollar that I’ll write a post up about it, and it’s guaranteed I’ll be letting everyone else know about it if they don’t read my blog. Yet at least 5 times a year Russia launches a Soyuz craft carrying 3 *nauts, and in between those they launch another 4 or so Progress craft to resupply the International Space Station. So whilst their craft might not be as iconic and grandiose as the Shuttle, they are in essence the innovators of cheap, reliable access to space and have been for many decades.

Really though, Russia’s prowess in terms of pioneering space technologies goes much further than that. 39 years ago today they launched an extremely ambitious project called Salyut 1, the first ever space station. For the most part you would call it an observatory, as the majority of the work during Salyut’s maiden flight was Earth facing observation plus a few deep space spectrographs. Like with all firsts in space there were many problems: the first crew was unable to dock due to their docking mechanism failing, and the second crew, who were tragically lost upon re-entry but whose loss has since ensured that none will be lost in that way again, were forced out of the station by numerous problems including an electrical fire (probably the worst thing that can happen in a high pressure, oxygen rich environment).

The Salyuts were fairly spacious for their time, having just under 100m³ of living volume for the visiting cosmonauts (compare that to the ISS’s living volume of 373m³). Couple that with the 20 portholes that littered the spacecraft and life up there wouldn’t have been too bad, especially for those cosmonauts who were used to the cramped confines of the Soyuz which, up until recently, didn’t even have a window in the orbital module (not that it needed one anyway). Taking a tour of the inside reveals that most areas had enough room to be quite comfortable; they even had a treadmill.

True to their core philosophy of repeatable mission profiles, Salyut 1 wasn’t the last of its kind. The Salyut program flew another 8 missions and was responsible for many of the components that made up the Mir space station that followed. The last of the Salyut series even managed to stay up in orbit for over 4 years and was visited by a total of 10 crews. Compare that to the American offering of the same era, which managed to keep a station in orbit for 6 years but only ever visited it 3 times, and you can see why even today the core of our biggest space station ever is in fact a Russian module called Zvezda. It’s no wonder the Chinese turned to Russia for their space program.

The Salyut craft also had a slightly darker side, with 3 of the missions (2, 3 and 5) designated as Almaz military Orbital Piloted Stations. These are the only (known) craft to ever fly weapons in space, and whilst the weapon they carried, a Nudelman-Rikhter NR-23 cannon, wasn’t anything really destructive, it was still successfully fired at a test satellite on orbit. Other than that however they were just your typical military operation, with the exception that they were piloted by humans on orbit rather than operated from the ground.

Despite this, however, the Salyut missions were another step forward in humanity’s endeavours in space and we owe a great deal to Russia for them. Without their courage and sacrifice there’s no telling how much longer it would have taken us to build something as impressive as the ISS, or the technology to keep humans alive in orbit for periods measured in months instead of weeks. I believe I speak for many of us space nuts in saying that on today of all days, we salute you Russia.

My Trouble With Game Reviews.

It’s been just on 5 months since I took it upon myself to start reviewing some of the more well known gaming titles and for the most part it’s been pretty enjoyable. Up until about a month ago I was able to play my way through an A list title every week or two and usually got the review out the following Monday morning. They’re great blog fodder, as gaming is something I’ve been passionate about for many years and they’re probably some of the easiest writing I’ve ever done. Casting an eye back over them though, I see that for the most part my reviews are overwhelmingly positive, with no game scoring below an 8 out of 10 and most criticisms forgiven rather quickly. After a while I began to hope for a really bad game to cross my path so I could slam it on my blog, just for something different.

After actually seeking a bad game out it all became clear why I’d rarely ever reviewed one: I just can’t finish the bastards.

Take for instance Bayonetta. If you’re in the business of knowing about games you would’ve likely heard of it a long time ago as the new IP from famed Devil May Cry creator Hideki Kamiya, which managed to achieve the coveted 40/40 score from Japanese gaming magazine Famitsu. I’d heard about it long before the first review came out and was intrigued by the buzz surrounding this little known game, and ended up buying myself a copy about a week after it came out. After coming off the high of finishing Assassin’s Creed II I was ready, willing and able to sink my teeth into another blockbuster title. What followed however was a cheesy, hyper-sexualized game with an impossibly proportioned librarian nymph whose battle suit is made from her own hair, which she uses to smite angels. I’ve never been much of a fan of hack and slash games but I was willing to give it a go considering its extremely glowing reviews; after about 4 hours of game play, though, I just couldn’t force myself to continue. Sure I wanted to get my money’s worth (I’d just paid the equivalent of 5 movie tickets for 4 hours of entertainment, geeze) but in the end Bayonetta sits next to my PS3 gathering dust, begging me to put it out of its misery.

That’s not the only example either. In fact the majority of games I’ve come across recently have been rather sub par when compared to the first quarter’s releases. Here’s a list of the games I’ve tried to play and had to put down for one reason or another:

  • White Knight Chronicles: A game that haunted me for so long that nothing could stop me from buying the damn thing the second it was available. What I was greeted with when putting the disc into my PS3 was, however, a far cry from the image built up in my head. As it turns out WKC is a single player MMO with massive amounts of cut scenes (you don’t get to actually play the game for a good 30 mins) and all the fun of grinding and levelling. I should’ve cottoned on when they only released it in Japan initially, but really, when a game fails to grab me in the first couple of hours and the average playtime is about 25~30 hours, I just lose all hope. This is probably why I’ve never really got into the Final Fantasy series (apart from its horrible turn based combat system), and really, if I’m going to grind any game I’m going to do it where I can chat to all my friends.
  • Tomb Raider: Underworld: I got this as part of the Eidos pack I purchased mainly to get Arkham Asylum. Since I’d played many of the older titles in the series I thought it would be nice to revisit the jump puzzle 3rd person shooter for a little bit of nostalgia and a refreshing change to my usual diet of A list titles. What I was met with however was a buggy game that crashed no less than 6 times in an hour and would randomly fail to render the screen, leaving me with a black nothingness to stare at until I could CTRL + ALT + DEL my way out of it. I couldn’t actually play this game for more than an hour because of this, although I will admit their auto-save feature is top notch, rarely losing more than a few minutes of progress. That’s not enough to gloss over the fact that crashing every 10 minutes or so makes the game completely unplayable.
  • Red Faction: Guerrilla: Yet another game I picked up in a pack that came with hearty recommendations from a few friends. The game’s core mechanic, where pretty much everything is destructible, plays quite well and there’s infinite fun in smashing the crap out of a large building with just a sledge hammer. The real problem comes however when your FPS drops below 60 and the game’s engine freaks out and starts shovelling on a ton of input lag. Now I’m not a game developer, I don’t even play one on TV, but I know bad programming when I see it. The input lag became so bad that there was a definite 1 second delay between key presses and something happening on the screen. When you’re doing, say, a vehicle mission that is quite fast paced this makes the game annoyingly difficult for no reason whatsoever. Sure the problem went away when I lowered all my settings to nothing, but realistically every other game I’ve played thus far has been run at max settings without these problems. Couple that with the mediocre story and lack of eye candy and I can only play this game in 1~2 hour bursts, and I’ve only done that about 3 times so far.

The end result of all this? I caved and restarted my World of Warcraft subscription. I was instantly hooked, as things that used to take hours to get organised and completed now take less than 20 minutes and it seems my dreams of good loot raining from the sky have come true. It’s so easy to get gratification that I instantly dropped any idea of powering through any of the 4 titles I mentioned in favour of spending some quality time with my little hunter avatar. I feel infinitely dirty for doing so, but it’s the good kind of dirty.

It really goes to show just how good the first couple of months of this year were for us gamers, and looking back over all my reviews I stand by all the scores I gave out. It’s disappointing to not be able to write a review of a good game every other week, but when I just can’t bring myself to finish one it tells me that it’s probably not deserving of a review, even a bad one. I’ve got high hopes of writing another good review soon (Just Cause 2 is looking like a prime candidate) but until then I’m going to go wallow in my addiction to World of Warcraft once again.