So a couple of days ago I caught wind of yet another upcoming Google service called Buzz. On the surface it looked like another attempt to crack into that oh-so-lucrative area of social networking (remember Orkut? Still big in Brazil and India, apparently), but with a slight twist: it was going to appear in Gmail. Initially I wrote this off since I don’t use the Gmail interface very often (I’m more of an Outlook kind of guy), but when I logged in this morning and was invited to give Buzz a go I thought I might as well give it the once-over to see if there would be any value in switching across.
So the integration into Gmail is pretty seamless; it’s just another folder on the web interface. With Gmail attracting some 150 million users every month (less than half of Facebook, FYI), they have a good number of eyes on their product already. Still, it will be interesting to see the conversion rate from regular Gmail users to Buzz, as the welcome screen lets you opt out completely with one click. There are no real bells and whistles on the landing page for Buzz either, so you’re not going to have to duck and weave your way through a new UI to get Buzzing. Overall you could be mistaken for thinking that Buzz was just a strangely named email folder with an icon.
I set about adding contacts to my Buzz page to see how adding people would go. Much like on Facebook, searching for anyone’s name directly usually turns up thousands of people you’ve never seen before. You can search through your contacts, but this is probably the first place where Buzz falls down. To add all my Gmail buddies (which are few, since I don’t use the web interface) I had to go to the search box and type in their names one by one. I wouldn’t want to have a long list of people to add, as I’d have to type them all in again to add them to my Buzz feed. Also, while running under Firefox 3.5 I had the search box lock up on me at least 3 times and had to wait for the script-kill pop-up to regain control of my browser. Granted, this was the only technical difficulty I had with it (a long way from Wave, which we managed to crash regularly), but any web application that locks up my browser doesn’t give me a good impression, especially when it’s something from Google.
After getting all my contacts into the list (and noticing that they hadn’t posted anything to Buzz yet) I started adding in some “connected sites”. These are basically sites that you either contribute to, like YouTube and Twitter, or sites you own, like this blog. If you’ve created a Google Profile before you’ll be familiar with this process, and the list they create is drawn from the same information. Most notably, if you’ve used Google’s Webmaster Tools it will pick up on the sites in there, as well as some other services that use your Google login. Unlike their profile service the number of extra sites is quite limited, with services like YouTube, LinkedIn and Orkut missing from the list. This is strange considering 2 out of 3 of those are in fact owned by Google.
So the real meat in Buzz seems to come from its ability to aggregate information from a whole bunch of sources into one location. I can understand the motivation behind this, as it is pretty much the same idea that drives Geon. There’s also the fact that Buzz will integrate with other Google services like Maps, and it will let you export a person’s feed as RSS. It would be quite an understatement to call this anything less than a goldmine for Geon, as at their core these 2 technologies drive the information that will be available through it. For good measure Google slapped on the ability to post directly to Buzz, which I think is completely useless but is required to get those Gmail users using Buzz sooner rather than later.
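To make the RSS export angle concrete, here’s a minimal sketch of what consuming an exported feed looks like, using only Python’s standard library. The feed content is an inline sample I made up; in practice you’d fetch a person’s actual feed URL, which I haven’t shown since the exact Buzz endpoint is an assumption on my part.

```python
# Sketch: pulling (title, link) pairs out of an RSS 2.0 feed, the format
# Buzz reportedly exports a person's feed in. The sample feed below is
# invented for illustration.
import xml.etree.ElementTree as ET

SAMPLE_FEED = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Example Buzz feed</title>
    <item><title>First post</title><link>http://example.com/1</link></item>
    <item><title>Second post</title><link>http://example.com/2</link></item>
  </channel>
</rss>"""

def extract_items(rss_text):
    """Return (title, link) pairs for every <item> in an RSS 2.0 document."""
    root = ET.fromstring(rss_text)
    return [(item.findtext("title"), item.findtext("link"))
            for item in root.iter("item")]

for title, link in extract_items(SAMPLE_FEED):
    print(title, "->", link)
```

Once you have a list of feeds like this for your contacts, aggregating them into one stream is just a matter of merging and sorting by date, which is essentially what Buzz does behind the scenes.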
Overall it looks like a decent service, and the captive Gmail audience was a good target to launch this product at. However, Buzz detracts quite heavily from Google’s other communication product, Wave. I sang high praise of Wave when it was first released, but I’ll be honest with you: my last couple of logins have seen it turn into a ghost town. My last wave is dated the 27th of November and I haven’t heard anyone else mention it in well over a month. Buzz claws away the tiny amount of market share that Wave had by giving the same level of information aggregation minus the confusing interface and social convention shift. Wave may be great for collaboration but its current market is pretty narrow, especially when there’s no one else using it. Had Wave been introduced in a similar way to Buzz I could have seen it garnering much more acceptance, with Buzz becoming an augmentation of it. Instead it seems Wave will be left to its niche and Buzz will be the one to enjoy more widespread success. That might have been Google’s plan all along, however.
Will it ever be as popular as Facebook or Twitter? Probably not, but I don’t think that’s Google’s intention. It’s another avenue Google can exploit to better target their advertising and increase user engagement with their services. With Wave still trying to find its place (and monetization stream) in the world, Buzz is a more cautious step towards getting more people onto Google products other than search. Personally I can’t see myself actively using it, but I’ll definitely be integrating it in much the same way as I did with Twitter.
Nothing warms your heart quite like a bit of machine-translated language that just doesn’t get it quite right. Sure, there were examples of bad translation before (the term Chinglish/Chingrish is a testament to that), but the explosion of accessible Internet, coupled with a few entrepreneurial types, led to bad translation becoming commonplace anywhere multiple languages could be expected. So much so, in fact, that whole websites, which have been around for well over a decade, are now dedicated to the phenomenon. Even the title of this blog post is a testament to the wonderfully horrible world of machine translation, thanks to this website.
So you can imagine the hilarity that ensued when Google announced it was going for a universal translator (much like the fictional Babel fish) that would leverage their translation engine as well as a good old fashioned text-to-speech program:
By building on existing technologies in voice recognition and automatic translation, Google hopes to have a basic system ready within a couple of years. If it works, it could eventually transform communication among speakers of the world’s 6,000-plus languages.
The company has already created an automatic system for translating text on computers, which is being honed by scanning millions of multi-lingual websites and documents. So far it covers 52 languages, adding Haitian Creole last week.
Don’t get me wrong though, the idea intrigues me quite a lot. I spent the better part of 6 years learning Japanese and was still a while off from being fluent. Sure, I knew enough to get by in Japan (and really, 2 weeks in the country taught me more than the previous 3 years did), but I’d also have to put that down to the majority of Japan’s younger generation knowing a substantial amount of English. Being able to connect with people with whom I don’t share a common language is definitely something I can appreciate.
Whilst I don’t doubt Google’s ability to bring about a system like this (in fact, I believe you could probably cobble one together right now using a couple of discrete products), there are some fundamental aspects of languages that will make it less than optimal in many situations. For example, let’s take a basic sentence in English and compare it to its Japanese equivalent:

English: We went to the pool.
Japanese: 私たちはプールに行きました。
Now these are both simple sentences, but the differences are in the construction. If we take the Japanese sentence literally it actually says “We pool went”, which, whilst understandable, isn’t exactly English. This is because, in the most basic sense, Japanese sentences are constructed in the form <subject><object><verb> whereas English comes in the form <subject><verb><object> (an in-depth analysis of these two grammatical structures is shown here and here respectively). For text translation this doesn’t matter too much, since you can program your algorithm to detect the different grammatical structures.
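The reordering step can be sketched in a few lines. This is a deliberately toy illustration of the SOV-to-SVO problem, nothing like a real translation engine: the tokens and their role tags are invented for the example, and real systems have to infer those roles rather than being handed them.

```python
# Toy illustration of the word-order problem: a naive word-by-word
# translation of an SOV (Japanese-style) sentence preserves the source
# order, while a structure-aware step reorders it into English SVO.

def naive_translate(glossed_tokens):
    """Word-by-word translation: keeps the source (SOV) word order."""
    return " ".join(word for word, role in glossed_tokens)

def reorder_sov_to_svo(glossed_tokens):
    """Rearrange subject-object-verb tokens into subject-verb-object."""
    roles = {role: word for word, role in glossed_tokens}
    return " ".join([roles["subject"], roles["verb"], roles["object"]])

# English glosses of the Japanese sentence, tagged with grammatical roles
# (made up for this sketch -- a real system must infer these itself).
sentence = [("We", "subject"), ("pool", "object"), ("went", "verb")]

print(naive_translate(sentence))      # "We pool went"
print(reorder_sov_to_svo(sentence))   # "We went pool"
```

Even the reordered output still needs particles and prepositions filled in (“We went *to the* pool”), which hints at how much more a real engine has to do beyond shuffling word order.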
For real-time translation, however, it becomes more difficult. Not only do you have to detect the appropriate sentence structure, you also have to figure out when someone has finished a sentence. Otherwise, if you’re just translating word by word, your results will end up like the example above, with sentences making sense only after you rework them in your head. Additionally, more complicated sentences can change the meaning of certain words, so that a blow-by-blow translation ends up not making any sense at all. There is of course one way to get around this: make the translation not real time but on a per-sentence basis. That, however, is not what has been touted in the media over the past couple of days.
In fact, devices that can do a sentence-by-sentence translation (with an appropriately Stephen Hawking-esque voice) already exist, especially in Japan. I was lucky enough to borrow one such device from my teacher whilst staying in Japan, and many of the people we ran into had one as well. Granted, their translation capabilities were limited compared to Google’s, but they were more than enough for the average tourist. I only wish I had picked one up whilst I was over there, as the same devices cost about ten times as much over here.
I’ll be watching the development of this technology pretty closely, as I have done with Google Translate over the past few years. They’ve come a long way in developing their own translation engine and have made significant leaps with the languages that were the source of so much ridicule in the past. As for the hopes of a real-time universal translator, I’d say there are some fundamental barriers to achieving it, but that won’t stop us from talking to each other with what will appear to be a satellite delay, even though we’re standing right in front of each other.
That would be rather amusing to watch, actually…
Sixteen hours ago, Space Shuttle Endeavour lifted off on mission STS-130. The image above was captured barely a minute after takeoff, as Endeavour passed through the thin clouds that remained after the previous launch attempt was scrubbed yesterday, and the result is truly breathtaking. As with all recent Shuttle missions the event was heavily televised, as seen below (skip to 10:48 for the good bit):
Watching this video I can’t help but feel awe at the sheer magnitude of power being unleashed by this triumph of science. For what seems like an eternity Endeavour shines brighter than any light, then turns into a bright star before slowly fading from view. There’s a kind of majestic beauty in seeing something so large and powerful moving so gracefully as to almost qualify as art. If I hadn’t already planned to see the very last of the Shuttle launches, this one would’ve been next on my list; for a thing of beauty, nothing can quite match a night launch in my books.
In the midst of all this awe and wonder there’s still a lot of good old-fashioned space work behind this launch. STS-130 brings to a close the last of the major construction work (more on that in a minute) that will be done on the International Space Station. The Tranquility module contains the most advanced life support equipment to date, with facilities to recycle waste water, generate air for the astronauts to breathe and remove any contaminants that might taint the environment. Whilst not as spacious as the Japanese Kibo module it’s no small fry, and will primarily be used for storage, exercise and accessing the Cupola. The main function of the Cupola will be to facilitate robotics work on the ISS using the various arms installed there. It also contains the largest window ever flown into space, and it will be installed facing Earth. Although that’s not the main reason for its existence, you can bet that the astronauts on board will be champing at the bit to get some view time through that portal. I know I would.
If you watched the video above you may have noticed a little information widget on the left-hand side detailing some interesting figures about the Shuttle during its launch. One of them may look a little odd (especially if you’ve got an engineering bent): the SSME thrust percentage, which hovers above 100% for the majority of lift-off. This might seem strange, since no system should be capable of operating above 100%, but there’s a good reason for it. The Space Shuttle Main Engine was initially designed with a certain amount of thrust in mind and was tested successfully to that specification. Further testing showed that the engine was quite capable of running safely beyond its original design, all the way up to 109% of the rated thrust. Running above 100% has since become the norm for all launches, with the highest power levels saved for contingency operations should they be required. It really shows how talented the NASA engineers are.
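The arithmetic behind that “>100%” figure is simple once you know that 100% refers to the engine’s original rated power level (RPL), not its physical maximum. The RPL vacuum thrust value below (~2,090 kN, roughly 470,000 lbf) is an approximation from memory rather than an official figure, so treat the absolute numbers as illustrative.

```python
# "100%" on the SSME readout means 100% of the original rated power
# level (RPL), so throttle settings above 100% are perfectly safe --
# they were certified in later testing. RPL thrust here is approximate.
RPL_VACUUM_THRUST_KN = 2090.0  # ~470,000 lbf, assumed figure

def thrust_at(percent_rpl):
    """Vacuum thrust in kN at a throttle setting expressed in % of RPL."""
    return RPL_VACUUM_THRUST_KN * percent_rpl / 100.0

print(round(thrust_at(104.5)))  # typical ascent setting
print(round(thrust_at(109.0)))  # certified contingency maximum
```

So the widget hovering at 104.5% simply means the engines are running at their normal, fully certified ascent throttle, with a reserve up to 109% held back for emergencies.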
There’s still a day and a half to go before they actually meet up with the ISS, and the majority of that time will be spent getting the Shuttle ready to dock and ensuring that it hasn’t suffered any damage on the way up. After that they’ll do their signature backflip and take their mission into full swing. There’s a busy 2 weeks ahead for all these astronauts.
STS-130’s launch was one of beauty, and it’s fitting that it will bring to the ISS a portal through which the astronauts can look back at us as we look up at them. Whilst I feel a twinge of sadness knowing that there are only 4 more launches left before the majestic Shuttle never flies again, I can also take heart in the fact that a new era of space will soon be heralded in by a new vision for NASA. Times like these remind me how far we’ve come, and how bright our future is.
Even though I’d argue it’s impossible to find anyone who fits the mold perfectly when it comes to our society’s view of normality, that doesn’t deter me from the pursuit of understanding it. There are a lot of norms out there that don’t make logical sense, and as an engineer this becomes a curiosity: I try to figure out these norms and map them out with easy input/output equations in my head. This sounds a bit abstract, and I tend to describe it as “trying to understand the everyman”, which has the subtle undertone of me being an outsider. I’ll put that down to my awkward teenage years still being a fresh memory 😉
Inevitably, this pursuit of understanding clashes quite heavily with the skeptical voice inside me. Things that don’t make sense or aren’t based in sound science and logic form a large part of our everyday norms. One of these is the perpetual myth of the polygraph, which reared its head in a political scandal that hit the news this morning:
“As a first step to restoring my reputation I subjected myself to a lie detector test and the results are set out in the report dated 29 January enclosed with this letter,” the letter said.
“Despite your denials, you will see the result is conclusive. You cannot be surprised.
“I am providing you with the results of the lie detector test because I want you to publicly admit that I was telling the truth about our relationship.
Now I’m not going to comment on whether or not this story has any facts behind it (to be honest it’s your usual political scandal/beat-up; they’re a dime a dozen), but the use of the polygraph is what got me listening. Taken at face value the results show that she wasn’t lying about the relationship in question, putting pressure on Rann to take the test himself to either confirm or refute the results. This, however, is one mistake that the everyman has been making for a long time: trusting the polygraph.
Most people in any modern society are familiar with the idea of a polygraph or “lie detector”. In essence it’s an information aggregator, taking readings of bodily functions like respiration rate and pulse alongside other metrics like skin conductivity and blood pressure. The measures themselves are objective and quantifiable; the interpretation of them is far from it. In fact, the scientific community’s consensus is that the polygraph is unreliable and not much better than random chance. So why is it still brought up so often in circumstances like the article I linked above?
The most obvious reason I can see for this is a lack of education. Ask anyone on the street what the wider scientific community thinks of polygraphs and I’m sure the majority of answers will be along the lines of “I don’t know”. Scandals and their use in popular media don’t help either, as the majority of them fail to mention the polygraph’s failings, instead only demonstrating its use. It also doesn’t help that one of my favourite shows, Mythbusters, perpetuated the myth with their use of a discredited polygraphist and a liberal interpretation of the scientific method. Don’t get me wrong, they’ve done more for the world of critical thinking than most, but this is one point where they failed terribly. There are counter-examples though, like Penn and Teller’s show Bullshit, where they aptly demonstrate the polygraph’s failings and show you how to beat it in under 10 minutes.
The good news is that in most courts around the world polygraph results are inadmissible, and in our own backyard they’ve been aggressively thrown out, setting the precedent for all cases henceforth. Still they prevail in popular media, mostly as an attempt to generate more controversy, because their ambiguous nature makes for a good story. It’s a testament to the times that I only see a story like this once every year or so, but it will still be a long time before the everyman knows that polygraphs are pure bull and should be discounted as such.
Even though I’ve been doing this whole blog thing for a while now (well, longer than I’ve held most jobs in the past 5 years, which is saying something) I still feel it’s probably one of the more out-there hobbies people take on. Whilst I share this interest in blogging with many of my social circle, the majority of them have little to no interest in the long form of social media, gravitating much more heavily towards Facebook. Can’t say I blame them either, as the format there lends itself to posting a quip or comment in under 5 minutes and usually generates a very immediate response. Writing a blog post takes at least 1~2 hours out of my day and the results vary wildly, from a slew of comments to barely registering on anyone’s radar. It’s definitely akin to shouting into the darkness hoping someone listens.
Maybe that’s why I feel so comforted by my Google Analytics account.
More interesting however is how the long form of social media on the Internet is on the decline:
Blogging is falling out of favor among the young’uns these days as they move to quicker-moving social networking sites. At the same time, older adults are getting into blogging and teens still aren’t hot on Twitter, at least according to the latest report from the Pew Internet and American Life project.
Only 14 percent of teenage Internet users said that they blogged last year—that’s half the number from 2006. Similarly, teen commenting on blogs is way down from 76 percent in 2006 to just over 52 percent in 2009. It doesn’t matter whether the blog is on Blogspot or buried within MySpace, either—blogs in general are definitely not the new black.
Delving into the statistics that the article above was based on reveals that not only did the teenage population leave the blogging platform en masse, but so did the young adults. There was a slight improvement among the over-30s, and as a whole the Internet has seen a rise in the usage of blogs, but for the youngsters and fledgling adults it would seem that blasting your thoughts out hundreds of words at a time is just not the in thing anymore. That left me wondering: why the hell is that?
I could easily write the whole phenomenon off as part of the mobile Internet revolution. Nearly every modern phone has a Facebook application on it, or you’re a Twitter account away from enabling it on any SMS-capable phone. I’ve tried doing blog posts on my phone in the past, and even with a hardware keyboard it’s laborious work; I can imagine it would feel quite unnatural to a demographic who grew up with the short-form communication of SMS as their de facto standard. Thus, with our increasingly mobile generation, the longer forms of social media become outmoded in favour of the quick, up-to-the-minute feeds that services like Facebook and Twitter provide.
However, I believe there’s another side to this phenomenon that will be hard to find in statistics like these. The blogging medium has evolved quite a lot over the past 10 years, going from something only the technically elite were capable of to something freely available to anyone who cares to spend 5 minutes setting up an account on Blogspot. Over this time corporations began to see the value in such information channels, and so the corporate blogs were born. The same could be said for celebrities, with their blogs functioning as a direct channel between themselves and their fans. A great example of this is the Dilbert blog: prior to its launch no one really knew the face behind the comics that parody our cubicle life so aptly.
To use a musical analogy, blogging sold out. For the most part the large blogs around the world are centered around driving traffic and getting more eyes on the content they’re either producing or regurgitating. Gone are the days when a blog was someone talking about their life or what interests them. No, today you’re more likely to find a corporate blog or niche news aggregator, with the one you’re reading now being no exception. It started out as a platform for me to collate my various Internet censorship-fighting exploits and evolved into what it is today. But make no mistake, I’m just a few steps away from being one of the newsbots I used to loathe so much.
Personally though, it feels like an evolution of the medium. Blogging started out as an easier way for anyone to have a presence on the web and has since evolved into a tool applied in a much wider sense. The younger generation hopped on this tech because it was new and cool, but with all the late adopters coming to the field the platform has lost its cool, and the likes of Facebook, Twitter and MySpace are here to pick up the slack. It will be interesting to see how long social networking can go before it starts to lose its shine too, if it ever does.
I mentioned in passing recently that NASA’s future had been in question over the past few months. With the Shuttle program shutting down and its replacement scheduled to be rolled out in 2015 (with 2018 looking like a far more realistic date), they were going to lose all capability for putting people into space. Additionally, they’d sacrificed a whole lot of their core scientific activities just to try to meet the 2015 deadline with the Ares line of rockets. All of this was the result of an overly ambitious target set by Bush that lacked the additional funding needed to achieve such goals. Obama’s plans for NASA are not what you would expect initially, but diving deeper reveals why these changes need to occur.
- Research and development to support future heavy-lift rocket systems that will increase the capability of future exploration architectures with significantly lower operations costs than current systems – potentially taking us farther and faster into space.
- A vigorous new technology development and test program that aims to increase the capabilities and reduce the cost of future exploration activities. NASA, working with industry, will build, fly, and test in orbit key technologies such as automated, autonomous rendezvous and docking, closed-loop life support systems, in-orbit propellant transfer, and advanced in-space propulsion so that our future human and robotic exploration missions are both highly capable and affordable.
- A steady stream of precursor robotic exploration missions to scout locations and demonstrate technologies to increase the safety and capability of future human missions and provide scientific dividends.
At a high level the objectives seek to achieve a few things. The first is doing away with the lofty goals set by former president Bush. To be honest I initially found this heartbreaking, as I felt this was one of the core reasons NASA existed. However, without the appropriate funding for such endeavours (I’m talking Apollo-era spending of around 5% of the federal budget, not the paltry 0.5% they get now) it would realistically have been far more detrimental to continue down this path than to cut our losses and refocus on the more important things. Whilst this might keep human boots off other terrestrial bodies for another decade or two, the missions that eventually go there won’t be flag-planting missions; they’ll be permanent settlements. If we are ever going to establish ourselves throughout our solar system, such sustainable missions are the way to go. It’s tough medicine to swallow, but it’s for our own good.
The new vision for NASA explicitly kills the Constellation program and, with it, the Ares series of rockets. I’ve lambasted the Ares I-X in the past for being an absolute waste of time, but I still supported the Ares V, mostly due to its paper capabilities. This opens the door for alternative ideas like DIRECT, which had some traction in the past but were pushed aside due to the investment in Ares. I’m glad that Obama decided to include a heavy-lift capability in the new plans for NASA, as it’s one of those things that still isn’t commercially viable. Once NASA has the capability, though, I’m sure demand for it will start to materialize; for now everything else is handled quite aptly by current options such as the Delta IV Heavy.
Probably the best news to come out of this is an extra $6 billion for NASA over the next 5 years to support the refocus on these new objectives. The most exciting part of the extra funding is that a whopping $500 million is earmarked to buy services from private launch companies to ferry astronauts to the International Space Station. Up until now there wasn’t an official word on whether NASA could do that, as they’d committed to buying seats on Russian craft at $50 million each. Considering that a Falcon 9 from SpaceX plus one of their Dragon capsules costs about $100 million and can deliver 7 astronauts to the ISS (over 3 times the seats per dollar), you can see why I’m excited about this sort of thing. It also helps drive down the cost of such launch vehicles, meaning that, whilst it’s still out of the range of the everyman, the cost may one day enter the realms of, say, a trip on SpaceShipTwo. It’s a while off, I admit, but having NASA buying kit from these guys is a guaranteed way to make space more accessible to everyone.
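The per-seat arithmetic behind that comparison is worth spelling out. The figures here are the rough numbers from the paragraph above (a $50 million Soyuz seat versus a ~$100 million Falcon 9 + Dragon flight carrying 7), not official prices.

```python
# Rough per-seat cost comparison using the post's own figures.
SOYUZ_SEAT_COST_M = 50            # $M per seat on a Russian craft
FALCON_DRAGON_FLIGHT_COST_M = 100 # $M for a full Falcon 9 + Dragon flight
DRAGON_SEATS = 7                  # astronauts a Dragon capsule can carry

cost_per_seat = FALCON_DRAGON_FLIGHT_COST_M / DRAGON_SEATS
print(round(cost_per_seat, 1))                      # $M per Dragon seat
print(round(SOYUZ_SEAT_COST_M / cost_per_seat, 1))  # Soyuz-to-Dragon cost ratio
```

On these numbers a Dragon seat works out to roughly $14 million, about three and a half times cheaper per astronaut than the Soyuz arrangement, which is exactly why the commercial crew money is such a big deal.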
Additionally, there’s a substantial amount of funding dedicated to some heavy-duty science. This includes things like new satellites, observatories, robotic missions to other planets and channeling funds into research that will further our efforts in space. One of the big ideas nestled in amongst this is the development of orbital propellant stations (think petrol pumps in space), which are going to become a necessity if we seriously want to go anywhere with people on board. One of the problems facing many space missions is that you have to carry all your fuel up with you, driving down usable payload and needlessly wasting fuel. With orbital refueling stations we can design simpler, more efficient and more capable craft that will take us to the farthest reaches of the solar system.
Still, reactions are mixed over the newly proposed NASA vision and budget. The bill still has to pass Congress, and this could prove to be a major sticking point. As with any bill that passes through there, concessions will be made, hot air will flow, and it could quite easily end up looking nothing like it does now. With jobs on the chopping block (cancelling Constellation will see a fair few people move on) you can expect certain members of Congress to fight it in order to win the support of their constituents. It will be a hard point to fight too, with America’s unemployment in the double digits. I’m hoping that Congress’ short-term view doesn’t skew this proposal too much, as it’s exactly what NASA needs.
So after rejecting it initially (and putting off this blog post for 2 days because of it) I’ve come to appreciate the changes that Obama has made. Sure, we lose the vision of pioneering our way through space, but it’s a cost we have to pay if we want any kind of sustainable presence outside our atmosphere. We’ll soon know what opposition this bill faces, and I can only hope, for NASA and America’s sake, that it passes through unscathed.
It has been all quiet on the western front when it comes to censorship in Australia. The Internet filter test produced surprisingly good results, which would lead you to believe that implementation is just around the corner, yet the last month has seen not a single word uttered about it. With parliament resuming this week it’s sure to come into the spotlight again very soon. If it doesn’t, that will say quite a lot about the government’s intentions: implementation may be delayed until after the election in the hopes of saving the tech vote.
However, it appears that one of my most hated politicians, Attorney-General Michael Atkinson, is peddling his censorship claptrap in his home state of South Australia. It would seem now that if you want to make a comment about the upcoming election there, you have to provide your name, rank and serial number (just kidding, name and postcode will do the trick), which the government can then keep on record for 6 months:
The law, which was pushed through last year as part of a raft of amendments to the Electoral Act and supported by the Liberal Party, also requires media organisations to keep a person’s real name and full address on file for six months, and they face fines of $5000 if they do not hand over this information to the Electoral Commissioner.
Attorney-General Michael Atkinson denied that the new law was an attack on free speech.
“The AdelaideNow website is not just a sewer of criminal defamation, it is a sewer of identity theft and fraud,” Mr Atkinson said.
“There is no impinging on freedom of speech, people are free to say what they wish as themselves, not as somebody else.”
Well, it would be nice if you could stifle public debate right before an election, especially when there have been several campaigns set up against you because of your idiotic and hyperbolic views on things as trivial as an R18+ rating for games. He specifically mentioned the website AdelaideNow, which has run several articles critical of his actions. I really shouldn’t be surprised at the vitriol he spews when he gets any negative press (read my previous post about Atkinson to see what I mean; the guy is a total fruitloop). All this was an attempt to shut down the bad publicity that he couldn’t otherwise do anything about.
That story ran at about 8:30am yesterday, and you can imagine the supporters of the AdelaideNow site were in a bit of an uproar about the whole thing. Well over a thousand people posted comments, with 90% of them against the law. This sewer of criminal defamation, identity theft and fraud apparently has quite a voice, since just over 14 hours after he robbed all South Australians of their right to anonymity, he backpedalled faster than anyone thought possible:
After a furious reaction on AdelaideNow to The Advertiser’s exclusive report on the new laws, Mr Atkinson at 10pm released this statement: “From the feedback we’ve received through AdelaideNow, the blogging generation believes that the law supported by all MPs and all political parties is unduly restrictive. I have listened.
“I will immediately after the election move to repeal the law retrospectively.”
Mr Atkinson said the law would not be enforced for comments posted on AdelaideNow during the upcoming election campaign, even though it was technically applicable.
“It may be humiliating for me, but that’s politics in a democracy and I’ll take my lumps,” he continued in the statement.
Far be it from me to look a gift horse in the mouth, but does anyone else see through this thinly veiled attempt to look like he's completely reformed his position? Using the term "after the election" essentially amounts to "once I'm re-elected", which gives your average Joe the idea that if we don't vote him in, the next guy might not repeal it. He's trying to play the remorseful wolf here after he's slaughtered all the lambs in the field. I still don't trust Atkinson as far as I can throw him.
It's not just his stance on censorship (both on speech and our right to buy games for adults) that gets my goat, it's the hyperbolic vitriol that he spews on basically any issue he's involved in. From using tortured refugee victims as an argument against R18+ games to lashing out with accusations that people don't exist, I begin to feel that my previous label of fruitloop might be a little too kind.
With Gamers 4 Croydon standing candidates in both houses there's at least going to be some competition for the seat come election time. The seat of Croydon is unfortunately very safe and Atkinson is unlikely to be dethroned over the issues I harp on here, but the reaction of the AdelaideNow crowd shows the beginnings of a movement against him. So whilst we probably won't see a new Attorney-General for Croydon this election, we may see some movement on the issues that have stagnated under his rule. That is, of course, if he wants to keep his seat for another term after this one.
We can only hope.
I remember a long time ago saving up all my pocket money and splurging on the very first expansion I can remember, Warcraft 2: Beyond the Dark Portal. At the time I didn't understand that it wasn't a standalone game, but since I had borrowed the original Warcraft 2 from a friend it didn't matter. It would seem this was the start of a beautiful relationship with Blizzard, as they developed a great reputation for building a solid game and then releasing an expansion pack some time later, breathing life into the game once again. Few others seemed to replicate their success with this, as many game companies wouldn't bother releasing such expansions, instead focusing their efforts on the sequel or new IP they were developing.
Valve began experimenting with the idea of episodic content with the release of Half Life 2: Episode 1. It was a novel idea at the time, as it reduced the time between major game releases, which had the benefit of keeping people engaged with your product for longer. Additionally, development costs were far lower than they would be for a full expansion or new game, with the added benefit of being able to update things like engine code or additional graphics settings between releases, taking some of the edge off games which aren't renowned for aging well. To be honest I resisted the whole episodic movement for a very long time, until Valve released all the episodes along with Team Fortress 2, and I only saw the benefit of them after I had played them. They wouldn't stand alone as a full game, but I got almost the same level of satisfaction from them despite their relatively short play time.
Once the Internet reached a critical mass of users and freely available bandwidth, publishers began to look at digital distribution methods more seriously. Steam had proven to be a roaring success and the barrier to delivering additional content to users dropped significantly. Seeing the benefit of episodic content but unwilling to sacrifice a potential sequel (an unfortunate truth of all games these days), developers and publishers saw the opportunity to expand a game within itself. Couple this with the buzz that surrounded the business model of micro-transactions (something that can be bought for a very small amount of cash, akin to raiding your change jar) and we saw the birth of Downloadable Content as we know it today.
And to be honest, I can’t say I’m all too pleased with the bastard child the games industry has spawned.
Back in the days of expansion packs you were guaranteed a couple of things. The first was that the original game had enjoyed at least marginal success and the developers would be wiser for the experience. As such the expansions tended to be more polished than their originals and, should the developers have been wise enough to listen to the gaming community, more tailored to those who would play them. A great example of this was Diablo's expansion Hellfire, which added a spell to teleport you to the nearest exit. In a game where you can only power-walk everywhere this spell was a godsend and made the original much more playable.
Secondly, it gave the developers an opportunity to continue the story in the same direction a sequel would, or to explore alternative story paths. In essence you were guaranteed at least some narrative continuity, and whilst this raised the barrier of entry for new players, expansions were never really aimed at them. Realistically, anyone who heard of a game for the first time when an expansion was released probably wouldn't have played the original in the first place. Still, if they did take the plunge they would at least end up buying the original too (especially since most expansions required the original to play).
I was happy with the middle ground struck by the episodic content idea, as for the most part you got all the goodness of an expansion without the wait. The MMORPG genre survives on this development model, as can be seen with the giant of the field, World of Warcraft. Content patches are released almost quarterly, with expansions coming out roughly every 2 years or so. Blizzard's ability to relentlessly churn out new content like this is arguably why they have had so much success with World of Warcraft, and it aptly demonstrates how the episodic model can be used not only to keep regular users coming back, but also to attract new ones to the fray.
Downloadable content, however, has the aspirations of episodic content with none of the benefits. When I bought Dragon Age: Origins I was treated to some free DLC as part of buying the game, whilst also being slapped in the face by a person at camp offering me a great adventure if I gave him my credit card. It was pretty easy for me to ignore that part of the game completely, as I had more than enough to do in the 35 hours that Dragon Age sucked away from me. The recent release of the Return to Ostagar DLC adds a couple more hours of playtime to a game that already boasts over 100 hours of gameplay. For someone like me who's already finished the game there's little incentive to go back just to experience a measly couple of hours of story that won't fit with where my character is in my head, so I simply won't bother.
This, to me, is the problem with any DLC. For the most part it is simply an additional slice of the core game: a fantastic way to add more playtime to a full playthrough, but otherwise a meaningless addition to a challenge already conquered. I was over the moon when I heard that Mass Effect was releasing some DLC, but after playing through the game twice and logging almost 80 hours of game time, going back to spend an hour or so exploring the new planet felt extremely hollow. Sure, I can appreciate them setting the scene for Mass Effect 2, but really all the DLC amounted to was a quick grab for cash and a little press.
I wish I could cite examples where DLC works, but frankly there are none. These bite-sized bits of gaming sound like a great idea (and they're music to publishers' ears), but unless you're playing the game from start to finish there's realistically little value in them. It takes quite a lot to pull me back into a game that I've completed, and it has to be for a damned good reason. DLC so far hasn't been it, and never will be until I start seeing episodic-quality releases.
In the end the birth of DLC is yet another sign of a maturing games industry that would've been hard to avoid. Publishers are always looking for new revenue streams, and if we want to see developers keep producing games then such things are here to stay. I'm sure one day there will be an exception that breaks the rule for me, but right now DLC is that annoying toddler in the corner screaming loudly for attention when there are many interesting adults I'd rather be talking to.
Hopefully one day though, that toddler will grow up.
There are few things more spectacular than a shuttle launch. Three engines the size of school buses, putting out the equivalent power of almost 40 Hoover Dams, lift the 60 ton iconic craft aloft into orbit around our beautiful blue marble. Few things come close to demonstrating our capability as a human race like this. One thing, however, is more beautiful than your regular run-of-the-mill shuttle launch, and that's a launch that happens at night, giving rise to beautiful images such as this:
It's not to say that such missions are rare, far from it. Just under a third of all Shuttle launches have been at night over the course of its lifetime, but that doesn't make them any less special. With the countdown of the final 5 launches underway, the next one scheduled marks the last ever night launch of the craft, something that will surely be missed:
WASHINGTON – Six NASA astronauts are ready to rocket into space on the shuttle Endeavour in just over a week as questions swirl over the impact of the space agency’s upcoming budget request.
Endeavour commander George Zamka said Friday that he and his crew are completely focused on the planned Feb. 7 launch to the International Space Station. Their mission: to deliver a new room to the $100 billion orbiting lab that will leave it nearly complete.
The shuttle is scheduled to blast off from NASA’s Kennedy Space Center in Cape Canaveral, Fla., before dawn on Feb. 7 at 4:39 a.m. EST (0939 GMT), making it the last planned night launch of Endeavour or any other orbiter. The launch will come six days after NASA rolls out its new spending goals for the next fiscal year – a plan that may depart substantially from the agency’s earlier human spaceflight goals.
STS-130 is set to be a spectacular mission in its own right, taking up some of the last pieces of the American section of the International Space Station. These include the Tranquility module, which contains highly advanced life support systems and much-needed space for storage, exercise equipment and so on, as well as the Cupola, in essence a giant window that will be used for operating the robotic elements of the ISS and for observations. It will also be the final home of the C.O.L.B.E.R.T., something I'm sure Colbert will be quite happy about.
A night launch really couldn't come at a better time for NASA. Right now, with the turmoil surrounding its manned programs, it will serve them well to show off some iconic imagery. With so much attention on NASA at the moment, showing off the raw power and beauty of a Shuttle launch can only help to bolster their cause. The launches haven't had more than a passing glance in most of the mainstream media (although I have been surprised by the morning news in Australia, which covered the last 5 or so in detail), but with the added political controversy we might see some actual movement on this.
It will definitely keep the debate going on NASA's spending and the future programs it has been chasing. Whilst I lamented in the past how pointless the Ares I-X was, I failed to mention how in awe I was of the program's end game, the Ares V, which on paper appears to be an extremely capable rocket. There's been some speculation about dropping the Ares I in favour of pushing forward development of the Ares V, which has its merits. The slack could then be picked up by, say, SpaceX's Falcon 9, which is scheduled for its first test flight later this year. I doubt anyone else is going to work on something as enormous as the Ares V as a commercial endeavour, since there's really little need for a heavy lifter (or 188 ton satellites) from a commercial perspective. Still, this comes back to the point that NASA should be pushing the science and not the building of new launch platforms.
NASA's future is all up in the air now, and with that comes heavy speculation. There have been so many "leaked" reports on almost every aspect of NASA that I've succumbed to information overload and decided to wait until some verified reports come out. I'm hoping Obama and the American Congress don't get too short-sighted on this matter, but doing what is right by NASA (funnelling a couple of extra billion their way) is hard to justify politically at this point. Sure, there's quite a lot of data to say that NASA and the space industry create a lot of jobs, but your average American voter doesn't seem to care much for that (since they're not really jobs for your average American).
So in just under a week from now we’ll bear witness to the last time a Shuttle will light up our night skies. I highly recommend catching one of the live feeds with the amazing NASA commentary if you can but rest assured, if you miss the live event I’ll be posting the highlights up here.