You know, whilst I appreciate that the Internet filter was the trigger for the creation of this blog and has been a healthy source of fodder for me to post on, I still wish it would just up and die already. It’s been said time and time again that the filter won’t achieve its goals and will only serve to make Australia more of an Internet backwater than it already is. When you’re planning to roll out a national broadband network at the same time, it seems rather counter-intuitive to go ahead and strangle it with an infrastructure bottleneck that renders said network almost null and void.
That being said I still stand by my position that the filter, at least in its current form, will not make its way into reality. The tech crowd is universally opposed to it and there’s increasing pressure from the giants of the Internet (Google, et al) to abandon such ideas. It seems now that even our good friends across the ocean are starting to have concerns that such a policy would be harmful not only to Australia and its citizens, but also to relations abroad:
Asked about the US view on the filter plan US State Department spokesman Noel Clay said: “The US and Australia are close partners on issues related to cyber matters generally, including national security and economic issues.
…In a speech in January US Secretary of State Hillary Clinton put internet freedom at the heart of American foreign policy as part of what she called “21st century statecraft”. The US, she said, would be seeking to resist efforts by governments around the world to curb the free flow of information on the internet and encouraged US media organisations to “take a proactive role in challenging foreign governments’ demands for censorship”.
Clay’s statement added: “The US Government’s position on internet freedom issues is well known, expressed most recently in Secretary Clinton’s January 21st address. We are committed to advancing the free flow of information, which we view as vital to economic prosperity and preserving open societies globally.”
Conroy’s first response was to say that he hadn’t heard anything, and he failed to make any comment on what his opinion was on the matter. I don’t blame him for doing that either, as up until recently he was only fighting the people of Australia and a few corporations. Now he’s got to deal with the US putting pressure on him to not go ahead with his proposal, and he can’t openly attack them like he has done with Google, leaving him with very few rhetorical options. I’m sure his spin doctors are working overtime on this one and I don’t envy the job they have (I mean really, how do you brush off an attack from the US government?).
More importantly there’s also the small issue of an agreement that Australia and the US signed about 6 years ago, the Australia – United States Free Trade Agreement. Back when it was first introduced there was hefty opposition to the proposal, mostly from Australia’s side, as it had the potential to wreak havoc on things like the Pharmaceutical Benefits Scheme (PBS) and forced Australia to make changes to its intellectual property laws. Despite all this the agreement passed and came into effect on the 1st of January 2005, and it hasn’t really come up in political discussions since.
The FTA was much further-reaching than the issues that were brought up during negotiations. Other areas it covered were financial services, environmental issues, investment and government procurement. More interestingly, however, there are 2 key areas that the FTA covers that are quite likely to be affected by the proposed Internet filter, and they are:
This section details agreed upon terms by both countries to assure fair trade between the telecommunications industries in each country. The rules specifically exclude measures relating to broadcast or cable distribution of radio or television programming.
Among other provisions, the agreement lays out rules for settling disputes among the members of the telecommunications industries in one country with the members in the other. It entitles enterprises to:
- seek timely review by a regulator or court to resolve disputes;
- seek review of disputes regarding appropriate terms, conditions, and rates for interconnection; and
- obtain judicial review of a determination by a regulatory body.
The parties agreed to co-operate on mechanisms to facilitate electronic commerce, not to impose customs duties on digital products and for each to apply non-discriminatory treatment to the digital products of the other.
The first relates to how Australia and the US will provide communications infrastructure and services to each other in a fair and equitable way and provides a framework for settling disputes. The dispute resolution points outline an area where the FTA could be invoked if Australia decides to implement a filter. Whilst the debate is still open on just how much an Internet filter would harm Australia’s ability to do business on the Internet, the greater tech community is of the mind that it will be detrimental, regardless of implementation. Whilst this doesn’t directly damage the FTA, it could be used as grounds for an injunction to stop such a filter from becoming reality, at least for a short while.
Probably the more important part of the FTA that is directly affected by the implementation of the filter is the Electronic Commerce section, which explicitly states that there be no discriminatory treatment of digital products. This can extend to information on subjects such as abortion, euthanasia or drug harm minimisation, which under the current filter proposal would be outright banned but are still perfectly legal within the US. There’s also the possibility, thanks to the lack of transparency of the filter and its blacklist, that an online retailer could end up blocked for people within Australia and be effectively barred from trading with us.
I’ll admit that the links to the FTA are a bit tenuous but there’s no doubt in my mind that businesses with an online presence in Australia will suffer under the proposed filter legislation. The FTA is just another bit of ammunition to argue against the filter and with the US now putting pressure on Conroy I’m sure that we’re not too far away from the FTA being mentioned at a higher level. Conroy really has his work cut out for him if he thinks he’ll be able to convince the US that the filter is a good idea.
Would the filter require the FTA to be amended? I doubt it, but then again I’m not particularly qualified to comment on that. If you know (or have a good opinion) let me know in the comments below.
Tip of the hat to David Cottrill for giving me the idea of mashing the FTA with the Internet filter.
I can remember the decision that led up to me purchasing my very first smart phone. Sometime around the end of 2003 I had managed to land myself 4 different part time jobs, mostly because none of them would give me the hours I wanted. This of course meant that my schedule was a tad hectic at the best of times and I found that managing all of them at once usually ended up with me showing up at the wrong place at the right time. So I got myself a cheap and cheerful PDA that ran Windows Mobile and kept my schedule in there, letting me keep track of everything and making sure I never disappointed my various bosses again.
About 3 years later I had landed my first ever System Administrator job and I thought that since I was such an IT bigshot (HA!) I would need a device to match, casting my aging PDA aside. A couple of clicks through eBay and $1000 later I was in possession of an O2 XDA Atom Exec and all the pains that it brought along with it. Initially I was pretty happy with my purchase as it let me do away with 2 devices in place of one, and the upgrades to Windows Mobile made it a lot more usable than its predecessor.
Still, I can remember trying to use the Internet on it and being extremely disappointed. Apart from the ludicrous charges from my mobile carrier (which was Telstra at the time), most websites failed to render properly and would take an impossible amount of time to load. The experience improved ever so slightly when I was in range of a wifi point, but considering the only places that had free wifi were my house or my friends’ houses, the usefulness of a mobile web device was completely and utterly non-existent.
I’d mostly given up on the mobile web until the end of 2008. My O2 XDA’s last legs had long fallen off and it had developed the cute problem of switching itself off if you bumped it even slightly. After doing the rounds for a phone I had initially settled on a Nokia N95, although that quickly got traded in (long story short, I got sold the wrong model) for a HTC Diamond. The slim device came with a bevy of Internet-ready applications and I had specifically chosen a carrier that had a decent 3G wireless plan to make use of them. It seemed that the bad taste left over from 5 years ago was about to be washed away by the minty freshness of a mobile Internet revolution.
And was it ever. I set up my email to sync directly to my phone, my RSS feeds would update every morning before I headed out to work, and I always had the weather forecast at my fingertips (with cool animations to boot). The Internet experience was much improved thanks to the Opera Mini browser, which does a lot of the heavy lifting on proxy servers before forwarding you the results, and the speed of 3G brought all those web pages to me in a time frame that was actually quite usable. I even went so far as to put my phone on my employer’s network and had my work email pushed to my phone as well, which proved to be only mildly useful but a good demonstration to the higher-ups.
The last year has seen a tremendous amount of growth and refinement in the mobile Internet experience, and I begrudgingly admit that it’s due to Apple’s iPhone. The original iPhone made highly capable (and expensive) phones the ubiquitous status symbol that everyone wanted. The release of the 3GS made a point of making the mobile Internet experience something that should be available and extremely easy to use. This in turn put the other smart phone giants on the back foot to bring about a similar experience for their users, which until recently they’ve been struggling to do.
Google has done extremely well in this regard with their Android platform steadily gaining ground on Apple every month. It’s got to the point where I can’t say the growth is due to the tech crowd anymore, there has to be a good share of everymen buying Android handsets. It also can’t be due to the Nexus One either, as the numbers were looking pretty good before its release early this year. Whilst they’ve still got a ways to go to dethrone Apple as the number one (7.7 million sold in 2009, 60,000 are moving every day apparently) they’re looking to keep competition healthy in the mobile space, which is a win for us consumers.
Microsoft, on the other hand, has been extremely slack in this space. Whilst I’m very excited to get my hands on the Windows Phone 7 series devices (I really should install that emulator…), mostly due to their zero cost of entry for programming on them, the first retail device isn’t scheduled to be released until late in the year. Couple that with the fact that their share of the mobile Internet space has been in the single digits for almost 2 years now, and they’re probably the furthest thing from everyone’s minds when they go to buy a new phone. It will be interesting to see if they can turn their luck around and make the mobile scene a three horse race, but I’ve got my doubts.
In all honesty the revolution in the mobile space should come as a surprise to no one, but it always gets me when I’m rummaging through my desk and I happen across my old O2 and remember just how far the whole scene has come. With the latest handhelds coming out with processors that were considered top of the line in desktop PCs just a decade ago, the days of a phone just being a phone are long behind us, and the future is always looking that much more awesome.
I’m constantly amazed by the number of people who say they work in IT yet have very little to do with anything in the field (apart from doing their work on a computer). Admittedly most of these people are in management, so saying that they’re “in IT” is about as applicable as them being “in field X”, where X can be any industry where you need to organise a group of people with another group of people for a common goal. Still, there’s quite a variety of career paths in IT, and as far as the everyman goes most of them get lumped into the same category: “guy who knows computers”. I thought it might be interesting to take you down the road of a couple of career paths that I have been down and where I’ve seen them lead people over the past half a decade or so.
This is probably the career path that everyone is most familiar with, those guys who fix computers for a living. Landing a job in this area doesn’t require anything more than any other entry level job you might find around the place but you’ll usually end up in one of those dreaded call centers. Good news is that for anyone looking to break into IT there’s always going to be positions like these going as the turnover rate is quite high for entry level work, somewhere in the order of 30~50% for most places. Still if you can stick this out for a good year or two (depending on how skilled you are) there’s light at the end of the help desk tunnel.
Funnily enough the next “level” of IT support is just that, Level 2 Support. In essence you’ll be one of the behind the scenes guys who has more access and more knowledge about the systems the front line people are taking calls for and will be the one they come to for help. At this level you’ll probably be expected to start doing some outside learning about products that you (or your company) haven’t had any experience with yet, usually in the hopes to move you up into the next level. Second level guys are usually not responsible for adding new things to the environment and are best suited to being support to the first level and being the conduit to the next level guys.
The final incarnation of the IT support person is usually referred to as Level 3 Support or Technology Specialist. After spending a couple years at the second level most people will have gained a significant amount of skills in troubleshooting various software and hardware issues and hopefully acquired some certifications in various technologies. At this point there are a couple options open to such people: continue down the support line (generalist) or focus on a specific technology (specialist). Both of these have their advantages as the generalist won’t have trouble finding a job in almost any organisation and the specialists will attract quite high salaries for their specified skill set. Generally most people become a generalist first for a year or so while they work out what they want to build their career on.
This is the level I’m currently at, and I initially tried to specialize in virtualization and Storage Area Networks (SANs); however, my current position uses neither of these skills. It’s a good and bad thing, as whilst I’m learning about a whole lot of new technologies (like Hyper-V) my specialist skills go unused. In all honesty though my most valuable skills as an engineer have gone for the most part unused since I got my degree back at the end of 2006, so it’s really not that surprising, and traditionally I’ve found that the ability to quickly adapt to the requirements of an employer seems to land me more jobs than my skills in any one area.
They did help me get my foot in the door though 😉
Behind those who support the things you’re viewing this web page on are those who actually built the software that it runs on. In a general sense these guys are referred to as developers, and there are quite a few different types, ranging from your more traditional desktop application programmers to the current rock stars of the programming world: the web programmers.
Starting off a career in programming isn’t as easy as IT support. For the most part you’ll have to have some level of academic experience in the field before most places will give you a second look. Most programmers will have done a bachelor degree in either Computer Science or Software Engineering (or Engineering in Software Engineering for those true engineers), with a few starlets from the generic IT degrees making their way into the entry level programmer ranks. Junior programming jobs are a bit harder to come across, but there are usually good opportunities to be had in smaller firms who will help nurture you past this first hurdle.
A senior developer is someone who’s had a demonstrable amount of experience in either building systems of a certain type or in a certain language. They’re much like the second level of IT support, as they’re usually responsible for helping the juniors out whilst working on the harder problems that their underlings would be unable to do. Again, at this level there’s some expectation of training to be done in order to sharpen your skills up to match what your employer requires, and this is the time when they should look to specializing.
Developers don’t technically have a third level like IT support however once they’re past the junior level specializing in one kind of development (say SAP customizations) becomes far too lucrative to pass up. There’s varying levels of specialisation available and this is when many people will make the jump into a field they’re interested in, say games or web, that demands a certain level of experience before taking them on.
I never got past the junior developer level mostly because I jumped into a System Administrator position before I had the chance to develop my programming career any further. I’ve kept my skills sharp though through creating automation scripts and various programs that served specific purposes but none so much as my current pet project Geon. I don’t think I’ll ever develop for anyone though as the last large project I worked on was more clerical admin work than actual programming.
Whilst not terribly distinct from the IT support career path, those in the business of providing networks and communications links for the varying computer systems deserve their own mention, as their technology predates the first real computer by over 70 years. Ostensibly they will spend most of their career using computers, but only to administer the communication technology they’re responsible for.
At the heart of the career path are the same 3 levels, with the first level being an almost identical help desk hell. However, instead of working on the computer systems that you know and love, they work on the cables and interconnects that keep the information flowing around the world. The number of jobs available is heavily dependent on which brand of network devices you choose to base your career around, with the largest one currently being Cisco. Specialisations tend even further down the telecommunications path, with most of them being either something like a Cisco Certified Internetwork Expert (with a test that has an 80% fail rate on the first try) or a PABX/VoIP (basically telephones) expert.
I have a minimal amount of knowledge in this area, as I skipped out on my college’s computer networking course and found my career in IT support much easier.
I’ve struggled to find people who understand the term Business Analyst but don’t work in IT. In essence these people are the interface between the real world who want some kind of computer based system and those of us who have the skills to provide them. This is yet another position which usually requires some form of academic accreditation before anyone will take you seriously, and even then some people might feel like you’re still getting in their way.
People employed as business analysts are probably the most removed from actual IT whilst still being counted as part of it. There’s very little technical experience required to become one, but you do have to have a keen eye for identifying what people want, managing their expectations and acting as a glorified telephone between the everyman and the IT nerds. Interestingly enough this is one of the areas of IT where a healthy percentage of the employees are women, something that is quite rare in the world of IT.
The next step for business analysts is usually that of what is wrongly referred to as an Architect. These are the people who are responsible for setting out a strategic direction for whole systems and whose work is usually at a fairly high level. Traditionally these kinds of people work side by side with project managers to organise various resources in order to deliver their vision, but that’s where the tenuous relationship to real architects ends. In fact it’s more common to find third level IT support people graduate to the architect position, thanks to their grass roots level experience in delivering systems that were set out by architects for them.
I’ve worked with a few architects and for the most part they’re worth the top dollars they’re paid. The ones that weren’t just simply didn’t communicate with their experts and promised things that just weren’t possible.
Once you’ve reached a certain point in any of the previous career paths I’ve mentioned there’s always an option to switch over to the sales side of IT. Whilst this position isn’t highly suited to many who join the ranks of IT (high levels of social interaction? Say it ain’t so!) I’ve known more than a few who made the jump mostly because of the money and travel opportunities it provides.
For those who come directly from IT they’re usually placed into what’s called a Pre-Sales role. Rather than actually selling anything directly they’re responsible for getting into the client’s environment and working out what they need, much like a business analyst. They’ll then draw up a bill of materials for the system and then hand it off to their sales team to close the deal. The reason pure IT people are attracted to these kinds of positions is that you’re still required to have a high level of knowledge about certain systems but don’t have to be involved in their support, which can be quite refreshing after many years of fixing someone else’s problems.
For the softer IT career choices there’s the option of becoming a consultant, basically a gun for hire. Once you’ve achieved a high level of specialization it becomes profitable to work either freelance or as part of a larger consulting group who will hire you out to clients who have very specific requirements. Usually consultants are used in order to get an outside opinion on something or to analyse a certain system or process. It’s quite lucrative, as there are few overheads past what your basic entry level employee has, but the going rates for their time are almost an order of magnitude higher.
There are of course many more ancillary positions in IT but with this post dragging on a bit I thought I would leave it there. In essence I wanted to convey the breadth of careers that IT offers to people and how far away from computers you can be yet still be “in IT”. Maybe next time you’ll think twice before asking your friend in IT to fix your computer 😉
I’ve been at this whole blog thing for a while now. Not as long as many of the big names mind you but long enough to get into the culture and social conventions that fellow bloggers adhere to. As with anything on the Internet the rules are fast and loose and the worst thing that will happen to you for breaking them will usually be an angry email from someone you didn’t even know you could offend. For the most part though I’ve avoided incurring the wrath of any of my fellow netizens, apart from the good old fashioned trolls who make an appearance anywhere on the web.
One of these unspoken rules is that if you’re going to use someone’s content, maybe a quote from an article or a picture off their website, you provide a link back to their site. The reasoning behind this is that the biggest gateway to the Internet, Google, uses the number of sites linking in as a sort of popularity count to judge how relevant a site is to a particular search. The more links you have coming in the more popular you are, and the higher up in the search results your site will appear. There are of course many other factors taken into consideration, but nothing beats a good old fashioned link from someone else’s site to yours, especially if it comes from what Google considers to be a highly ranked page itself.
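The link-counting idea Google built on is the published PageRank algorithm, where each inbound link acts as a vote weighted by the popularity of the page it comes from. A toy sketch of the iteration (hypothetical site names, simplified, and nothing like Google’s actual production ranking, which weighs many more signals) might look like:

```python
# Toy PageRank: a page's score is built from the scores of the
# pages linking to it, so a link from a popular page counts for more.
def pagerank(links, damping=0.85, iterations=50):
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new = {}
        for p in pages:
            # Each page q that links to p passes on an equal share
            # of its own rank to every page it links out to.
            incoming = sum(rank[q] / len(links[q])
                           for q in pages if p in links[q])
            new[p] = (1 - damping) / len(pages) + damping * incoming
        rank = new
    return rank

# Hypothetical link graph: both blogs link to the news site,
# and the news site links back to only one of them.
links = {
    "myblog.example":    ["news.example"],
    "otherblog.example": ["news.example"],
    "news.example":      ["myblog.example"],
}
ranks = pagerank(links)
```

In this graph `news.example` ends up with the highest score (two inbound links), `myblog.example` comes second (one inbound link from a popular page), and `otherblog.example`, with no inbound links at all, sits at the bottom; which is exactly why a link back is worth something to the person you’re quoting.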
Personally I have no problem with giving out links to those who’ve created content that I have purloined for my site. Usually I’m taking a quote from an article that’s inspired me to write a post on something, and they deserve to have their work recognised. More often than not though I’m not even using the content directly, just giving them a link to support the view I’m putting forth. This healthy little eco-system of tit-for-tat means that the original content creators get the credit they deserve and the information gets freely distributed across the web.
More recently however it’s become apparent that some people are more interested in just taking the content and not giving credit where it’s due. I’ve come across a couple of sites that have blatantly copied my articles verbatim and posted them as their own. You’d think I wouldn’t be able to find most of them, but since quite a few of my articles contain links to my other writings on the site, these content thieves unwittingly send links my way. When their site is eventually crawled by Google they show up on my report that shows all the links coming back into my site. For the most part though they’re a minority, and I’ve happily ignored the majority of them (in fact most of them seem to disappear rather quickly, leading me to believe they’re probably scam/malware sites).
Sure, it was a small thing and it took me all of 10 seconds to go into the HTML editor and remove it, but I can’t help but feel like that implicit trust that had been there for so long has been cast aside by those who think we’re all out to profit off their hard work. Nothing could be further from the truth: I want people to read the original articles, that’s why I link to them. But there are a few organisations out there who just have to be unnecessarily rude by doing these things, and they’re not going to win any friends by doing so.
Don’t make me write a plugin to scrub your cruft from WordPress blogs automatically. Hell hath no fury like a blogger/programmer scorned.
Ever since my last post on the whole Google vs China situation I’ve steered clear of jumping into the fray again. That’s not for lack of material though, especially when Google took the impressive step of shutting down its China servers and redirecting all google.cn traffic to their Hong Kong servers (which are politically and legally isolated from mainland China), putting the ball square back in China’s court. I knew it wouldn’t be long before the Chinese government retaliated, and I expected that they would do much the same as they have done to other services that don’t follow their rules, i.e. block them outright, especially after they accused Google of being spies.
It seems however that the situation is a little bit more complicated than that:
The Chinese government has attempted to restrict access to the Hong Kong–based servers where Google is offering uncensored search results to mainland China users.
On Tuesday, according to The New York Times, mainland China users could not see uncensored Hong Kong–based content after the government either disabled certain searches or blocked links to results.
Citing business executives “close to industry officials,” The Times also reports that China Mobile – the country’s largest wireless carrier – is under pressure from the government to cancel a pact with Google that puts the web giant’s search engine on the carrier’s mobile home page. The carrier is expected to end the pact – though it doesn’t have an agreement in place with a new search provider.
The Chinese government isn’t stupid, and they know that blocking Google outright would just fan the flames of the anger swelling up against them. Instead they’ve curtailed the uncensored search engine as best they could to match how it worked previously, leaving the switch to the Hong Kong servers mostly transparent to the less tech savvy amongst its residents (who really wouldn’t have been bothered by the initial censoring anyway). What does come as a surprise is the reaction of the government towards Google’s other businesses, which seems to be their way of strong-arming Google back into place.
Initially Google signed on to censor its results as it thought that at least having some presence in China was better than none at all. Whilst the shareholders were unanimously for the move (come on, who wouldn’t want their company to make more money?) they copped a beating from their critics, who trotted out the corporate motto of “Don’t Be Evil” as a sticking point for bowing to the Chinese government’s will. It was well founded, as many felt capitulating implied some level of support for the government’s activities, which, even at the best of times, have been highly questionable to observers. Even more interesting is that the same critics also threw a bit of flak Google’s way for pulling out of China, as it provided them with vindication of their initial stance.
Google didn’t make this decision lightly. Since their initial scuffle and rebellion against the Chinese government in January, Google’s shares have taken a whopping 6% hit. From a business perspective they would have to judge this (hopefully) short term damage to their stock price as less than what continuing business in China would have done to them, which is saying quite a lot. They’re far from shutting down all of their operations within China’s borders, but pulling the plug on their biggest asset shows that they aren’t keen to play games with the Chinese government anymore, despite the damage it will do to their bottom line.
I made the prediction that should Google pull out of China many companies would begin to follow suit. At the time I was really only focused on Internet based companies, as they’re the ones who struggle the most within China’s borders. As it turns out I was right as the domain name giant GoDaddy is discontinuing its services to the region:
GoDaddy.com Inc., the world’s largest domain name registration company, told lawmakers Wednesday that it will cease registering Web sites in China in response to intrusive new government rules that require applicants to provide extensive personal data, including photographs of themselves.
The rules, the company believes, are an effort by China to increase monitoring and surveillance of Web site content and could put individuals who register their sites with the firm at risk. The company also believes the rules will have a “chilling effect” on new domain name registrations.
GoDaddy’s move follows Google’s announcement Monday that it will no longer censor search results on its site in China.
It’s not only pure Internet companies that are looking for solutions to the China problem; Dell has also begun looking to other, less restrictive regions as well. So whilst there isn’t a mass exodus of all western based companies from China, there is a mounting sense that such companies aren’t willing to deal with the government’s regulations in order to do business there. Honestly I wouldn’t have expected such moves from either company, as Dell makes its money on the volumes it moves (provided for the most part by China) and GoDaddy isn’t renowned as a bastion of corporate morals, though they do have the freedom of not being controlled by shareholders. Still, if two large multi-national companies are willing to throw their weight behind Google, you’ve got to wonder how long other companies will put up with China’s restrictive market.
Hopefully enough big names will jump on the Google bandwagon and we’ll begin to see China’s government rethink its restrictive stance. I’m not naive though and I know it will take a lot of pressure for the Chinese government to make any concessions for western companies looking to make a name for themselves on China’s shores. However what we’re seeing now are the opening chapters to a book that still has many pages to be written, and has a long time to go until it’s published to the wider world.
Thanks to the engineer in me I’m somewhat of a hoarder. My wardrobe at home is littered with components of PCs gone by and hundreds of CDs that contain various drivers and backups that I will probably never, ever end up looking at again. My garage is filled with all manner of junk that I’ve kept on the off chance that I might have a use for it some day in some weird project, along with the box of every product I’ve bought over the years in case I ever want to sell them. It comes as no surprise then that I also have an extensive range of old video games around the place, from my good old NES (which currently resides at my parents’ house) to my original Playstation games.
In all honesty I haven’t played any of them in quite a long time. Every 6 months when the big clean up and chuck out comes around I always look on them fondly, but none of them make the transition to the lounge room for a playthrough. The same could be said for the games folder on my PC, which I’ve only ever deleted games from when space was getting critical (and thanks to the new 1TB drive I bought for it, that won’t be for a while now). Still they remain there should I find myself in a situation like I was in a couple of years ago, when I was without Internet for a week or so while moving house. Warcraft 3 and Freelancer are still my fallbacks during these times.
More recently it seems that many publishers are looking to cash in on our nostalgia. At the end of last year I picked up the Eidos pack (mostly for Batman and Tomb Raider… don’t judge me bro) and noticed that it included Deus Ex and Deus Ex 2. They were definitely a bonus as I tried to run the original from my massive game folder only to find it threw up some strange errors that my Google-Fu was unable to fix. Talking to a mate who had also bought the pack he said it worked without a problem and I saw him playing it a couple times over the next few days.
Getting past the fact that I got these titles basically for free (they’re $10 each on Steam by themselves), it still took me aback that in essence I had paid again for a game that I already owned. My original install of the game refused to run properly under Windows 7, so I can understand that at least some effort went into reworking it, but I wasn’t paying for the game per se, I was paying for the transition of format. The sour taste this left me with only got worse when I found a few people who had got the original to work without incident, which in essence meant I had paid for a service I really could have performed myself.
Eidos aren’t the only ones cashing in on fan nostalgia and format transitions. Nintendo has the Virtual Console, which has a selection of games from many of Nintendo’s old systems as well as some of their former competitors (Sega being one of them). Sony brought out the PSOne Classics section of the Playstation store to do much the same thing, offering up a catalogue of games that can be played directly off the hard drive. That also opened up the option for those who purchased a second generation PS3 fat or any slim console to play old games that their hardware no longer supported. Microsoft, as far as I can tell, hasn’t got a service like this for the Xbox 360, but since it can play nearly all the original Xbox games (with 470 verified as supported) there’s probably not much of a market for it. Plus the Xbox hasn’t been around as long as Nintendo’s or Sony’s consoles, so there’s little for them to cash in on there. Still they’ve done well with their online marketplace, which is arguably the best of the big 3’s offerings.
Still for someone like me who does actually have a rather large collection of old games the thought of paying for them again feels a little rough. I’ve got original PS1 games that still work in my PS3 that I’d love to be able to rip to the hard drive for those times when I might enjoy a 10 minute bash on something, but despite the fact that the technology is obviously there Sony will never let me do it. I’ll admit their service does provide something that is worthwhile (like when your originals are scratched to hell) but what about us long time fans who have massive backlogs that we’d love to play on our new consoles?
The primary argument from Sony et al is that most people buying new consoles are doing so to play new games, and I agree with that sentiment. The occasions when I bust out an old game are few and far between, especially when I struggle to finish one game a week these days. Still, asking long time fans (and let’s be honest here, these are the guys who are buying the old titles) to pony up again for games that they more than likely still have doesn’t do them any favours. I can understand that opening up such a service would present quite a few problems (how do you verify that the ripped game is playing on one console only?) but it’s still something I and many other fans would love to see.
Maybe I’m just spoiled since I’ve been doing it for a long time anyway…
Some days you just wake up to good news:
R18+ video games are a step closer to being allowed in Australia following the resignation of South Australian Attorney-General Michael Atkinson.
Mr Atkinson’s decision to leave the front bench means he will no longer be in a position to vote on changes to the country’s classification system, including the introduction of an R18+ rating for games.
The decision came after voters gave the Rann Government a kicking in last weekend’s state election. Mr Atkinson won his seat of Croydon comfortably but still suffered a 14.3 per cent swing against him, according to ABC reports.
Whilst a lot of gamers out there were hoping for an epic dethroning of Atkinson by the Gamers 4 Croydon party, who thrust themselves into the limelight on a single issue, it was always far more likely that he’d walk away with a comfortable win. However you’d be forgiven for not expecting that Atkinson would step down after he was elected (I sure didn’t), but in retrospect it’s classic politics. Remember the last federal election, when rumours circulated that John Howard was planning to retire part way through his term if he was reelected. He had already lost the election thanks to his bungled Work Choices legislation, but the notion that a vote for Howard was actually a vote for Costello didn’t win them any favours. Naturally, if Atkinson had announced he would retire from the front bench before the election you can almost guarantee he wouldn’t have won his seat again, especially with the large swing against him regardless.
So with Atkinson out of the way and the next meeting of the attorneys-general in April, it looks like we might see the introduction of an R18+ classification to Australia sometime in the near future. There’s still a lot of work to be done in this area (how will the games be displayed in retail stores? Will there be required ID checks? Etc.), however with none of the other representatives agreeing with Atkinson’s stance it looks like a sure thing that the classification will be put through. Couple this with the fact that if Atkinson’s replacement does give R18+ the tick they’re almost guaranteed to be looked upon more favourably, to the tune of 3.7%.
That’s probably the biggest surprise of the election, as Gamers 4 Croydon managed to grab a considerable percentage of the votes. Whilst they’re far from a single issue party, their claim to fame was the push for an R18+ rating. Atkinson did his best to cut them off with crazed legislation like banning posters during the election campaign (the cheapest and one of the most effective ways for smaller parties to get noticed), but they still managed to make quite an impression on the people of South Australia. They’ve stated that they’ll be undergoing a transformation soon to ditch the direct association with gamers in their party name (as the issue will be pretty much settled in the coming months), but they will still carry on with the G4C tag. For all the work they’ve put into it I’m sure we’ll continue to hear from them for a long time to come, and I hope they keep their progressive technological bent.
For what it’s worth I’m happy this thorn in my side will be disappearing soon. Whilst I was only marginally affected by the lack of an R18+ rating (curse you, Australian Left 4 Dead 2!), it was still something that needed to be rectified in order to make all entertainment mediums in Australia as equal as they should be. The next few months will see a flurry of activity to get this whole issue off the drawing board and into reality, and it really couldn’t come any sooner.
I just had to post this up:
That, my fellow space nuts, is White Knight Two (VMS Eve) carrying the very first SpaceShipTwo (VSS Enterprise) on its maiden voyage into the sky. The last time we saw something this momentous it was almost 7 years ago when the very first White Knight was carrying the first private sub-orbital vehicle SpaceShipOne into the sky. It’s been a long time coming and I’m sure that everyone at Scaled Composites and Virgin Galactic are over the moon that they can write down this first 3 hour test flight as a success.
The media has lit up in response to seeing the iconic pair up in the air and with good reason, it signals the dawn of a new era for those who need (or want) cheap access to space. I’m not just talking about those of us who are after those 5 minutes of weightlessness and the spectacular view of our precious blue marble. No there’s another class of people who are excited about the prospect of cheap space access, scientists:
But this next generation of rockets from Virgin Galactic (Richard Branson’s effort with Space Ship 2, a model of which is pictured above), Blue Origin (Jeff Bezos from Amazon.com), and others will reach a height making a lot of this science possible. The region up to 100 km is too high to reach by balloon, and too low for orbital rockets, which is why it’s been dubbed “the ignorosphere”. But it has its uses…
Observations of the Sun, for example, may not need much time to do because (you may have noticed) the Sun is pretty bright, so a three or four minute flight is enough to get some good data. The way incoming energy from the Sun couples with the Earth’s atmosphere is not hugely well understood, and a lot of it happens in this region high above the planet’s surface. Effects of low gravity on the human body can be tested, as well as on plants and other biological systems.
In fact, enough science can be done on these trips that the conference itself brought in 250 people interested in the topic. I was surprised at how many people came, as were the conference planners themselves: they were expecting half that many.
In another post he also links to a video by 2 scientists, amongst the few who have already booked tickets on board SpaceShipTwo, explaining exactly why this is such a big deal:
When I read those articles I was already convinced that cheap access to space was a good thing. Seeing SpaceShipTwo being carried up into the wild blue yonder just brought that all home and made me realise that we’re so close to having something that less than a decade ago was considered fantasy. There’s still many milestones to go before we get there but the clock is ticking down to the day when the first paid sub-orbital flights begin. After that it’s only a matter of time before we make the jump to orbital, and then the frontiers beyond.
Last year, whilst not a stellar one for games due to many delayed releases slipping into 2010, still delivered many great titles towards its end. I’ve played my way through most of them and, as those who have been following my exploits over the past 6 months or so will know, the quality has been pretty high. Naturally, after playing AAA title after AAA title my expectations for games have been set rather high, and lesser games (namely Bayonetta and Supreme Commander 2) have been left sitting on the shelves waiting for their turn. After looking through my Steam list I remembered that I got Batman: Arkham Asylum as part of the Eidos pack when it was a mere $50, and on the advice of many of my friends I decided to give it a go.
Thankfully Arkham Asylum, whilst drawing on the rich background offered by the Batman IP, isn’t based on any of the Batman movies that have been released. This helps it avoid the usual filter the gaming community puts on movie based games (read: utter rubbish) and gave the developers a lot more creative freedom in developing the story and characters. Still, every aspect that makes Batman who he is will be shown to the player at some point, so even dedicated Batman fans will find something in the game that appeals to them.
The story begins with Batman bringing The Joker in to Arkham Asylum, a super prison dedicated to housing the myriad of Gotham’s super-villains. Whilst it’s somewhat disappointing that you can’t gallivant around Gotham City like the real Batman, the game still does its best to make you feel like the caped crusader, a shining beacon of justice in an increasingly dark world. Whilst I initially felt very detached from Batman and his supporting characters, after the first few hours of gameplay I found myself wanting to know more about all of them, hoping to gain some form of insight into the twisted minds of the characters laid out before me.
My first gripe about the game is that (during the first few hours, before I became wholly engrossed in the story) the whole experience feels a little cheap. The graphics, for instance, aren’t terribly spectacular even with everything cranked up to the max, and the pre-rendered videos were done using the game engine. Whilst I can appreciate that this was done to keep the pace of the game and gloss over loading screens, when your pre-rendered movies and in-game sequences look the same I start wondering why you bothered pre-rendering them at all. Worse, the movies were rendered at a much lower resolution than my monitor (1680 x 1050), making them appear rather blocky. Additionally, the in game dialogue sequences were often rather stilted, with the characters barely moving and the faces showing little to no emotion. I know I’ve been spoiled with Mass Effect and Uncharted and it’s probably not fair to compare them, but that still didn’t take away that cheap feeling.
The most enjoyable part of Arkham Asylum is the combat. On first look it appears to be something of a hack ‘n’ slash adventure with a rapid succession of clicks able to take down a group of foes with little trouble. After a while though more and more variables are thrown in that force you to use other moves and combos in order to come out the other end successfully. Just when you think you’re unstoppable the game would throw yet another larger challenge at you, bringing you down a peg. It was this ramping up of the action that hooked me and kept me in my seat for the last 4 hours of the game, giving the bad guys of Arkham a good throttling. The only issue I had was counter moves not working most of the time, but I got around that by throwing Batman wildly all over the place to avoid having to use it.
On the flip side of this rough and tumble action game is a surprisingly well done stealth combat system. So whilst you could happily punch every foe into the ground, some situations will be a might easier if you instead sneak your way around and take the enemies out quietly. The unlockable upgrades for Batman allow for many interesting ways to dispatch your opponents silently, such as hanging upside down from a gargoyle and then swooping down to leave a thug dangling by one leg. Since the days of the Thief games few titles have managed to do stealth well, but Arkham Asylum gets it just right, making it as enjoyable and thrilling as punching your way through the game.
Yet another interesting mechanic is that of the good old fashioned platformer. There are several occasions where the camera will become locked and you’re forced into a jump puzzle, with the added complication of avoiding detection by a giant madman with glowing eyes. This psychological thriller mini-game was one of my favourite frustrations of Arkham Asylum, as it was just so far apart from the regular gameplay in terms of what you do and where you are.
Lastly you’re Batman the crack detective, following evidence and solving various puzzles to move the story along. I’ll admit a few of these had me stumped for a good while, reaching out to the Internet for answers. Still for the vast majority I was able to knock them down without too much hassle, giving me that warm fuzzy feeling that we all get when we conquer something without having to take the easy way out.
Overall Batman: Arkham Asylum was one of those games that was on my to-play list but I’d never really given a second thought to. It’s received widespread critical acclaim and garnered enough talk amongst my friends to have cemented itself firmly as a must play amongst us all, and after playing through it I can see why. It just oozes that classic Batman feel, and the little extra bits like the character bios and interview tapes just help to draw you in that much more. The game wraps up beautifully and lends itself to a sequel without leaving too many loose ends, and I for one can’t wait to see what these guys come up with next.
Batman: Arkham Asylum is available right now on the Xbox 360, PS3 and PC for $99, $99 and $49.99 respectively. Game was played on the second hardest difficulty setting with around 12 hours of gameplay and 65% completion on one playthrough.
Life is a tricky thing to get right. As far as we know right now we’re completely unique in this universe, and the conditions that led to us being here are both mysterious and endlessly intriguing. Whilst I won’t dive into the debate on science vs religion here (I’ve already done that), my own personal views are ones of abiogenesis, or more simply the idea that the complex life we know and love today arose from a long chain of events that started with just the basic elements of the universe. Whilst there’s still a lack of consensus around the actual mechanisms that would have led to this happening, the basic idea remains the same.
This is mostly due to the lack of another point of data, i.e. us encountering life that arose on another planet. So instead we start looking around our own earth to find examples of how life got started and where it exists. We’re discovering more and more that environments we thought were completely incompatible with life are actually teeming with creatures that seem almost impossible to us. From complex curiosities like the Yeti Crab and the Flashlight Fish to bacteria that thrive on the heat radiated from black smokers, it seems that once conditions are favourable to life you’ll end up finding it pretty much anywhere.
Still there are some places you just don’t expect to find life, like 185 meters below an ice sheet:
Researchers in Antarctica got a surprise visit from a creature in a borehole 185 meters (600 feet) below the Antarctic ice, where there is usually no light. A Lyssianasid amphipod, a shrimp-like creature can be seen swimming in this video. A NASA team had lowered a small video camera to get the first-ever photograph of the underside of an ice shelf when the curious little 7 cm (3- inch) shrimp stopped by to check out the equipment. Scientists say this could challenge the idea of where and how forms of life can survive. Anyone else thinking Europa?
To say that little shrimp was completely unexpected would be putting it lightly, as for all we knew there was absolutely nothing down there capable of supporting life any larger than simple bacteria. They also found what appears to be the tentacle of a jellyfish tangled around the cord of the camera, suggesting there’s not only life but also some amount of diversity down there. So whilst this might be cool and all, why is everyone asking about Europa?
For those of you not in the know, Europa is a moon of the planet Jupiter and is only a bit smaller than our very own Moon. It’s quite a striking thing to look at, as its surface looks like a round ice cube that’s covered in dust, very different to our closest neighbour, which is an even shade of dull gray. When we get up close we see it’s covered in long lines which look scarily similar to ice sheets on earth. As it turns out Europa’s crust is actually a solid layer of ice that’s a few kilometers thick, and under that is an internal ocean that, as our best guess goes, is tens of kilometers deep. The lines on the surface are cracks that opened up to the internal ocean below, whereupon water welled up to fill the gap.
What the scientists’ unexpected visitor tells us is that there is the possibility for complex life to evolve in places where light cannot reach it, and that means that there’s a chance that life evolved in the sea under Europa.
You may be wondering how life could evolve in a place that’s covered by kilometers of ice, in a frigid sea so far from the sun. Well, as it turns out, thanks to its giant parent planet and slightly non-circular orbit, Europa is constantly being squeezed and pulled every time it completes one round trip. This has the effect of creating an extreme amount of internal heat that not only serves to keep the internal ocean liquid but could also generate the volcanism that some theories believe is required to create life. Based on the evidence we’ve gathered here on earth, this is probably the only other place in the solar system where life could potentially exist.
It’s discoveries like this that get me all excited about the infinite possibilities of the universe. Whilst there’s no evidence that there are any other intelligent life forms out there the evidence is getting stronger and stronger that it’s there, we just have to go and find it. I know that one day we’ll send a probe to Europa to see what is really under that thick ice blanket and should we find life there you can bet your bottom dollar that it will change how we view ourselves and our place in the universe forever.