You know how I’ve got a thing for simple demonstrations of physical/scientific laws? Well check out this one:
I believe most people are familiar with the concept of a gyroscope: a spinning object (usually a disk) that exhibits some counter-intuitive behaviours, like appearing to defy gravity. The above demonstration showcases the mechanism by which a gyroscope functions quite aptly, in that the torque from the spinning wheel is applied perpendicular to its surface. This has the effect of making the heavy device seem almost weightless. It would seem to be defying gravity, but in fact the act of lifting the wheel up drains it of some of its kinetic energy and, as the professor alluded to, it could climb about 200ft in the air before it ran out of puff.
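That 200ft figure can be sanity-checked with a back-of-the-envelope calculation: treat the wheel as a solid disk and ask how high its rotational kinetic energy could lift it. The mass, radius and spin rate below are assumptions for illustration only, not figures from the demonstration.

```python
# Back-of-the-envelope: how high could a spinning wheel climb if all of its
# rotational kinetic energy converted to gravitational potential energy?
# All numbers below are assumed, purely for illustration.
import math

mass = 20.0     # kg, assumed wheel mass
radius = 0.3    # m, assumed wheel radius
rpm = 1500.0    # assumed spin rate

# Solid disk: I = 1/2 * m * r^2
moment_of_inertia = 0.5 * mass * radius ** 2
omega = rpm * 2 * math.pi / 60              # angular velocity in rad/s

kinetic_energy = 0.5 * moment_of_inertia * omega ** 2   # joules
g = 9.81
height_m = kinetic_energy / (mass * g)      # E = m*g*h  =>  h = E/(m*g)
height_ft = height_m * 3.28084

print(f"Kinetic energy: {kinetic_energy / 1000:.1f} kJ")
print(f"Maximum climb: {height_m:.0f} m ({height_ft:.0f} ft)")
```

With these made-up numbers the wheel tops out in the same ballpark as the professor's 200ft estimate, which is the point: the energy budget is finite and the "gravity defiance" is paid for out of the spin.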
Simply amazing, isn’t it?
Like all great debates there seem to be two irreconcilable sides to the great education question of “Should I go to university?”. On one side there’s the drive from parents, many of whom grew up in times when tertiary education was a precious resource, who want to give their children the very best chance at getting somewhere in life. On the other side is the self-taught movement, a growing swell of people who’ve eschewed the traditional progression of education and have done quite well. This in turn raises the question of whether further education is a necessity in today’s society or whether it’s all a giant waste of time that could be better spent pursuing the career of your dreams in the field of your choosing.
From a statistical point of view the numbers seem to favour pursuing some form of education beyond the secondary level. Employment rates for people with university level education are far higher than for those without and it’s quite typical for a university educated graduate to be earning more than the average wage. Facts like these are what have driven tertiary education levels in Australia from their lows in the post-World War 2 era to the dizzying highs that we see today. This trend is what inspired the Howard government to create things like the New Apprenticeship System in order to boost the industries that relied on people eschewing university education in favour of learning a trade. Indeed not going to university, at least in Australia, would appear to be outside the norm just as going to university used to be.
It should come as no surprise then that I am a product of the Australian university system. Being one of the lucky (or not so lucky, depending) people born before the cut-off date I was always a year younger than most of my classmates, which meant that, since I skipped the traditional gap year that nearly all Australians seem to take, I managed to graduate at the same time as many of my peers despite my degree being 4 years long. Like many of my fellow students I was fully employed long before graduation day and had a career path mapped out that would see me use my degree to its fullest potential. Whilst I have been extremely fortunate in my career I can’t say that my degree was 100% responsible for the success I’ve enjoyed, nor for that of others who’ve walked similar paths to mine.
Now there are some professions (law, medicine and I’d like to say engineering but everyone’s a bloody engineer these days) where university is a legal requirement and there’s no getting around that. However for many other industries a degree, whilst seen as a useful “foot in the door” for initial job applications, is ancillary to experience and length of time in the industry. Indeed my rise through the ranks of IT support was mostly on the back of my skills in a chosen specialization, with the degree just being a useful footnote; many didn’t even realise that I was one of the few people in the IT industry legally allowed to call myself an engineer. The question then, for me at least, shifts from “should I go to university” to “what value can I derive from university and how does that compare to similar time in industry?”.
It’s not exactly an easy question to answer, especially for an 18-year-old who’s fresh out of college and looking to make a hard decision about their future career. Indeed at the time I made the decision I didn’t think along those lines either, I just felt that it was probably the way to go. About 2 years into my degree though I grew jealous of the money and progress that my friends were making without going to university and began to question why I was there. Upon reflection I don’t believe my time at university was wasted, but the most valuable skills I learnt whilst there weren’t part of the syllabus.
This, I believe, is where you need to make a personal judgement call on whether university is right for you. The most valuable things I learnt at university (critical thinking, modularity, encapsulation, etc.) aren’t things that are reserved for the halls of an educational institution. If you’re autodidactic by nature then the value proposition of higher education might very well be lost on you. When I started out at university I was definitely not an autodidact, as I’d rarely seek to improve myself mentally beyond what was required of me. Afterwards however I found myself craving knowledge on a wide variety of subjects, revelling in the challenge of conquering a new topic. This is not to say that university is a clear path to becoming like this, and indeed it seems to have the opposite effect for many, but it sure did wonders for my fledgling mind.
My main point here is that there’s no definitive stance on whether university is right for you or not and anyone who tells you that is at best being misguided. To truly understand if higher education is the right path you must reflect on whether you can attain knowledge in other ways and in similar time frames. It’s a deeply personal thing to think about, one that requires an objective view of your own abilities and desires, and sometimes you won’t be able to make a logical decision. In that case it’ll come down to what you feel is right for you and, like many of my friends found out, you’ll eventually figure out if it was right for you or not.
It’s never too late to start learning again.
It was late Friday night. My companions and I had just finished up work as we stumbled out into the hot, humid air that surrounded us here in Brunei. After a nearly 12-hour day we had our sights fixed on grabbing some dinner and then an early night, as we would have to come in the next day to finish the job. As we chatted over our meals a curious image appeared on the television, one that I recognised very clearly as SpaceX’s Dragon capsule, which had been launched no more than a couple of days earlier. At the time it appeared that they were performing some last manoeuvres before the docking would occur. I couldn’t take my eyes away from it, staring intently at the capsule drifting serenely across the beautiful backdrop of our Earth.
The time came for us to make our departure and we headed back to the hotel. I hit up Facebook to see what was going on when I saw a message from a long time friend: “I hope you’re not missing this http://on.msnbc.com/JxfRMS”.
I assured him I wasn’t.
I was fixated on the craft, watching it intently from 2 different streams so that I’d never be out of the loop. I monitored Twitter like a hawk, soaking in the excitement that my fellow space nuts shared. I almost shed a tear when Houston gave SpaceX the go to make the final docking approach as, for some unknown reason, that was when it all became real: the very first private spacecraft was about to dock with the International Space Station. At 13:56 UTC on May 25th, 2012 the SpaceX Dragon became the first private spacecraft to be captured by the International Space Station and, not 6 minutes later, it was berthed on the Earth-facing port of the American Harmony module.
It’s an incredible achievement for SpaceX and proves just how capable they are. This is only the second launch of both the Falcon 9 rocket and the Dragon capsule, which demonstrates just how well engineered they are. Most of the credit here can go to the modularity of the Falcon series systems, meaning that much of the launch stack has already seen a fair bit of flight testing thanks to the previous Falcon 1 launches. The design is paying off in spades for them now, as with this kind of track record it won’t be long before we see them shipping humans up atop their Falcon rockets, and that’s extremely exciting.
The payload of the COTS Demo Flight 2 Dragon capsule is nothing remarkable, being mostly food, water, spare computing parts and small experiments designed by students. What’s really special about the Dragon though is its ability to bring cargo back to Earth (commonly referred to as downmass capability), something that no other craft currently offers. The ATV, HTV and Progress craft all burn up upon re-entry, meaning that the only way to get experiments back from the ISS now will be aboard the Dragon capsule. Considering that we now lack the enormous payload bay of the Space Shuttle this might be cause for some concern, but I think SpaceX has that problem already solved.
Looking over the scheduled flights it would appear that SpaceX is looking to make good on their promise to make the launches frequent in order to take advantage of the economies of scale that will come along with that. If the current schedule is anything to go by there will be another 2 Dragon missions before the year is out and the pace appears to be rapidly increasing from there. So much so that 2015 could see 5 launches of the Dragon system rivalling the frequency at which the Soyuz/Progress capsules currently arrive at the ISS. It’s clear that SpaceX has a lot of faith in their launch system and that confidence means they can attempt such aggressive scheduling.
I have to congratulate SpaceX once again on their phenomenal achievement. For a company that’s only just a decade old to have achieved something that no one else has done before is simply incredible and I’m sure that SpaceX will continue to push the envelope of what is possible for decades to come. I’m more excited than ever now to see the next Dragon launch as each step brings us a little closer to the ultimate goal: restoring the capability that was lost with the Space Shuttle. I’ve made a promise to myself to be there to see it launch and I simply can’t wait to see when it will be.
I learnt a long time ago that one of the biggest factors in pricing something, especially in the high tech industry, is convenience. For someone who was always a do-it-yourself-er the notion was pretty foreign to me, I mean why would I spend the extra dollars to have something done for me when I was equally capable of doing it myself? Of course the second I switched from being a salaried employee to a contractor whose time is billed in hours my equations for determining something’s value changed drastically, and I began to appreciate being able to pay to get something done rather than having to spend my precious time on it myself.
The convenience factor is what has driven me to try and find some kind of TV solution akin to those that are available in the USA. Unfortunately the only things that come close are the less than legal alternatives, which is a right shame as I would gladly pay the going rate to get the same service here in Australia. I’m not alone in this regard either, as many Australians turn to alternative methods in order to get their fix of their favourite shows. What this says to me is that the future of TV is definitely moving towards being a more on demand service like those provided by Netflix and Hulu and less like traditional TV channels.
Some industry executives would disagree with me on that point, to the point of saying that watching TV on the Internet is nothing short of a fad that will eventually pass. There have been a couple of clarifications to that post since it first went live but the sentiment remains that they believe people who abandon their cable subscriptions, “cable cutters” as it were, are in the minority and once economic conditions improve they’ll be back again. I can understand the reasoning behind a cable exec taking this kind of position, but it’s woefully misguided.
For starters Netflix alone accounts for around a third of peak bandwidth usage in the USA. To put this in perspective that’s double all BitTorrent traffic and triple YouTube, both considered to be hives of piracy among the cable cartels. This is in conjunction with the fact that people are using their Xboxes to watch movies and listen to music more than they’re using them to play games, usually through online services. Taking all of this into consideration you’d be mad to think that the future is still in traditional pay TV services, as there’s a very clear trend showing that on-demand media, provided through your local Internet connection, is what customers are looking for.
There are two reasons to explain why cable companies are thinking this way. The first, and least likely, is that they’re simply unaware of the current trends in the media market space. This is not entirely impossible as there have been a few examples in recent times (BlockBuster being the first that comes to mind) of companies that simply failed to recognise where the market was moving and paid the ultimate price for it in the end. The far more likely reason is simple bravado, as the cable companies can’t really take the stand and say that they’re aware of the changing market demands but will do nothing about it. No, for them it’s best, at least in the short term, to write off the phenomenon completely. In the long term of course this tactic won’t work, but I get the feeling none of them are playing a particularly long game at this point.
As I’ve said many times before media companies and rights holders have fought tooth and nail against every technological advancement for the past century and the only constant in every one of them is that in the end the technology won out. Eventually these companies will have to wake up to the reality that their outdated business models don’t fit into the current market and they’ll either have to adapt or die.
The wonderful world of tech Initial Public Offerings isn’t the same beast that it was back in the heyday of the dot com boom. Gone are the days when caution was thrown to the wind on any company that managed to demonstrate a modicum of social proof, when going IPO was just a way to get another round of funding to keep a company going until it found a sustainable business model. Today, whilst going IPO is still done with an eye to gathering more funds for expansion, IPOs are also big events for investment companies to make a quick buck on the hype surrounding a tech company going public. So much so that it’s become something of a trend for sexy high tech companies’ stocks to soar on the first day only to come back down to reality not long after.
Take LinkedIn for example. On its opening day the share price skyrocketed, more than doubling from its IPO price. Many took this as a sign that the tech bubble was returning with a vengeance, that tech companies would soon be inflating the market beyond its sustainable limits and that we were seeing the makings of another crash. More astute observers recognised that it was actually a ploy by the investment companies managing the IPO process. Instead of being a sign that these tech companies were fuelling another bubble, it was the investment companies severely under-pricing the IPO. Doing this would seem highly counter-intuitive, I mean who wouldn’t want the best debut price? The answer is of course, and unfortunately, very simple.
They wanted to be the ones who profited the most from the IPO.
Pricing the IPO so low meant that the initial buyers could acquire many more shares than they could have if the IPO had been priced accurately. Knowing that the stock was undervalued they then just had to wait for the price to hit its trading peak before unloading their shares on the market. Done at the peak of the LinkedIn IPO, underwriters like Morgan Stanley, Bank of America Merrill Lynch and JPMorgan were able to get an easy 1X return with little to no risk. Employees and preferred stock holders who elected to sell shares in the IPO got screwed of course, but that’s not a concern for these big name investment firms.
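The mechanics of that "buy low at the offering, sell at the pop" play can be sketched with some hypothetical numbers (these are illustrative, not LinkedIn's actual figures or allocations):

```python
# Hypothetical illustration of how an underpriced IPO pays off for the
# initial buyers. None of these numbers are real LinkedIn figures.

ipo_price = 45.0        # price paid by allocated buyers at the offering
first_day_peak = 95.0   # price the stock briefly hits on day one
shares = 100_000        # hypothetical allocation

cost = ipo_price * shares
proceeds = first_day_peak * shares
profit = proceeds - cost
roi = profit / cost     # return on the money put in

print(f"Cost:   ${cost:,.0f}")
print(f"Profit: ${profit:,.0f}")
print(f"Return: {roi:.0%}")
```

With those made-up numbers the allocated buyer more than doubles their money in a day, while every dollar of that gap is value the selling employees and early investors never saw.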
So it was with great anticipation that I watched the recent Facebook IPO. It’s by far the biggest tech IPO in history and also managed to set records in terms of trade volume on the first day. Since then it’s been a slow downhill trend for the nascent stock, shedding something like $11 per share since its high of around $42. Whilst the first day of trading was cause for concern, mostly because there wasn’t an insane pop like there was for all the other tech stocks, the following days have been nothing short of astonishing, at least for the investors who jumped in alongside everyone else on the first release of shares. You’d think that this was a bad thing but for this aspiring start-upper it’s nothing short of glorious.
The other tech IPOs that showed explosive growth only did so because they were engineered that way. Now I have no idea why the Facebook IPO didn’t, it certainly had all the makings of one, but there’s a good chance that the watchful eye of the SEC had something to do with it. For all the people who bought in early they’re undeniably screwed but there is one group of people who (rightly so) profited from Facebook’s IPO: the people at the company.
The shares that made up the original offering would have come from preferred stock (early investors), common stock (employees) and options that other people had accrued over Facebook’s 8 year lifespan. For them a rightly priced IPO that then declines in value means that they got the maximum amount of value they could and were not screwed over by an artificially low stock price. Of course this has the not-so-nice aspect of pissing off a lot of investors, many of whom are now crying foul over the share price making a beeline for penny stock levels. That’s warranted to some extent but you’ll forgive me if I don’t shed a tear for those companies who screwed over many a tech company in the past in the pursuit of a quick buck.
The question on everyone’s lips is where Facebook’s stock will go from here. Honestly I’m not sure; they’re definitely struggling with mobile, which is starting to heavily cut into their revenue and was apparently the reason behind their Instagram acquisition, but you’d figure that since they’ve innovated heavily in the past they should be able to turn it around in the not too distant future. Still, all this negative press isn’t going to do the stock price any favours, so unless the commentators want to see the price keep falling they should probably just shut their yaps and wait for the market to properly correct. The next few weeks will be very interesting times indeed and I can’t wait to see how the investor butthurt plays out.
If you’ll allow me to get a little hipster for a second you’ll be pleased to find out that I’ve been into the whole Multiplayer Online Battle Arena (MOBA) scene since it first found its roots way back in Warcraft 3. Back then it was just another custom map that I played along with all the other customs I enjoyed, mostly because I suffered from some extreme ladder anxiety. Since then I’ve played my way through all of the DOTA clones that came out (Heroes of Newerth, League of Legends and even that ill-fated experiment from GPG, Demigod) but none of them captured me quite as much as the seemingly official successor, DOTA 2, has.
Defense of the Ancients 2 should be familiar to anyone who played the original DOTA or one of the many games that followed it. In a team of 5 you compete as single heroes, choosing from a wide selection who all have unique abilities and uses, pushing up one of three lanes with a bunch of NPC creeps at your side. The ultimate goal is the enemy’s Ancient, a very well defended building that will take the concerted effort of all team members to reach and, finally, destroy. There are of course many nuances to what would, on the surface, seem to be a simple game and it’s these subtleties which make the game so engrossing.
When compared to its predecessor, which was limited by the graphics engine of WarCraft 3, DOTA2 stands out as a definite improvement. It’s not a graphical marvel, much like many others in the MOBA genre, instead favouring heavily stylized graphics much like Blizzard does for many of their games. The recent updates to DOTA2 have seen some significant improvements over the first few initial releases, both in terms of in-game graphics and the surrounding UI elements. Valve appears to be heavily committed to ensuring DOTA2’s success and the graphical improvements are just the tip of the iceberg in this regard.
Back in the old days of the original DOTA the worst aspect of it was finding a game and then hoping that no one would drop out prematurely. There were many 3rd party solutions to this problem, most of which were semi-effective but were open to abuse and misuse, but none of them could solve the problem of playing a game with similarly skilled players. DOTA2, like nearly every other MOBA title, brings in a matchmaking system that will pair you up with other players and also brings with it the ability to rejoin a game should your client crash or your connection drop out.
Unfortunately since DOTA2 is still in beta the matchmaking system is not yet entirely working as I believe it’s intended to. It does make the process of finding, joining and completing a game much more streamlined but it is blissfully unaware of how skilled a potential player is. What this means is that games have a tendency to swing wildly in one team’s favour and, unlike other games where this leads to a quick demise (thus freeing you up to play again), DOTA is instead a drawn out process and should you decide to leave prematurely you’ll be hit with a dreaded “abandoned” mark next to your record. This is not an insurmountable problem though and I’m sure that future revisions of DOTA2 will address this issue.
The core gameplay of DOTA2 is for the most part unchanged from back in the days of the original DOTA. You still get your pick from a very wide selection of heroes (I believe most of the AllStars team are in there), the items have the same names and you still go through each of the main game phases (laning, pushing, ganking) as the game progresses. There have been some improvements to take away some of the more esoteric aspects of DOTA2 and for the most part they’re quite welcome.
Gone are the days where crafting items required either in-depth knowledge of what made what or squinting at the recipe text; instead you can click on the ultimate item you want to craft and see what items go in to make it. Additionally there’s a list of suggested items for your hero which, whilst not being entirely appropriate for every situation, will help to ease players into the game as they learn some of the more intricate aspects of iteming a character correctly. It’s still rather easy to draw the ire of players who think they know everything there is to know about certain characters (I’ll touch more on the community later) but at least you won’t be completely useless if you stick to the item choices the game presents for you.
Knowing which hero to pick is just as important as knowing how to item them and thankfully there are some improvements to the hero choosing system that should make doing so a little easier for everyone. Whilst hero picking has always made delineations between int/str/agi based heroes you can now also filter for the kind of role the character fills, like support, ganker or initiator. For public games though it seems everyone wants to play a carry (mostly because they’re the most fun) and there’s little heed paid to good group composition. This is not a fault of the game per se, but there is potential there for sexing up the lesser played types so that pub compositions don’t end up as carry on carry battles.
It’s probably due to the years of play testing that the original DOTA received but the heroes of DOTA2 are fairly well balanced, with no outright broken or overpowered heroes dominating the metagame. There are of course heroes that appear to be broken in certain situations (I had the pleasure of seeing Outworld Destroyer kill my entire team in the space of 10 seconds) but in reality it’s the player behind that character making them appear broken. This bodes well for the eSports scene that Valve is fostering around DOTA2 and they’re going to need to keep up this level of commitment if they want a chance of dethroning the current king, League of Legends.
The eSports focused improvements in DOTA2 are setting the bar for new game developers who have their eye on developing an eSports scene for their current and future products. The main login screen has a list of the top 3 spectated games and with a single click you can jump in and watch them with a 2 minute delay. This can be done while you’re waiting to join a game yourself and once your game is ready to play you’re just another click away from joining in on the action. It’s a fantastic way for both newcomers and veterans of the genre to get involved in the eSports scene, but that’s just the start of it.
Replays can be accessed directly from a player’s profile or downloaded from the Internet. Game casters can embed audio directly into the replay, allowing users to watch the replay in game with the caster’s commentary. They can also watch the caster’s view of the game, use a free camera or use the built-in smart camera that will automatically focus on where the most action is happening. It’s a vast improvement over how nearly all other games do their replays and Valve really has to be commended for the work they’ve done here.
For all the improvements however there’s one thing that DOTA2 can’t seem to get away from and that’s its elitist, almost poisonous community that is very hostile to new players. Whilst the screenshot above is a somewhat tongue-in-cheek example of the behaviour that besets the DOTA2 community, it still holds true that whilst many concessions have been made to make the game more palatable for newcomers, the DOTA2 community still struggles with bringing new players into the fold. League of Legends on the other hand cracked this code very early on and its subsequent success is a testament to how making the game more inviting for new users is the ultimate way to drive the game forward. I don’t have an answer as to how to fix this (and whilst I say LoL cracked the code I’m not 100% sure their solution is portable to DOTA2) and it will be very interesting to see how DOTA2 develops in the shadow of the current MOBA king.
DOTA2 managed to engage me in a way that only one other game has managed to do recently and I believe there’s something to that. Maybe it’s a bit of nostalgia or possibly my inner eSports fan wanting to dive deep into another competitive scene, but DOTA2 has really upped the MOBA experience that I first got hooked on all those years ago and failed to rekindle with all the other titles in this genre. I’d tell you to go out and buy it now but it’s still currently in beta, so if you can get your hands on a key I’d definitely recommend doing so, and if you’re new to this kind of game just ignore the haters, you won’t have to deal with them for long.
Defense of the Ancients 2 is currently in beta on PC. Approximately 60 hours of total game play were undertaken prior to this review with a record of 32 wins to 36 losses.
Coding a location based service introduced me to a lot of interesting concepts. The biggest of these was geocoding, the imprecise science of transcribing a user’s IP address into a real world location. I say imprecise because there’s really no good way of doing it and most of the geocoding and reverse-geocoding services out there rely on long lists that match an IP to its location. These lists aren’t entirely accurate, so the location you get back from them is usually only good as an initial estimate and you’re better off using something like the HTML5 Geolocation API or just simply asking the user where the hell they are in the world. Unfortunately those inaccurate lists drive a whole lot of current services, most of them with the intent of limiting said service to a certain geographical location.
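The list-based approach can be sketched in a few lines: a table of IP ranges mapped to locations, consulted with a linear lookup. This uses Python's standard ipaddress module; the ranges (all reserved documentation blocks) and the cities are made up for illustration, and the gap in coverage is exactly why these lists are only an estimate.

```python
# A toy IP-to-location lookup table, illustrating how list-based geocoding
# works and why it can only ever be an estimate. Ranges and cities are
# made up (the networks are reserved documentation blocks).
import ipaddress

GEO_TABLE = [
    (ipaddress.ip_network("203.0.113.0/24"), ("AU", "Canberra")),
    (ipaddress.ip_network("198.51.100.0/24"), ("US", "New York")),
    (ipaddress.ip_network("192.0.2.0/24"), ("NZ", "Auckland")),
]

def geocode(ip_string):
    """Return (country, city) for an IP, or None if it's not in the table."""
    ip = ipaddress.ip_address(ip_string)
    for network, location in GEO_TABLE:
        if ip in network:
            return location
    return None  # unknown: fall back to asking the user or browser geolocation

print(geocode("203.0.113.42"))   # ('AU', 'Canberra')
print(geocode("8.8.8.8"))        # None: not in our (incomplete) list
```

Real services use far bigger tables and faster lookups, but the failure mode is the same: the answer is only as good as the list.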
I’ve written about this practice before and how it’s something of a hangover from the times of DVDs and region locking. From a technology standpoint it makes little sense to block access to certain countries (whether they block you is another matter) as all you’re doing is limiting your market. From a business and legal standpoint the waters are a little murkier as most of the geo-restricted services, the ones of note anyway, are done simply because it’s either not in their business interests to do so (although I believe that’s short sighted) or there’s a lot of legal wrangling to be done in order for it to be made available globally.
A plucky New Zealand ISP, FYX, was attempting to solve this problem of geoblocking and, whilst they have withdrawn the service from the market (but are looking to bring it back), I still want to talk about their approach and why it’s inherently flawed.
FYX is offering what they call “Global Mode” for their Internet services, which apparently makes their users appear as if they’re not from any particular country at all. Their thinking is that once you’re a global user, services that were once blocked because of your region will suddenly be available to you, undoing the damage to the free Internet that those inaccurate translation lists can cause. However the idea that no location means ineffective geoblocking is severely flawed, which would be apparent to anyone who’s had even a passing encounter with these services.
For starters most sites with geoblocking enabled do so using a whitelist, meaning that only people from specific countries will be able to access those services. Things like Hulu and Netflix are hard coded to allow IPs residing within the USA and anything that’s not on those lists will automatically get blocked. Of course there’s some in-browser trickery that you can do to get around this (although that’s not at the ISP layer) but the only guaranteed solution is to access them through a connection that appears to originate from an IP they trust. Simply not updating the location on those lists won’t do the trick, so you’d need to do something more. It’s entirely possible that they’re doing something fancier than this but the solution I can think of wouldn’t be very scalable, nor particularly profitable.
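The whitelist logic above can be sketched to show exactly why "no location" doesn't help: an IP that's missing from every list fails the whitelist check just as hard as a known foreign one. The table here is a hypothetical stand-in, not any real service's list.

```python
# Sketch of whitelist-based geoblocking: only IPs that resolve to an allowed
# network get through. An IP with *no* known location is still rejected,
# which is why merely hiding your location doesn't defeat this scheme.
# The whitelist below is hypothetical (a reserved documentation block).
import ipaddress

US_WHITELIST = [ipaddress.ip_network("198.51.100.0/24")]  # stand-in for a real US list

def allowed(ip_string):
    """True only if the IP falls inside a whitelisted network."""
    ip = ipaddress.ip_address(ip_string)
    return any(ip in net for net in US_WHITELIST)

print(allowed("198.51.100.7"))   # True: on the whitelist
print(allowed("203.0.113.9"))    # False: known non-US address
# An "anonymous" IP that appears on no list fails the exact same check:
print(allowed("192.0.2.55"))     # False: unknown location is blocked too
```

The only way past a check like this is to present an address the service already trusts, which is why VPNs with US endpoints work where "Global Mode" wouldn't.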
It also seems that they might’ve got the attention of some rights holder groups who put pressure on their parent company to do away with the service. Legally there didn’t seem to be anything wrong with the idea (apart from the fact that it probably wouldn’t work as well as advertised) but that wouldn’t stop media companies from threatening to take them to court if such a service continued to be offered. It really shows how scared such organisations are of new technology if a small time ISP with a not-so-special service can be a big enough blip on the radar to warrant such action. I’ll be interested to see how FYX progresses with this, especially if they detail some more info on just how they go about enabling their Global Mode.
The reality of the situation is that we’re trending to a much more connected world, one where the traditional barriers to the free flow of information are no longer present. Companies that made their fortunes in the past need to adapt to the present and not attempt to litigate their way to profitability. Eventually that won’t be an option for them (think BlockBuster vs Netflix) and I really can’t wait for the day that geoblocking is just a silly memory of when companies thought that their decades old business models still worked in an ever changing world.
I’m a big fan of simple ways to demonstrate complicated physical principles. For the ideas of potential energy and mechanical advantage there’s nothing better than good old fashioned dominoes, lined up in single file, where it only takes one to fall to start a chain reaction down the line. What most people don’t know is that a domino can knock over another domino almost 1.5 times bigger than itself, and that leads us to interesting demonstrations like this one:
In reality the dominoes’ power comes from you, the person who stood them up in the first place. The act of knocking them over is simply a recovery of the energy you expended to lift them, transferring it into kinetic energy. Being able to knock over a larger domino means you can amplify the amount of energy recovered, which is commonly referred to as a mechanical advantage.
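The amplification compounds surprisingly fast. As a rough back-of-the-envelope model (the masses and heights below are made-up illustrative numbers, not measurements from the demonstration): if each domino is 1.5 times larger in every dimension, its mass scales with volume (a factor of 1.5³) and its centre of mass sits 1.5 times higher, so the stored potential energy grows by roughly 1.5⁴ ≈ 5× per domino.

```python
# Toy model of a scaling domino chain: each domino is 1.5x larger than
# the last in every linear dimension, so stored potential energy grows
# by about 1.5**4 per step. Starting mass/height are illustrative only.
SCALE = 1.5
G = 9.81  # gravitational acceleration, m/s^2

def chain_energy(n, mass0=0.01, height0=0.05):
    """Potential energy (in joules) stored in each domino of an n-domino chain."""
    energies = []
    for i in range(n):
        mass = mass0 * SCALE ** (3 * i)          # kg: mass scales with volume
        com_height = (height0 * SCALE ** i) / 2  # m: centre of mass at half height
        energies.append(mass * G * com_height)
    return energies

energies = chain_energy(10)
print(f"first domino: {energies[0]:.2e} J, last: {energies[-1]:.2e} J")
print(f"amplification: {energies[-1] / energies[0]:.0f}x")
```

Under these assumptions a chain of just ten dominoes releases over two million times more energy at the last domino than at the first, which is why the big ones in these demonstrations fall with such a satisfying thud.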
It’s a wonderfully simple demonstration, don’t you think?
When I started out with this idea of doing one review a week it was mostly because I always seemed to find myself with a backlog of big name titles to play through. There aren’t, however, enough titles like that to sustain that kind of pace throughout the year, and for the first 3 months of this year most of the titles I was reviewing were actually things released last year that I hadn’t got around to playing. Consequently I’ve found myself playing a lot of games that I wouldn’t have otherwise given a second thought, and Warp, the action-puzzle-stealth hybrid from Trapdoor, is one of those titles that I wouldn’t have considered playing.
Warp has you playing as an oddly shaped alien named Zero (something I don’t think was made clear in the game; I certainly can’t remember anyone saying his name) who wakes up in an undersea laboratory. You’re surrounded by scientists who begin to perform surgery on you to remove a disk-shaped object, which turns out to be your internal power source. After a short obstacle course, which serves as the tutorial for the basics of the game, you’re reunited with your power supply and regain your ability to teleport short distances. Warp flows on from there, following Zero’s quest to escape the confines of the laboratory.
On first appearances Warp isn’t too much to look at, mostly due to its roots as an Xbox Live Arcade game. For the actual game play the graphics are fine, with Warp making heavy use of lighting effects to cover up its less-than-stellar models, but the cut scenes unfortunately didn’t appear to get any extra treatment to make them any better. Thus the artwork, graphics and sound work are all around the level I’d come to expect from around 5 years ago, back when I had friends tinkering with 3D models. Sure, I can understand that there are limitations thanks to the target platform, but when you don’t even bother to attempt rudimentary lip syncing for dialog scenes I get the feeling that a lot of this was done due to budgetary constraints rather than a lack of technical ability.
The core game play of Warp revolves around Zero’s ability to teleport short distances and also hide inside objects and people. At first it starts off with rudimentary things like finding non-obvious ways to get around your environment but as the game progresses the challenges start to scale up dramatically. Zero also gains additional abilities as you complete levels, augmenting himself with things like producing a controllable decoy (so you can get guards to kill each other), using said decoy to swap places with other objects, and being able to launch objects a great distance. The combination of all these abilities makes for some rather interesting puzzles, some of which are actually quite challenging to figure out.
Also, thanks to the integration of a half-decent physics engine, there’s actually the opportunity for a lot of emergent game play, which makes it a whole lot more interesting than your rudimentary puzzle game. Since every object can be moved and flung around quite easily there’s a lot of opportunity to break the intended solution by bringing along objects that the game doesn’t expect you to have. There are also times when it goes horribly wrong, like the travelator towards the end whose direction you can change: try destroying both power supplies and the animation stops, but you’ll still move if you stand on it. Still, problems with the physics-based game play are thankfully few, although Warp is far from free of issues.
Scattered throughout the game are challenges like the one above that push your use of certain skills to the limit in order to get extra “grubs” that are used to upgrade your abilities. These are usually timed affairs and, in the words of someone I can’t remember, “You know how to make something not fun in a game? Slap a timer on it”, and that’s exactly how all these challenges feel: not fun. I probably spent about a fifth of my in-game time trying to get better than bronze on these challenges, and while I managed it on a few of them, at no time did I have fun doing it. It was kind of like Super Meat Boy all over again, where the replay value is derived from its rather frustratingly hard difficulty. Not all of them were like this but the initial ones definitely were, and while it’s possible that’s just me being bad at them, there’s another reason why I think it’s not.
The game is a very obvious port from Xbox 360 to PC and that brings with it all the issues that are usually associated with such ports. For starters, whilst the mouse is available in the initial start-up screens, it doesn’t work in the actual game for anything, not even the upgrade menus. Instead of redesigning the control paradigm around the mouse and keyboard, all the interface controls are simply remapped to the keyboard. This means that sometimes the game engine expects input in a certain way and doesn’t get it, which can lead to all sorts of unintentional behavior. It’s not game breaking once you get used to it but it does smack of lazy porting just to grab another market.
The upgrade system is interesting at first glance, being able to augment your abilities in ways that change the game play significantly. As you can see above I chose to invest my grubs in certain key skills, namely the ones that form the basis of the core game play (teleporting and moving faster). These definitely made the game somewhat easier, as there were many times I could fudge my way through or get out of a situation that I wouldn’t have been able to otherwise, but looking over the other skills I couldn’t be sure why anyone would get them or how they’d make the game easier.
In fact I played the majority of the game sans these two skill upgrades, mostly because I didn’t bother with the challenges nor religiously track down grubs in order to get said upgrades. This isn’t a problem with Warp per se, more with the idea of combining a puzzle game with an upgrade system. For all the main challenges you’re going to have to give the player the required skills anyway, so the upgrades can really only make the player’s life easier. Deus Ex: Human Revolution did the upgrades-that-unlock-other-pathways/secrets bit quite well, but it still had to accommodate the possibility that the player didn’t choose a specific upgrade, at least for story critical sections. All of Warp’s sections appear to be story critical though, rendering the upgrade system kind of moot.
All that being said, I still found Warp extremely fun to play. I’m not sure how I’d describe it, but the combination of puzzle solving, the over-the-top reactions from NPCs when they spotted you and the decidedly dark enjoyment you get from making people explode from the inside out made my time with Warp very enjoyable. This is in spite of a story that’s so thin on the ground it might as well not exist, something indie games like this don’t usually skimp on. Considering this game can be had for $20 as part of a 5 pack of games I think it’s incredibly great value for the time I spent with it and would recommend giving it a shot.
Warp is available on PC, PS3 and Xbox 360 right now for $9.99 or equivalent on all platforms. The game was played entirely on the PC with around 5 hours of total play time and about 2/3rds of the grubs found.
There’s little doubt that the past decade has brought upon us rapid change that our current legislature is only just beginning to deal with. One of my long time bugbears, the R18+ rating for games, is a great example of this, showing how outdated some of our policies are when it comes to the modern world. Unfortunately such political antiquity isn’t just isolated to the video games industry; it extends to all areas that have been heavily affected by the changes the Internet has brought, not least of which is the delivery of content such as TV programs, newspapers and radio. This rift has not gone unnoticed and it seems the government is finally looking to take action on it.
Enter the Convergence Review, a report that was commissioned in 2011 to review the policy framework surrounding Australia’s media and communications. It’s a hefty tome, weighing in at some 176 pages, detailing nearly every aspect of Australia’s current regulatory framework for delivering content to us Australians. I haven’t managed to get through the whole thing, but you don’t need to read far into it to understand that it’s a well researched and carefully thought out document, one that should definitely be taken into consideration when reforming Australia’s regulatory framework for media. There are a couple of points in there that really blew me away and I’d like to highlight them here.
For starters the review recommends that the licensing of broadcasting services be abolished in its entirety. In essence this puts traditional broadcasters on a level playing field with digital natives who don’t have the same requirements placed upon them and their content. Not too long ago such an idea would have seemed a foolish notion, as no licensing means that anyone could just start broadcasting whatever they wanted with no control over how it was presented. However with the advent of sites like YouTube such licence-free broadcasting is already a reality, and attempting to regulate it in the same fashion as traditional methods would be troublesome and most likely ineffective. Abolishing licensing removes restrictions that don’t make sense anymore given that the same content can be delivered without it.
Such a maneuver brings into question what kind of mechanisms you would have to govern the kind of content that gets broadcast. The review takes this into consideration and recognizes that there needs to be some regulation in order to keep in line with Australian standards (like protecting children from inappropriate content). However, the regulations it proposes would not apply to every content organisation. Instead they would target content organisations based on the size of the organisation and the scope of their audience. This gives content organisations a lot of flexibility in how they deliver content and should encourage quite a bit of innovation in this area.
The review also recommends that media standards apply to all platforms, making the regulations technology agnostic. Doing this would ensure that we don’t end up in this same situation again when another technological breakthrough forces a rethink of our policy platform, which, as you can tell from the review, is going to be a rather arduous process. Keeping the standards consistent across mediums also means that we won’t end up with another R18+ situation, where we have half-baked legislation for one medium and mature frameworks for another.
The whole review feels like a unification that’s been a long time coming, as the media landscape becomes increasingly varied to the point where treating each medium individually is complicated and inefficient. The points I’ve touched on are just the most striking of the review’s recommendations, with many more solid ideas for reforming Australia’s communications and media policies for a future that’s increasingly technologically driven. Seeing reports like this gives me a lot of hope for Australia’s future and I urge the government to take the review to heart and use it to drive Australia forward.