Microsoft has been pursuing its unified platform strategy for some time now, with admittedly mixed results. The infrastructure to build that kind of unified experience is there, and Microsoft's own applications have demonstrated that it can be taken advantage of, but it hasn't spread to third-party developers and integrators the way the company intended. A big part of this is the fact that its mobile offering, Windows Phone, is a minor player that has been largely ignored by the developer community. Whilst its enterprise integration can't be beaten, the consumer experience, which is key to driving further adoption of the platform, has been severely lacking. Today Microsoft announced a radical new approach to improving this: allowing iOS and Android apps to run as Universal Applications on the Windows platform.
The approach differs slightly between platforms but the final outcome is the same: applications written for the two current kings of the smartphone world can run as universal applications on supported Windows platforms. Android applications can be submitted in their native APK form and will then run in a para-virtualized environment (one that includes aspects of both emulation and direct subsystem integration). iOS applications, on the other hand, can as of today be compiled directly from Objective-C into Universal Applications that run on Windows Phones. Of course there will likely still be some effort required to bring the UX into line, but not having to maintain separate core codebases means the barriers to developing a cross-platform app that includes Windows Phone will essentially drop to nothing.
Of course whether or not this will translate into more people jumping onto the Windows Phone ecosystem isn't something I can readily predict. Windows Phone has been languishing in single-digit market share ever since its inception and none of the changes Microsoft has made to lift that number have had a meaningful impact. A better app ecosystem will be a drawcard for those who like Microsoft but haven't wanted to make the transition, but this all relies on developers taking the time to release their applications on the Windows Phone platform. Making the dev experience easier is the first step, but then it's a chicken-and-egg problem of not having enough market share to make the platform attractive to both ends of the spectrum.
Alongside this Microsoft also announced the ability for web pages to use features of the Windows Phone platform, enabling them to become hosted web apps with enhanced functionality. It's an interesting approach to enabling a richer web experience, however it feels like something that should probably be a generalized standard rather than proprietary tech that only works on one platform. Microsoft has shown that it's willing to open up products like this now, something it never did in the past, so this could just be the beachhead to gauge interest before pushing it to a wider audience.
This is definitely a step in the right direction for Microsoft, as anything they can do to reduce the barrier to supporting their ecosystem will go a long way towards attracting more developers to it. There's still a ways to go before their mobile platform is a serious contender against the current big two, but should this app portability program pay dividends there's real potential for them to start clawing back some of the market share they once had. It's likely going to be some time before we know if this gamble has paid off, but I think everyone can agree that they're at least thinking along the right lines.
I’ll be honest here: my faith in OUYA has long since faded away.
Wind back three years and you'd get a completely different story; the idea of a console freed from publishers and big-money marketers appealed to the guy who wanted to see the indie renaissance turn into a full-blown revolution. I wasn't alone in thinking this and many of the backers saw the OUYA as the key to unlocking a console market for indie devs that, back then, wasn't really available. The late release to backers and dated hardware meant it wasn't much of a platform indie devs wanted to find themselves on, which in turn meant users couldn't find much to love about the console either. Couple that with the fact that the major consoles are now extremely friendly to indie devs and the OUYA has lost the mission it once championed. The result is that OUYA has been struggling and is now looking for someone to rescue it.
Over its life OUYA has taken in some $23 million in Kickstarter funds and venture capital, making it quite a well-funded startup comparatively. A quick search will reveal a lot of vanity metrics, like the fact they have over 1,000 games and 40,000 developers signed up, but nowhere will you find the number of units shipped nor any solid figures on how many titles developers are able to move on the system. Indeed even the metrics they tout paint a pretty grim picture, as only 1 in 40 developers signed up to the program has actually released a title, a mere 2.5%. Mix in the conversion rate of users actually on the console and it's really no wonder they're looking for a buyer, as the revenue on their sales, both hardware and software, can't be good.
Thinking back on it though, there's no one defining problem I can point to that soured OUYA; rather it was death by a myriad of little problems that compounded the impending irrelevance it was facing. Not getting the console into the hands of the enthusiasts, i.e. the backers who were genuinely excited to see the product come to life, long before anyone else could get their hands on it meant that when it did finally release there was no Kickstarter-fuelled fanfare to go with it. Taking as long as it did to release meant the hardware had long since been surpassed, and whilst it was sufficient at the time it quickly started to show its age. The build quality of the console, whilst decent for the price point, wasn't great and only compounded the idea that it was a gimmick and not much more. All of this, and more, has meant that the OUYA has simply faded from the greater gaming community's consciousness and I don't think it's likely to return.
I won't pretend to have a solution for their woes as I don't. As far as I'm concerned the ship has long since sailed on the OUYA idea and now, with the major console players coming on board with better support for indie developers, there really isn't any room left for them to play in. Today most people would much rather just play Android games on their Android phone and, should they want it, pair it with the controller of their choice. If OUYA manages to find a willing buyer I'd be very surprised, as I can't really see a future for OUYA that ends with it becoming a successful niche console.
My Xperia Z managed to last almost two years before things started to go awry. Sure, it wasn't exactly a smooth road for the entire time I had the phone, what with the NFC update refusing to apply every time I rebooted or the myriad of issues that plagued its Android 4.4 release, but it worked well enough that I was willing to let most of those problems slide. However the last month of its life saw its performance take a massive dive, and no matter what I did to cajole it back to life it continued to sputter and stutter, making for a rather frustrating experience. I had told myself that my next phone would be a stock Android experience so I could avoid any potential carrier or manufacturer issues, and that left me with one option: the Nexus 6. I've had this phone for just over a month now and I have to say that I can't see myself going back to a non-stock experience.
First things first: the size. When I moved to the Xperia Z I was blown away by how big it was and figured anything bigger would just be unwieldy. Indeed when I pulled the Nexus 6 out of the box it certainly felt like a behemoth beside my 5″ device, however it didn't take me long to grow accustomed to the size. I attribute this mostly to subtle design features, like the tapered edges and the small dimple on the back where the Motorola logo is, which make the phone feel both thinner and more secure in the hand than its heft would suggest. I definitely appreciate the additional real estate (and the screen is simply gorgeous), although had the phone come in a 5″ variant I don't think I'd be missing out on much. Still, if the size is the only thing holding you back on buying this handset I'd err on the side of taking the plunge as it quickly becomes a non-issue.
The two years since my last upgrade have seen a significant step up in the power mobile devices are capable of delivering, and the Nexus 6 is no exception. Under the hood it's sporting a quad-core 2.7GHz Qualcomm chip coupled with 3GB of RAM and the latest Adreno GPU, the 420. Most of this power is required to drive the absolutely bonkers resolution of 2560 x 1440, which it does admirably for pretty much everything, even managing to play the recently ported Hearthstone relatively well. This is all backed by an enormous 3220mAh battery which seems more than capable of keeping the thing running all day, even when I forget that I've left tethering enabled (it usually has about 20% left the morning after I've done that). The recent updates seem to have made some slight improvements to battery life, but I didn't have enough time before the updates came down to make a solid comparison.
Layered on top of this top-end piece of silicon is the wonderful Android 5.1 (Lollipop) which, I'm glad to say, lives up to much of the hype I had read before laying down the cash for the Nexus 6. The material design philosophy Google has adopted for its flagship mobile operating system is just beautiful, and with most of the big-name applications adhering to it you get an experience that's consistent throughout the Android ecosystem. Of course applications that haven't yet updated their design stick out like a sore thumb, something which I can only hope will be a non-issue within a year or so. The lack of additional crapware also means the experience across different system components doesn't vary wildly, something that was definitely noticeable on the Xperia Z and my previous Android devices.
Indeed this is the first Android device I've owned that just works, as opposed to my previous ones which always required a little tinkering here or there to sand off the rough edges of either the vendor's integration bits or the oddities of the Android release of the time. The Nexus 6 with its stock 5.1 experience has required no such tweaking, my only qualm being that newly installed widgets weren't available for use until I rebooted the phone. Apart from that the experience has been seamless, from the initial set up (which, with NFC, was awesomely simple) all the way through my daily use over the last month.
The Nexus line of handsets has always gotten a bad rap for camera quality but, in all honesty, this one seems about on par with my Xperia Z. That shouldn't be surprising since they both came with one of the venerable Exmor chips from Sony, which have a track record of producing high-quality cameras for phones. The Google Camera software layered on top of it, though, is streets ahead of what Sony provided, both in terms of functionality and performance. The HDR mode seems to actually work as advertised, as demonstrated above, being able to extract a lot more detail from a scene than I would've expected from a phone camera. Of course the tiny sensor size still means that low-light performance isn't its strong suit, but I've long since moved past the point in my life where blurry pictures in a club were things I looked on fondly.
Overall I'm very impressed with the Google Nexus 6, as my initial apprehension had me worried I'd end up regretting the purchase. I'm glad to say that's not the case at all; my experience has been nothing short of stellar and has confirmed my suspicion that the only Android experience anyone should have is the stock one. Unfortunately that does limit your range of handsets severely, but it does seem that more manufacturers are coming around to the idea of providing a stock Android experience, opening up the possibility of more handsets with the ideal software powering them. Whilst it might not be as cheap as the Nexus phones before it, the Nexus 6 is most certainly worth the price of admission and I'd have no qualms recommending it to other Android fans.
For some games mods are the lifeblood that keeps them going for many years after their initial release. These mods add in things the developers either didn't think to create or simply wouldn't, elevating the game well past its intended station. Some of these mods even take on a life of their own, with many of the most successful titles of all time being born out of mods, some even creating entire new genres as they rose to stardom. These mods were often born out of the free time and relentless dedication of their creators and provided to gamers worldwide for free. Last week Valve announced a paid mod program for Skyrim, a natural extension of their other paid content programs, which has not been well received and, honestly, I think the community needs to stop drinking the haterade.
The system is pretty simple: modders with mods on the Steam Workshop can now set a price which users pay if they're so inclined. It's not a mandatory system, Steam still supports modders who want to peddle their wares through the system for free, but if you want to charge you can set a price. Looking over the mods that have decided to do so, most of the prices are what you'd expect for apps or cosmetics in other games (and indeed the most popular items are cosmetics) with a few content mods here or there. Of course this may be due to the program still being in its early days and the backlash that's resulted from the announcement, but it's largely in line with what I expected a program like this to generate.
Generally I think this program is a great idea as it gives modders an easy way to monetize their content without resorting to begging for donations or trying something inane like streaming their mod creation over Twitch. Indeed it works much the same way as the app ecosystem does on mobile platforms today: people who want to release a labor of love to the wider world for free can do so using the platform. On the flip side there are those who'd really like to put in a lot of effort but couldn't justify doing so without some kind of compensation, and it's these people that I think this system was designed to attract. Sure you'll get the scammers, plagiarizers and other unwanted people attempting to game the system, but you get that with anything that relies predominantly on user-submitted content so I don't think that's an issue worth discussing.
One thing I do disagree with is the rather unfair revenue distribution the system currently has, with a whopping 75% of the total revenue going to Valve (30%) and Bethesda (45%). This means that for every dollar a mod makes, the vast majority doesn't end up in the hands of its creator, who takes home a measly 25 cents. I think much of the criticism of this system would be far less severe if the share the creators received was much higher, say in the 70% region that's typical of most app store purchases, although I'm unsure whether Valve and Bethesda would be keen to take such a hit. Realistically for both of them it's free money (well, for Bethesda anyway; Valve has to provide the infrastructure) so the hit they take would be small compared to the goodwill they could win from the community.
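To make the maths concrete, here's a quick back-of-envelope sketch of how that split plays out. The 30/45/25 percentages are just the figures reported at launch and may well change, so treat the numbers as illustrative rather than gospel:

```python
def mod_revenue_split(sale_price):
    """Split a mod sale under the reported Valve/Bethesda/creator shares.

    The 30/45/25 split is the figure reported at the program's launch;
    the actual terms could differ per game or change over time.
    """
    valve = sale_price * 0.30
    bethesda = sale_price * 0.45
    creator = sale_price * 0.25
    return valve, bethesda, creator

# A hypothetical $4.00 mod: the creator pockets just a dollar.
valve, bethesda, creator = mod_revenue_split(4.00)
print(f"Valve: ${valve:.2f}, Bethesda: ${bethesda:.2f}, creator: ${creator:.2f}")
# Valve: $1.20, Bethesda: $1.80, creator: $1.00
```

Bumping the creator share to 70% in that function is a one-line change, which is partly why the current split feels so arbitrary.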
The problem I see with most of the outrage is that it assumes a system like this will inevitably split the mod community into haves and have-nots, contrary to the ethos that the modding community holds. Sure, it may attract some unscrupulous individuals, but by and large modders are aware of the communities they've helped develop and the last thing they'd want to do is alienate those who've made them so popular. Indeed if they did, free alternatives are far more likely to rise out of their ashes, providing the same service those mods once did without the paywall. On the flip side, if a mod is really worth it then I'm sure the community would be more than happy to support a modder in the quest to deliver something of value, rather than the modder giving up all semblance of decency and going for a cash grab.
Suffice to say I think the program is a good idea from Valve, it just needs a little more tweaking to make it more fair to the modders and more palatable for the community. I know calling for rationality on the Internet is likely to be met with a blazing wall of silence but paid mods aren’t the devils that many would make them out to be and, if they are, then people will simply not pay for them. Those kinds of modders will quickly realise that this is a community that’s not ripe for exploitation and those who’ve served that community for years, for free in most cases, will continue to reap the benefits of the relationships they’ve created. To think that the opportunity to make money on a platform would somehow ruin that relationship is honestly hurtful to those who’ve put their hearts and souls into these mods and the community should be their advocates rather than their critics.
A game's controls being deliberately obtuse or unintuitive used to be a sign of a poorly designed game. Indeed the whole reason I avoid certain genres, survival horror being one of them, is that their controls are usually designed in a way that breaks your natural motions in order to induce extra challenge or panic. Recently, however, a new genre of games has popped up that thrives on this very idea, making games that use weird control schemes that are highly unintuitive and very specific to that game. These are often coupled with all sorts of weird and wonderful gameplay ideas, ones that really don't fit the mould of any one genre. The latest title to fall under this banner (which I've tentatively termed Frustration Simulators) is I am Bread, a name which should tell you basically everything you need to know about this weird indie experiment.
You are bread. You do not know how it came to be that you are bread but you do know your ultimate purpose: to become toast. For some reason though the person that brought you here doesn't seem interested in supporting your goals, so you must take it upon yourself to become the golden treat you so desire to be. Indeed your quest seems to irritate your captor to no end, with him putting you in all sorts of places where becoming toast is easier said than done. But no matter, there are many ways to become toast and you shall do so until the final slice.
I am Bread feels a lot like the other simulator-esque games that have come before it, like Surgeon Simulator and Goat Simulator. This is almost wholly due to its use of Unity and what I'd guess are stock 3D models for the worlds you'll be flopping yourself through. It's not a bad aesthetic at all, just one that's become the default for titles like this, much like all the Flash games of years past shared a very similar look due to the limitations of the platform they were built on. The flip side is that I am Bread will likely run on anything you throw at it, which will likely be its boon when it comes to phones and tablets later this year.
The goal of I am Bread is simple: you're bread and you want to become toast. Initially this goal is easy to understand: you're on one side of a kitchen and there's a toaster on the other side which you have to get to. You do this by flopping yourself around, which you do by sticking one of your four corners to a surface and swinging around on that point. Whilst that might sound relatively easy, there's an awful lot of obstacles standing in your way and should you hit them your edibility rating will start to plummet. You also have a limited amount of grip so you can't simply stick yourself to the ceiling, Tarzan your way across the room and hope for the best. After the first level, though, making yourself toast isn't as straightforward a challenge as it first was, and this is where you have to get creative.
I am Bread tells you upfront that it's better played with a controller and, after struggling to play the first section with a mouse and keyboard, I couldn't agree more. That's not to say a controller makes the game easy, far from it; it's just slightly more intuitive to control each of the four corners with the shoulder buttons rather than the awkward keys they selected. You'll still have to endure the steep learning curve of controlling your unwieldy piece of bread, but after a while you'll develop strategies to help you traverse sections faster and avoid flopping around helplessly on the ground.
It seems the game's time in Early Access was well spent, as there are mechanics to make the game more palatable if you're struggling to make it past a particular section. Should you become inedible twice, a power-up will appear next to the starting point that gives you unlimited grip and edibility. You can choose not to take it, but if you just want to explore or have fun triggering all the set pieces in the environment it means you don't have to endure the tedium these games are renowned for. Suffice to say I only ever managed to complete a level once without resorting to that buff and, honestly, had it not been there I would've put the game down after 10 minutes.
For me though games like I am Bread have somewhat limited appeal: whilst it's fun to play around with experimental mechanics like this, coupled with the hilarious results of a semi-working physics engine, the novelty starts to wear thin rather quickly. I'm sure many will find a lot to love in chasing high scores or finding new and inventive ways to cook themselves, but I simply can't find the appeal. This is not to say I am Bread is a bad game, far from it; rather, if you're not the kind of person who likes making their own challenge within a game then you'll likely get bored with it after the first level.
I am Bread is a truly unique game that’s hard to find comparisons for as it really is unlike anything else that’s come before it. The control scheme, whilst being frustrating at the best of times, is what distinguishes I am Bread from your garden variety 3D platformer. The main game mechanic and the way it plays is charming, hilarious and satisfying when you finally achieve your ultimate goal. However since the core game doesn’t change much the replayability is incredibly low, meaning that for people like me the game starts to lose its appeal very quickly. Still if you were a fan of other frustration titles like this then you wouldn’t go astray with I am Bread and I’m sure you’d find much more to enjoy in it than I did.
I am Bread is available on PC right now for $12.99. Total play time was approximately 2 hours.
I remember when I travelled to the USA back in 2010 I figured that wifi was ubiquitous enough that I probably wouldn't have to worry about getting a data plan. Back then that was partly true; indeed I was able to do pretty much everything I needed for the first two weeks before Internet on the go became something of a necessity. Thankfully that was easily fixed with a $70 prepaid plan from T-Mobile which had unlimited everything, more than enough to cover the gap. Still, that took a good few hours out of my day to sort out and ever since I've wanted a universal mobile plan that didn't cost me the Earth.
Today Google has announced just that.
Not to be confused with Google's other, similarly named endeavour, Project Fi is a collaboration between Google and numerous cellular providers to give end users a single plan that works for them across 120 countries. Fi-enabled handsets, of which there is currently only one (the Nexus 6), are able to switch between wifi and a multitude of local cellular providers for calls, texts and, most important of all, data. This comes hand in hand with a bunch of other features, like being able to check your voicemails through Google Hangouts, as well as other nifty features like Google Voice. Suffice to say it sounds like a pretty terrific deal and, thankfully, remains so even when you include the pricing.
The base plan will set you back $20, which includes unlimited domestic calls (I'm assuming that means national), unlimited texts to anywhere and access to the wifi and cellular networks that are part of the service. From there you add data onto your plan at a rate of $10 per GB which, whilst not exactly the cheapest around (what I currently get on Telstra for $95 would cost me $120 on Fi), does come with the added benefit of being charged in 100MB increments, so if you don't use all your data cap by the end of the month you don't get charged for it. The benefit here is, of course, that the data works across 120 countries rather than my current one, something I would've made good use of back when I was travelling a lot for work.
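To put the pricing in perspective, here's a rough sketch of how a monthly Fi bill would work out under the announced rates. The pay-only-for-what-you-use-in-100MB-increments behaviour is my reading of the announcement, so take these numbers as illustrative:

```python
import math

def fi_monthly_bill(data_used_mb):
    """Estimate a Project Fi monthly bill in dollars (illustrative only).

    Assumes the announced pricing: a $20 base for unlimited calls and
    texts, plus data at $10/GB where unused data is credited back in
    100MB increments, i.e. you effectively pay $1 per 100MB used.
    """
    BASE = 20           # unlimited calls/texts
    PER_100MB = 1       # $10/GB == $1 per 100MB
    increments = math.ceil(data_used_mb / 100)
    return BASE + increments * PER_100MB

print(fi_monthly_bill(10_000))  # 120 -- a full 10GB, matching my Telstra comparison
print(fi_monthly_bill(2_300))   # 43  -- light usage months cost far less
```

That last case is the real drawcard: on a fixed-cap plan you'd pay the full amount whether you used 2.3GB or 10GB.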
Like many cool services, however, Fi will only be available to US residents to begin with as its coverage map doesn't extend far past the American border. This is most likely due to the first two providers they've partnered with, Sprint and T-Mobile, not having a presence elsewhere. However it looks pretty likely that Google will want to extend this partnership to carriers in other countries, mostly with the aim of reducing the underlying costs of providing data coverage overseas. The real kicker will be seeing who they partner with in some countries, as depending on who they choose the experience could be wildly different, something I'm sure they're keen to avoid.
I don't think I'd make the switch to Google Fi right now even if it was available, at least not until I had a few good reports on how the service compares to the other big providers. To be sure, it'd definitely be something I'd like to have when I'm travelling, especially considering how much more I can get done on my phone compared to when I last spent a good chunk of time abroad. As my everyday provider though I'm not so sure, as the features they're currently offering aren't enough to overcome the almost $30 price differential.
I’m sure that will change with time, however.
It's sometimes hard to remember that smartphones are still a recent phenomenon, with the first devices to be categorised as such being less than a decade old. Sure, there were phones before that which you could call smartphones, but back then they were more an amalgam of a PDA and a phone than a seamless blend of the two. Back then the landscape of handset providers was wildly different, dominated by a single player: Nokia. Their failure to capitalize on the smartphone revolution is a testament to incumbents failing to react to innovative upstarts, and the sale of their handset business to Microsoft an admission of that failure. You can then imagine my surprise that the now much smaller company is eyeing off a return to the smartphone market, as pretty much everyone would agree the horse has long since bolted for Nokia.
The strategy is apparently being born out of the Nokia Technologies arm, the smallest of the three branches that remained after the deal with Microsoft (the other two being its network devices and HERE location divisions). This is the branch that holds Nokia's 10,000 or so patents, so you'd think they'd likely just be resting on their laurels and collecting patent fees until time immemorial. However this section has been somewhat busy, having developed and licensed two products since the Microsoft deal: Z Launcher, an Android launcher, and the N1, a tablet licensed out to another manufacturer to whom they've also lent the Nokia brand name. The expectation is that future Nokia devices will likely follow the N1's model, with Nokia doing most of the design work and then offloading it to someone else to manufacture and ship.
There's no doubt that Nokia had something of a cult following among Windows Phone users, as they provided some of the best handsets for that platform. Their other smartphones had no such following, however, as their pursuit of their own mobile ecosystem made them extremely unappealing to developers who were already split between two major platforms. Had Nokia retained control of the Lumia brand I could see them having a built-in user base for a future smartphone, especially if it came in an Android flavour, but that brand (and everything that backed it) went to Microsoft and so did all the loyalty that went with it. Nokia is essentially starting from scratch here and, unfortunately, that doesn't bode well for the once king of the phone industry.
Coming in at that level you're essentially competing with every other similarly specced handset out there and, to be honest, it's a market that eats up competitors like that without too much hassle. Outsourcing the actual manufacturing and distribution means they don't shoulder a lot of the risk they used to with such designs, but it also means they have little control over the final product that reaches consumers. That being said, the N1 does look like a solid device, though that doesn't necessarily mean future devices will share the same level of quality.
Nokia is going to have to do something to stand out from the pack and, frankly, without their brand loyalty behind them I’m struggling to see what they could do to claw back some of the market share they once had. There are innumerable companies now that have solid handset choices for nearly all sectors of the market and the Nokia brand name just doesn’t carry the weight it once did. If they’re seriously planning a return to the smartphone market they’re going to have to do much more than just make another handset, something which I’m not entirely sure the now slimmed down Nokia is capable of doing.
Space debris is becoming more of an issue as time goes on, with the number of objects in orbit doubling in the last 15 years. Part of that problem is inevitable: the stage-based approach to rocketry, whilst being the most efficient way to transport mass to orbit, unfortunately leaves a considerable amount of mass behind. This, combined with the numerous defunct satellites and other bits of junk, means our lower orbits are littered with objects hurtling through space with enough force to cause significant damage to anything else we put up there. Solving this problem isn't easy, as just picking the debris up is far more complicated than it sounds. Thus researchers have long mulled over ideas to tackle the issue, and scientists working at the RIKEN institute may have come up with a workable solution for some of the most dangerous and hardest-to-remove debris out there.
The idea comes off the back of the Japanese Experiment Module – Extreme Universe Space Observatory (JEM-EUSO) telescope, which is slated to be launched and installed on the International Space Station sometime in 2017. The telescope is designed to use the Earth's atmosphere as a giant detector for energetic particles, which leave a trail of light behind them as they decay. The design of the telescope, which consists of three large lenses directing light onto some 137 photodetector modules, gives it an extremely wide field of view. Whilst this is by design for its primary mission, it also lends itself well to detecting space debris over a large area, something which is advantageous to the ISS, which needs to do everything it can to avoid it.
However that’s only half the solution; the other half is a freaking laser.
Scientists at the RIKEN institute have posited that something like the CAN laser, a fibre based laser that was originally designed for use in particle accelerators, could be used to zap space junk and send it back down to Earth. This kind of approach only works for debris that's centimetres in size; however, those pieces are among some of the most devastating bits of junk due to the difficulty in detecting them. With the JEM-EUSO, though, these bits of debris could be readily identified and, if they're within reach of the laser, heated up so their orbit begins to decay.
The current plan is to develop a proof of concept device that uses a 1/10th scale version of the current JEM-EUSO telescope combined with a 100 fibre laser. Whilst they haven't provided any specifications beyond that, going off their full scale design (10,000 fibres) the concept should be able to deorbit debris up to a kilometre away. The full scale version, on the other hand, would be able to zap space junk at a range of up to 100km, an incredible feat that would dramatically help in cleaning up Earth's orbit. The final stage would be to develop a standalone satellite that could be put into an 800km polar orbit, one of the most cluttered orbits above Earth.
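Taken at face value, those numbers suggest the effective range scales roughly linearly with the fibre count: 100 fibres for about a kilometre, 10,000 fibres for about 100km. A quick sketch of that implied scaling, purely as an illustration of the quoted figures (the linear relationship is my assumption from the two data points, not something RIKEN has confirmed):

```python
# Back-of-envelope extrapolation of the CAN laser's quoted range figures.
# Assumption: effective range scales linearly with fibre count, which is
# what the quoted numbers (100 fibres -> ~1 km, 10,000 fibres -> ~100 km) imply.

def implied_range_km(fibres, ref_fibres=100, ref_range_km=1.0):
    """Extrapolate effective range from the proof-of-concept reference design."""
    return ref_range_km * (fibres / ref_fibres)

print(implied_range_km(100))     # proof of concept: 1.0 km
print(implied_range_km(10_000))  # full scale design: 100.0 km
```

Whether the relationship really is linear over two orders of magnitude is anyone's guess, but it does show how big a jump the full scale version represents over the proof of concept.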
Our strategy for tackling space debris is fast becoming a multi-faceted one, requiring many different methods to deal with the various types of junk we have circling our Earth. Things like this are the kind of approach we'll need going forward, as one launch would be able to eliminate several times its own mass in debris before its useful life is over. It's far from an unsolvable problem, however whatever solutions we develop will need to be put to use soon lest our low orbits become a place that no man can ever venture through again.
For us Australians the reasons behind our high rates of piracy are clear: we want the same things that people overseas are able to get access to, at the same prices they receive them for, yet we are unable to get them. Our situation has been steadily improving over the past couple of years, with many notable international services now being available on our shores, however we're still last on the list for many things, fuelling further piracy. Of course this has prompted all sorts of reactions from rights holder groups hoping to stem the tide of piracy in the misguided hope that it will somehow translate into sales. The latest volley comes in the form of the Copyright Amendment (Online Infringement) Bill 2015 which, yet again, attempts to address the issue in the dumbest way possible.
Essentially the amendment would empower rights holders to get an injunction against Carriage Service Providers (a broader term that encompasses all telecommunications providers) forcing them to block access to a site that either infringes on copyright or enables infringement. The amendment starts out by saying it's prescriptive, however the language used in it is anything but, often painting in broad strokes which could conceivably be construed as applicable to a wide range of sites and services, even VPNs in some cases. Whilst there are provisions in there that are supposed to prevent misuse and abuse, much of it is left up to the discretion of the court, with very little recourse for sites that find themselves blocked as part of it.
To be clear, the legislation targets foreign sites only, but it makes no strict provision for the targeted site to be notified that it is facing an injunction. That's left to the party seeking the injunction, something which I'm sure no rights holder will bother to do. Whilst the law does say it isn't meant to target sites that are mostly based on user generated content, it's clear that the intention is to go after index sites, many of which are primarily based on user submissions. This puts the legislation at odds with the current safe harbour provisions and could see a site blocked because a number of its users submitted things that put it in the realms of "aiding infringement".
Of course whatever blocking method is used will be readily circumvented, as it has always been in the past.
The rhetoric surrounding this amendment is worse still, with the CEO of ARIA saying things like "We've made the content available at a reasonable price [but] piracy hasn't diminished". Funnily enough that's a pretty easy thing to verify (or refute, as the case may be) and last year Spotify did just that, finding that music piracy, in Australia specifically, has been on a downward trend ever since music streaming services came to our shores. Strangely enough Australians aren't a bunch of nasty pirates who will repeatedly pillage the rights holders' pockets, we're just seeking a legitimate service that's priced appropriately. If the rights holders spent as much money on deploying those services here in Australia as they did lobbying for copyright reform they might find their efforts better rewarded, both monetarily and in the form of good will.
Hopefully this amendment gets shot down before it becomes reality as it would do nothing to help the rights holder’s situation and would just be another burden on the Australian court system. It’s been shown time and time again that providing Australians with the same services that are available overseas will reduce piracy rates significantly and that draconian ideas like this do nothing to stem the tide of illegitimate content. The companies that are realising this are the ones that are killing the old media giants and things like this are just the last death throes of an outdated business model that is no longer relevant in today’s digital economy.
The Battlefield series has, for the most part, stuck to its roots of giant war-based combat, which has served it well over the 13 years it has existed. This put it in direct competition with Call of Duty, although DICE favoured a longer development and release cycle, with their games usually on a 2+ year cycle and various expansions and DLC peppered in between. For many it served as the more refined version of Call of Duty, favouring tactics and skill rather than fast action and twitch reflexes. Battlefield Hardline, developed by Visceral rather than DICE, marks the first real departure from the Battlefield formula and, whilst parts of what made the series great can be seen in here, the game unfortunately leaves a lot to be desired.
Miami has gone to hell, the streets flooded with drugs and gang warfare escalating to all new heights. You are Nicholas Mendoza, a newly minted detective in the Miami PD who's looking to clean up Miami through good, honest police work. However it doesn't take long for things to start going awry, with your first bust turning into a bloodbath and questions starting to arise around your methods. Indeed the more you try to stop the plague that's spreading through Miami the more you seem to be drawn into it, with your fellow cops being the ones dragging you in.
Despite using the same Frostbite 3 engine as its predecessors, Battlefield Hardline doesn't feel like a massive step up graphically; indeed it actually feels like it's gone backwards in some respects. Whilst we still have the wide open environments that are a signature of the Battlefield franchise, they just don't feel as visually impressive as they used to, even with the enormous amount of grunt that my new rig can provide. Looking over my screenshots from previous reviews confirms this, showing that the engine is capable of quite a bit more than what Hardline seems to make use of. It makes even less sense when you consider that this isn't Visceral's first experience with the Frostbite engine either, so I can only assume that the reduction in fidelity was done for optimization reasons.
Hardline plays much like Battlefield 4 did before it, retaining many of the core mechanics whilst adding in a few new tricks that tie into the police theme. You'll still be running and gunning quite often, although in slightly smaller environments than you'd be used to, and the stealth mechanic that appeared in Battlefield 4 makes a return in Hardline. However, instead of getting points to level up your character by killing people, you now only level up by taking people down non-lethally or arresting them, something you can accomplish by telling them to "freeze" and then tackling them to the ground. There are also bonus objectives like warrant suspects (who give quadruple score for arrests) and cases for you to investigate by finding evidence and completing additional objectives. The multiplayer introduces a bunch of new modes which are mostly variants of the standard game styles we all know and love, although it seems everyone is really still only interested in the big, 32 on 32 conquest maps.
The FPS combat in Hardline feels a little unpolished, as all the guns within their own categories feel pretty much the same as one another. Indeed, once you get a few levels under your belt and unlock a couple of guns there's really no need to switch to anything else, and the game rarely pits you against enemies with new and interesting guns, meaning you'll have to level up or complete case missions in order to add in some variety. Couple this with the absolutely dumb-as-rocks AI and you've got an FPS experience that's highly forgettable, even in the scenes which feel like they're supposed to be action packed but just end up feeling bland.
This is only exacerbated by the repetition that's introduced by the arrest mechanic, which you're required to use if you want to level up your character. Sure it's pretty fun to work out the best way to approach a section so you can arrest everyone in it, but after you've done that a dozen times it starts to lose its lustre. Thankfully you don't have to do that for long, as I was able to reach max rank somewhere around episode 7 or so, but even the freedom granted by being able to run and gun everything past that point didn't add any life back into Hardline's combat. This is what made it incredibly easy to put the game down at the end of each "episode", as playing more than one in a night was a recipe for frustration and boredom.
The story is somewhat serviceable in comparison to the rest of the game, with most of the characters being given enough screentime and background to be believable even if the situations you find them in are wholly unbelievable. I couldn’t find myself empathizing with any of the characters though, even the main protagonist, as they didn’t really feel relatable until right near the end. Even then it felt like too little too late, even if I had enough information to understand the decisions they were making. The ending might not scream sequel but it’s definitely hinting at it, raising its eyebrows suggestively and giving you a sly wink as you walk out the door.
I've only spent a brief few hours with the multiplayer (for reasons I'll dig into below) but the horror that is Battlelog has made yet another return for Hardline and the issues surrounding it still remain. For the most part it seems like the community has little interest in the new game modes, as servers that cater towards them are barren wastelands, devoid of players wanting to play them. Instead the vast majority have huddled around the safe place of Battlefield's large scale warfare maps, something that feels quite at odds with the game's more intimate setting and direction. Suffice to say it pretty much plays how you'd expect it to, with the key difference being that you generate cash to buy new weapons and perks rather than having to unlock them by levelling a class. It takes the edge off the levelling curve but doesn't do much else.
The icing on this rather unappealing cake comes in the form of bugs, glitches and good old fashioned crashes that seem to be a mainstay of all Battlefield releases. I had the single player crash on me multiple times, often when I wasn't doing anything of particular note at all. Battlelog simply refused to recognise that I had Origin installed until I reinstalled it, something I seem to have to do with every Battlefield release. Then when I did try to play some multiplayer games the game would often just up and exit without any notification of what happened, sometimes in the middle of a game and other times while I was sitting through the mandatory 5 minute wait as the game reloaded itself yet again. I honestly cannot understand why, after 2 previous releases that suffered the exact same issues, DICE and Visceral couldn't work out these problems before release; it's not something I'd expect from a veteran AAA developer.
Battlefield Hardline is an unfortunate fall from grace for the series, trashing the things that made it great and failing to add in anything that could justify taking such a huge risk. The gameplay is bland and uninteresting, failing to capture the player's attention even for the short duration of the episodes in the single player game. The changes to the multiplayer are completely out of line with what the community wants, as shown by the fact that the only playable servers are those that emulate the previous titles' play style. Topping it all off is the instability and lack of polish in the core game itself, with crashes and bugs plaguing the already beleaguered experience. I honestly can't recommend this game even to die hard fans of the series, as it just falls so far short of the standard its predecessors set.
Battlefield Hardline is available on PC, Xbox360, XboxOne, PlayStation3 and PlayStation4 right now for $59.95, $89.95, $109.95, $89.95 and $109.95 respectively. Game was played on the PC with approximately 9 hours of total play time.