Nokia was once the king of phones, the handsets that everyone wanted. For many it was because they made a solid handset that did what it needed to do: make calls and send text messages. Their demise came from their inability to adapt to the rapid pace of innovation spurred on by Apple and Google; their offerings in the smartphone space came too late and their customers left for greener pastures. The result was that their handset manufacturing business was offloaded to Microsoft, but a small part of Nokia remained independent, one that held all the patents along with the research and development arm. It seems that part of Nokia is looking to take things in crazy new directions, with its first product being the Ozo, a 360-degree virtual reality video camera.
Whilst Nokia isn’t flooding the newswires with details just yet, we do know that the Ozo is a small spherical device incorporating 8 cameras and 8 microphones, able to capture video and sound from any angle. It’s most certainly not the first camera of its kind, with numerous competitors already having products available in this space, but it is one of the better looking offerings out there. As for how it’d fare against that competition, that’s something we’ll have to wait and see as the first peek at Ozo footage is slated to come just over a week from now.
At the same time Nokia has taken to the Tongal platform, a website that allows brands like Nokia to coax filmmakers into doing stuff for them, to garner proposals for videos that will demonstrate the “awesomeness” of the Ozo platform. To entice people to participate there’s a total of $42,000 and free Ozo cameras up for grabs for two lucky filmmakers, something which is sure to attract a few to the platform. Whether that’s enough to make them the platform of choice for VR filmmakers though is another question, one I’m not entirely sure that Nokia will like the answer to.
You see whilst VR video has taken off of late, due to YouTube’s support of the technology, it’s really just a curiosity at this point. The current technology effectively bars it from making its way into cinemas, due to the fact that you’d need to strap an Oculus Rift or equivalent to your head to experience it. It’s thus currently limited in appeal to tech demos, 3D renderings and a smattering of indie projects, so the market for such a device seems pretty small, especially when you consider there are already a few players selling products in this space. So whilst Nokia’s latest device may be a refreshing change for the once king of phones I’m not sure it’ll become much more than a hobby for the company.
Maybe that’s all Nokia is looking for here, throwing a wild idea out to the public to see what they’d make of it. Nokia wasn’t exactly known for its innovation once the smartphone revolution began but perhaps they’re looking to change that perception with the Ozo. I’m not entirely convinced it will work out for them, anyone can throw together a slick website with great press shots, but the reaction from the wider press seems to indicate that they’re excited about the potential this might bring.
The never-ending quest to satisfy Moore’s Law means that we’re always looking for ways to make computers faster and cheaper. Primarily this focuses on the brain of the computer, the Central Processing Unit (CPU), which in most modern computers is now home to transistors numbering in the billions. All the other components haven’t been resting on their laurels however, as shown by the radical improvement in speeds from things like Solid State Drives (SSDs), high-speed interconnects and graphics cards that are just as jam-packed with transistors as any CPU. One aspect that’s been relatively stagnant however has been RAM which, whilst increasing in speed and density, has only seen iterative improvements since the introduction of the first Double Data Rate (DDR) standard. Today Intel and Micron have announced 3D Xpoint, a new technology that sits somewhere between DRAM and NAND in terms of speed.
Details on the underlying technology are a little scant at the moment, however what we do know is that instead of storing information by trapping electrons, like current NAND flash does, 3D Xpoint (pronounced “cross point”) stores bits via a change in resistance of the memory material. If you’re like me you’d probably think this was some kind of phase change memory, however Intel has stated that it’s not. What they have told us is that the technology uses a lattice structure which doesn’t require transistors to read and write cells, allowing them to dramatically increase the density, up to 128Gb per die. This also comes with the added benefit of being much faster than the current NAND technologies that power SSDs, although slightly slower than current DRAM, albeit with the added advantage of being non-volatile.
Unlike most new memory technologies, which often purport to replace one type of memory or another, Intel and Micron are positioning 3D Xpoint as an addition to the current architecture. Essentially your computer has several types of memory, each used for a specific purpose. There’s cache memory directly on the CPU which is incredibly fast but very expensive, so there’s only a small amount of it. The second type is RAM, which is still fast but can be had in greater amounts. The last is your long term storage, either in the form of spinning rust hard drives or an SSD. 3D Xpoint would sit between the last two, providing a kind of high speed cache that could hold onto often used data before it’s persisted to disk. Funnily enough the idea isn’t that novel, things like the Xbox One use a similar tiered architecture, so there’s every chance it might end up happening.
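The tiered idea above can be sketched as a simple lookup chain, fastest tier first. This is purely illustrative Python: the tier names and latency figures are made up for the example, not Intel or Micron numbers.

```python
# Illustrative sketch of a tiered memory hierarchy with an Xpoint-style
# tier between RAM and disk. Latencies are hypothetical relative units.
TIERS = [
    ("cpu_cache", 1),        # on-die cache: tiny but fastest
    ("dram", 100),           # main memory: volatile, larger
    ("xpoint_cache", 1000),  # non-volatile middle tier, larger still
    ("disk", 100000),        # SSD/HDD: persistent long-term storage
]

def lookup(key, tier_contents):
    """Walk the tiers fastest-first; return (tier, total_cost) on a hit."""
    cost = 0
    for name, latency in TIERS:
        cost += latency
        if key in tier_contents.get(name, set()):
            return name, cost
    return None, cost

# A hot page cached in the middle tier avoids the full disk penalty.
contents = {"xpoint_cache": {"hot_page"}, "disk": {"hot_page", "cold_page"}}
print(lookup("hot_page", contents))
print(lookup("cold_page", contents))
```

The point of the middle tier is visible in the costs: the hot page is served two orders of magnitude faster than the cold one that falls through to disk.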
The reason why this is exciting is that Intel and Micron are already going into production with these new chips, opening up the possibility of a commercial product hitting our shelves in the very near future. Whilst integrating it in the way they’ve described in the press release would take much longer, due to the change in architecture, there’s a lot of potential for a new breed of SSDs based on this technology. They might be an order of magnitude more expensive than current SSDs, however there are applications where you can’t have too much speed and for those 3D Xpoint could be a welcome addition to the storage stack.
Considering the numerous technological announcements we’ve seen from other large vendors that haven’t amounted to much it’s refreshing to see something that could be hitting the market in short order. Whilst Intel and Micron are still being mum on the details I’m sure that the next few months will see more information make its way to us, hopefully closely followed by demonstrator products. I’m very interested to see what kind of tech is powering the underlying cells as a non-phase change, resistance based memory is something that would be truly novel and, once production hits at-scale levels, could fuel another revolution akin to the one we saw with SSDs all those years ago. Needless to say I’m definitely excited to see where this is heading and I hope Intel and Micron keep us in the loop with the new developments.
Ah Razer and OUYA, two companies I once liked and respected who have both done something to draw my ire. For Razer it was their shameless price gouging tactics against Australian customers, something which they continue to this day. OUYA simply burned its way through all the goodwill I had towards them, ultimately delivering an unfinished product late, one that has since lumbered along in a kind of zombie state. There had been rumours that OUYA had been courting Razer for a while now, hoping to find a buyer, but like all acquisition talks both sides were rather mum on the details. Today Razer has announced that they will acquire OUYA and use them to bolster their own efforts in this space.
The deal is said to be an “all cash” acquisition, meaning that Razer has used its own cash reserves to pay off all of OUYA’s investors. This pegs the asking price at somewhere around the $33 million mark, which sounds like a lot, however Razer was recently valued somewhere in the order of $1 billion, so an acquisition of this size won’t put much of a strain on their purse strings. Still I, and many others, really didn’t see how OUYA could fit into Razer’s business model which, for the most part, is centered around gaming peripherals more than platforms. As it turns out Razer may be looking to OUYA to fix its Forge platform which, funnily enough, is encountering many of the same issues that OUYA struggled with.
As part of the acquisition deal Razer will take on the software branch of OUYA but will drop the hardware business. This makes sense since Razer is, in essence, already selling a competing platform but also because the OUYA in its current state is heavily outdated and unlikely to provide much value as another product line. The OUYA store will be integrated into Razer’s Cortex platform, along with the 200,000 user accounts and all the games currently published on the platform. The OUYA brand name will remain but will transition to focus on becoming a publisher for the Cortex platform more than anything else. Overall it seems like a great outcome for OUYA but I’m not convinced that it’ll do much for Razer.
The Razer Forge’s launch was, to be blunt, a complete disaster as the console proved to be buggy and largely unusable on launch day. Sure the base functionality seemed to work fine, however that’s not something unique to the Razer Forge and indeed other products provide that at a much more reasonable price. Things get worse when you compare it to admittedly slightly more expensive options, like the NVIDIA Shield, which received universal praise for the quality of all aspects of the product. Whilst the OUYA team might be able to help fix these problems I feel like they’re already several steps behind the competition and throwing more bodies at the problem isn’t going to solve it.
It may be my dislike for both these companies speaking through but in all honesty the only people who won out in this deal were the investors, who were likely staring down the barrel of a soon to be bankrupt company. The Android micro-console market just doesn’t have the legs that everyone hoped it would and the market is already saturated with dozens of other devices that do a multitude of things beyond just playing games. I will be very surprised if Razer manages to make their Forge TV anything more than it currently is, even with the supposed expertise of the OUYA team behind them.
In terms of broadband Australia doesn’t fare too well, ranking somewhere around 58th in terms of speed whilst being among the most expensive, both in real dollar terms as well as in dollars per advertised megabit. The original FTTP NBN would’ve elevated us out of the Internet doldrums, however the switch to the MTM solution has severely dampened any hopes we had of achieving that goal. However if you were to ask our current communications minister, the esteemed Malcolm Turnbull, what he thought about the current situation he’d refer you to a report that states we need to keep broadband costs high in order for the NBN to be feasible. Just like with most things that he and his department have said about the NBN this is completely incorrect and is nothing more than pandering to the current incumbent telcos.
The argument in the submission centers around the idea that if current broadband prices are too cheap then customers won’t be compelled to switch over to the new, obviously vastly more expensive, NBN. The submission notes that even a 10% reduction in current broadband prices would cause this to happen, something which could occur if Telstra was forced to drop its wholesale prices. A quick look over the history of the NBN and broadband prices in Australia doesn’t support the narrative they’re putting forward however, mostly because the price gap they claim would be so damaging already exists in Australia.
You see if you take into consideration current NBN plan pricing the discrepancies are already there, even when you go for the same download speeds. A quick look at iiNet’s pricing shows that your bog standard ADSL2+ connection with a decent amount of downloads will cost you about $50/month whereas the equivalent NBN plan runs about $75/month. Decreasing the ADSL2+ plan by 10%, a whopping $5, isn’t going to change much when there’s already a $25/month price differential between the two. Indeed if people only chose the cheaper option then we should’ve seen that in the adoption rates of the original NBN, correct?
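The back-of-the-envelope numbers above are easy to check. The prices below are the illustrative figures from the comparison, not current quotes:

```python
# Hypothetical plan prices from the comparison above (AUD/month).
adsl2_price = 50.0
nbn_price = 75.0

# The 10% ADSL2+ price cut the submission worries about.
discounted_adsl2 = adsl2_price * 0.90  # $45/month

# The gap that already exists vs the gap after the feared discount.
existing_gap = nbn_price - adsl2_price             # $25/month
gap_after_discount = nbn_price - discounted_adsl2  # $30/month

print(existing_gap, gap_after_discount)
```

In other words the discount widens an already substantial gap by a mere $5/month, which is hardly the tipping point the submission makes it out to be.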
However as the adoption rates have shown Australians are ready, willing and able to pay a premium for better Internet services and have been doing so for years with the original FTTP NBN. The fact of the matter is that whilst ADSL2+ may advertise NBN level speeds it almost always delivers far less, with most customers only getting a fraction of the speed they’re promised. The FTTP NBN on the other hand delivers exactly the speeds it advertises and thus its value proposition is much greater than its ADSL2+ equivalent. The MTM NBN won’t have this capability unfortunately, due to its use of FTTN technology which simply can’t make the same promises about speed.
It’s things like this that do nothing to endear the Liberal party to the technical vote as it’s so easy to see through the thin veil of political posturing and rhetoric. The facts on this matter are clear: Australians want better broadband and they’re willing to pay for it. Having cheaper options isn’t going to affect this; instead it will provide the opportunity for those who are currently locked out of the broadband market to get into it. Then those of us who have a need for faster Internet connections will happily pay the premium knowing full well that we’ll get the speeds that are advertised rather than a fraction of them. The sooner the Liberal party wakes up and realises things like this the better, but I’m not holding out any hope that they will.
Before the days of ubiquitous broadband many of us would have to wait until the monthly LAN event to get our fix of multiplayer gaming. Of course this was also the time when the vast majority of games didn’t include some form of multiplayer so the long time between drinks was easy enough to handle. However since then the inclusion of some form of multiplayer in many games has diluted experiences that used to be specifically crafted for that purpose. Indeed the rarest kind of multiplayer game, the one where you and a bunch of mates would crowd around a TV to play, has shrunk down into a very specific niche. However there are still titles that come out every once in a while that exemplify that multiplayer-first experience of years gone by and Rocket League is one of them.
Rocket League is the sequel to the 2008 game Supersonic Acrobatic Rocket-Powered Battle Cars which was met with a rather lukewarm reception upon its release. Being a PlayStation 3-only game, one with niche appeal at best, it’s easy to see why. Rocket League takes the same basic idea, rocket powered cars playing soccer, modernizes it slightly and packages it up for today’s gaming market. The reception this time around has been far more welcoming, but looking back through videos of game play from its predecessor it’s hard to see the differences that make Rocket League that much more appealing. Still the buzz was more than enough to convince me that it was worth a look in.
At first glance you’d be forgiven for thinking that Rocket League runs on something other than the Unreal 3 engine as it manages to do a lot with so little. Psyonix, the developer of Rocket League and its predecessor, has something of a history with the engine, having co-developed Unreal Tournament 2004. That experience has definitely come in handy as Rocket League looks great and runs fabulously, admittedly on my incredibly over-specced PC. One thing I did note however is that it doesn’t look anywhere near as good on the PlayStation 4 as it does on the PC, even when running at the same resolution. That’s just my anecdotal experience however and I’m sure it’d look great on a much larger screen (I run my PlayStation 4 through an HDMI capture card to my main PC monitor).
At a basic level Rocket League is a simple game: get the ball into the opposing team’s goal. However instead of human players you’re driving around in what looks like an overgrown remote control car, one outfitted with a rocket boost system that allows you to reach incredible speeds and heights. You’ll start out by driving around on the ground, attempting to crash and bash your way through your opponents in order to score a goal. After a while though you’ll start to get a little trickier, flying through the air to intercept the ball and bringing it crashing back down to earth with incredible speed. Of course no multiplayer game is complete without a treasure trove of cosmetics behind it, allowing you to customize the look of your little racer however you see fit. For the ultra-competitive amongst us there’s ranked matchmaking and an inbuilt league system, allowing you to set up tournaments for your friends and foes alike. Taken as a whole it’s got the makings of a game with aspirations to be an eSports contender, although whether it will become one is up to the community at large.
All matches have a 5 minute timer on them meaning that, for the most part, a full game won’t take you much longer than that to complete. Of course if one team is dominating the other, and the losing team doesn’t skip the goal replays (which seems to happen all the time), it’ll take a lot longer than that as you watch every goal repeated for 30 seconds. Still it’s not the kind of game where you’ll start a match an hour before you have to do something and then find yourself running short on time. For the most part you’ll spend that time fervently chasing the ball around the court, trying to pry it off your opponents and ramming everyone enthusiastically. You’ll get points (which are just used to determine the MVP of the game) for doing things like scoring and stopping goals, a good way to encourage you to actually play properly rather than like a ball obsessed puppy.
At the start the matches are chaotic and fun, with everyone racing around everywhere trying their best to wrangle their car to hit the ball in the right direction. However it didn’t take long for the matchmaking system to break down somewhat, often pairing me with opponents who far exceeded my (and my team-mates’) abilities. This is partially due to me being a little late to the party, coming into it almost 3 weeks after its initial release, however a good matchmaking system would ensure that, for any given match, we had a 50/50 chance of winning. So now, with the initial wave of players starting to dwindle, the people left behind are the ones who are more than a couple of steps above rookies like myself. It’s a challenge that faces any multiplayer game with aspirations of running for years past its original release date and unfortunately one that doesn’t have a great solution. Rocket League will still be a blast with friends, you can pick up the core mechanics in 10 minutes, but the online may end up being just as difficult to crack as other long term multiplayer games.
Whilst I didn’t get enough time on the PlayStation 4 version to comment on how stable it is (although tales of PlayStation 4s overheating while playing it don’t bode particularly well in my book) Rocket League on PC is mostly stable during regular play. However I had numerous, inexplicable crashes to desktop that seemed to occur at random during a game. Sometimes it was during the initial part of the match where I was revving my engine, others whilst a bunch of us crashed into each other. Looking through the Steam folder I can’t seem to locate any crash dumps or debug logs so I can’t comment as to what’s causing it, but it’s definitely an issue that I’ve yet to see a resolution for.
Rocket League demonstrates that sequels can outshine their predecessors as it took an idea that was met with lukewarm reception and turned it into the game everyone is talking about. The core game play is fun and frantic, made even better when you throw a few friends into the mix. The online multiplayer works well for the most part however newcomers might be greeted by a wall of players who are far more skilled than they are. Still that doesn’t detract from the fact that playing this game with a bunch of mates is tons of fun, something that will keep it alive for many years to come.
Rocket League is available on PlayStation 4 and PC right now for $19.99 (currently free for PlayStationPlus subscribers). Game was played on both platforms with a combined playtime of approximately 3 hours with 31% of the achievements unlocked on the PC.
Striking out on your own is a risky proposition. Having an idea is one thing, we all have an idea we’re sure would make us rich, but turning that idea into a reality is something that takes time, dedication and, above all, resources. The problem many face is that last item, as without the lifeblood of any company, money, you’ll struggle to get the resources you require to bring your idea to fruition. Ask other entrepreneurs however and they’ll tell you quite a different story, about bootstrapping and minimum viable products and other jargon, but the fact of the matter is that access to money is the key determining factor in whether an entrepreneur succeeds or fails.
Whilst this might seem like an obvious point to make it points to a more troubling conundrum: that kind of opportunity, striking out on your own to create a sustainable business, is not available to everyone. For a great number of people leaving their current place of employment to pursue an idea is simply untenable as they don’t have the capital reserves, or the connections to get said capital, to work on that idea exclusively. Consequently the idea that anyone can be an entrepreneur if they want to is unfortunately a flawed proposition, but there is a solution which has been proven to work in the past.
For five years, between 1974 and 1979, the small Canadian town of Dauphin, Manitoba conducted an experiment whereby those who weren’t earning a liveable wage were sent a cheque that brought them up to that level. Essentially that meant that everyone living in the town was guaranteed to make enough to keep a roof over their heads and feed their family, regardless of any contributing factors. Similar programs had been run elsewhere in the past, however Manitoba’s project, dubbed Mincome, was special in that it didn’t exclude anyone. Thus for the entire duration of the program poverty was eliminated, however when it came to an end in 1979 the incoming government failed to release a report on the outcomes.
However we can infer from other data sources, like the census, the effects that such a program had on the residents. As the article I linked to discusses in much greater detail the benefits were quite clear, including flow-on effects like hospitalization rates falling. The key takeaway though was that, whilst many would say that a universal basic income would lead to people not wanting to work, the Mincome project did not show that at all. Indeed it’s my belief that if such a program was adopted at a national level you’d likely see a tremendous increase in the number of small businesses and startups being created, spurring a new wave of innovation.
There are many capable people who’d love nothing more than to develop the ideas that they’re passionate about but the problem is current safety nets aren’t geared towards supporting them. Australian programs like NEIS provide only temporary aid and quite often not enough to cover all the costs that are incurred when trying to establish a business. Replacing that (and most other) welfare programs with a universal basic income would provide the safety net that many require to pursue these ideas allowing programs like NEIS to focus more on mentorship and guidance rather than financial assistance.
Of course whilst a universal basic income would provide the basis upon which many could build their futures it’s not the only thing required to elevate everyone out of poverty. Still programs of this nature have proven effective in the past and have far lower overheads than current welfare schemes do. Coupling this with other ideas like Labor’s Future Tech policy has the potential to spur a massive wave of innovation in Australia, making it far more attractive to pursue radical ideas here than overseas. At the very least it’s an idea worth trialling as I’m sure the benefits would far outweigh the small cost it would incur.
Everyone is familiar with the traditional bar magnet, usually painted in red and blue denoting the north and south poles respectively. You’re also likely familiar with their behaviour: put opposite poles next to each other and they’ll attract, but put the same poles next to each other and they repel. If you’ve taken this one step further and played around with iron filings (or, if you’re really lucky, a ferrofluid) you’ll be familiar with the magnetic field lines that magnets generate, giving you some insight into why magnets function the way they do. What you’re not likely familiar with is magnets that have had their polarity printed onto them, which results in some incredible behaviour.
The demonstrations they have with various programmed magnets are incredibly impressive as they exhibit behaviour you wouldn’t expect from a traditional magnet. Whilst some of the applications they talk about seem a little pie in the sky at their current scale (like the frictionless gears, since the amount of torque they could handle is directly proportional to field strength) a lot of the others would appear to have immediate commercial applications. The locking magnets for instance seem like they’d be a great solution for electronic locks, although maybe not for your front door just yet.
What I’d be interested to see is how scalable their process is and whether or not that same programmability could be applied to electromagnets as well. The small demonstrator magnets that they have show what the technology is capable of doing however there are numerous applications that would require much bigger and bulkier versions of them. Similarly electromagnets, which are widely used for all manner of things, could benefit greatly from programmed magnetic fields. With the fundamentals worked out though I’m sure this is just an engineering challenge and that’s the easy part, right?
Since its inception back in 1960 the Search for Extraterrestrial Intelligence (SETI) has scanned our skies looking for clues of intelligent life elsewhere in our universe. As you might have already guessed the search has yet to bear any fruit since, as far as we’re concerned, no one has been sending signals to us, at least not in the way we’re listening for them. The various programs that make up the greater SETI aren’t particularly well funded however, often only getting a couple hours at a time on any one radio telescope on which to make their observations. That’s all set to change however as Russian business magnate Yuri Milner is going to inject an incredible $100 million into the program over 10 years.
SETI, for the unaware, is a number of different projects and experiments all designed to seek out extraterrestrial life through various means. Traditionally this has been done by scanning the sky for radio waves, looking for signals that are artificial in nature. Whilst the search has yet to find anything that would point towards a signal of intelligent origin there have been numerous other signals found which, upon further investigation, have turned out to have natural sources. Other SETI programs have utilized optical telescopes to search for laser based communications, something which we have actually begun investigating here on Earth recently. There are also numerous other, more niche programs under the SETI umbrella (like those looking for Dyson Spheres or other mega engineering projects) but they all share the common goal of answering the same question: are we alone?
Since these programs don’t strictly advance science in any particular field they’re not well funded at all, often only getting a handful of hours on telescopes per year. This means that, even though such a search is likely to prove difficult and fruitless for quite a long time, we’re really only looking during a small fraction of the year. The new funds from Yuri Milner will bolster the observation time substantially, allowing for continuous observations over extended periods. This will both increase the chances of finding something whilst also providing troves of data that will be useful for other scientific research.
As Yuri says whilst we’re not expecting this increased funding to instantly result in a detection event the processes we’ll develop along the way, as well as the data we gather, will teach us a lot about the search itself. The more we try the more we’ll understand what methods haven’t proved fruitful, narrowing down the possible search areas for us to investigate. The science fiction fan in me still hopes that we’ll find something, just a skerrick, that shows there’s some other life out there. I know we won’t likely find anything for decades, maybe centuries, but that hope of finding something out there is what’s driving this program forward.
Left to their own devices many home PC users will defer installing updates for as long as humanly possible, some even turning off the auto-updating system completely in order to get rid of those annoying pop ups. Of course this means that patches, which are routinely released within days of a vulnerability being discovered, often go uninstalled. This leaves many users unnecessarily vulnerable to security breaches, something which could be avoided if they just installed the updates once in a while. With Windows 10 it now seems that most users won’t have a choice: they’ll be getting all Microsoft updates regardless of whether they want them or not.
Currently you have a multitude of options to select from when you configure Windows updates. The default setting is to let Windows decide when to download, install and reboot your computer as necessary. The second does all the same except it lets you choose when to reboot, useful if you don’t leave your computer on constantly or don’t like it rebooting at random. The third option is essentially just a notification option that will tell you when updates are available, but it’ll be up to you to choose which ones to download and install. The last is, of course, to completely disable the service, something which not many IT professionals would recommend you do.
Windows 10 narrows this down to just the first two options for Home version users, removing the option for them to not install updates if they don’t want to. Nor is this limited to a specific set of updates (like, say, security): feature updates as well as things such as drivers could potentially find their way into this mandatory system. Users of the Pro version of Windows 10 will have the option to defer feature updates for up to 8 months (called Current Branch for Business), however past that point they’ll be cut off from security updates, something which I’m sure none of them want. The only version of Windows 10 that will have long term deferral for feature updates will be the Enterprise version, which can elect to only receive security updates between major Windows releases.
Predictably this has drawn the ire of many IT professionals and consumers alike, mostly due to the inclusion of feature updates in the mandatory update scheme. Few would argue that mandatory security updates are a bad thing, indeed upon first hearing about this that’s what I thought it would be, however lumping in Windows feature updates alongside them makes for a much less palatable affair. Keen observers have pointed out that this is likely due to Microsoft attempting to mold Windows into an as-a-service offering alongside their current products like Office 365. For products like that continuous (and mandatory) updates aren’t so much of a problem, since they’re vetted against a single platform; for home users, with the numerous hardware and software variables at play, it’s a little more problematic.
Given that Windows 10 is slated to go out to the general public in just over a week it’s unlikely that Microsoft will be drastically changing this position anytime soon. For some this might be another reason to avoid upgrading to the next version of Windows, although I’m sure the lure of a free upgrade will be hard to ignore. For businesses it’s somewhat less of an issue as they still have the freedom to update how they please. Microsoft has shown, however, that they’re intent on listening to their consumer base, and should there be enough outrage about this then there’s every chance they’ll change their position. This won’t be stopping me from upgrading, of course, but I’m one of those people who has access to any version I may want.
Not everyone is in as fortunate a position as I am.
Time waster style games were once the bastion of Flash games hosted on sites like Newgrounds. Since the introduction of smartphones they’ve slowly transitioned away from the web and instead found a comfortable home on everyone’s mobile device. Thus it seems kind of odd these days to play a time waster style game on the PC, as it’s no longer the platform of choice for this genre. Still, when deciding whether I should get Hook on my mobile or my PC I opted for the latter, if only because I rarely find time to play games on my mobile these days. Interestingly though Hook seems simple enough that it can service both platforms without needing to make any concessions on either.
Hook has a very simple premise: you have to pull all the wires back without any of them colliding with each other. You do this by pushing a trigger that initiates the pulling and, if you’ve done everything in the correct order, it’ll slide all the way back. Other than that there’s not a whole lot more to speak of and the base game comes with a grand total of 50 levels to make your way through. If you’re a power gamer this won’t take you much longer than an hour to accomplish, although I’m sure if you got this on mobile you could stretch that play time out over the course of weeks if you were so inclined.
Hook, like many other minimalistic puzzlers, has a very clean and simple aesthetic. I’m sure part of this was an artistic choice but later on it becomes obvious that the lack of distinction between visual elements is actually a key part of the gameplay. The background music is similarly simplistic, swelling and fading as you solve puzzles or make a mistake that resets the level. I’m sure some would like the option to change the colour palette but in all honesty I don’t think I’d bother.
As I described before, the mechanics of Hook are pretty simple: pull all the wires back without any of them colliding with each other. The puzzles start out easy, literally just clicking any of the buttons in any order will solve them, but after that new mechanics start getting dropped in every 10 puzzles or so to spice things up a bit. Most of these additional mechanics come in the form of ways to block off paths, however there’s also a few that break the line, forcing you to retrace the paths. It would be easy enough to brute force the puzzles, except if you make one mistake (or 3 in the later ones) the puzzle resets, forcing you to restart from the beginning.
There’s a pretty simple algorithm you can use to beat every one of the puzzles contained within this game, although executing it may be a little easier said than done. What you first need to do is find the line that can be moved first, usually one without anything blocking it. Then you need to block off all other paths so that only it gets moved. From there it’s simply an iterative process to eliminate the rest of them. Using this process I was easily able to breeze through all 50 puzzles in just over an hour, a feat many other reviewers have managed as well. This is probably one of those games that could benefit immensely from a level editor and Steam Workshop integration, as I’m sure the community would be able to come up with an endless supply of puzzles that would be orders of magnitude more difficult than the default set.
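The strategy above is essentially a greedy elimination: repeatedly find a wire nothing is blocking, retract it, and see what that frees up. A minimal sketch of that idea, assuming a deliberately simplified model of a Hook puzzle (each wire is blocked by a set of other wires, and retracting a wire unblocks whatever it was covering — the actual game’s internals are obviously not this):

```python
def solve(blockers):
    """blockers maps each wire to the set of wires currently blocking it.

    Returns an order in which every wire can be safely retracted,
    or None if the puzzle (as modelled here) has no solution.
    """
    remaining = {wire: set(blocked_by) for wire, blocked_by in blockers.items()}
    order = []
    while remaining:
        # Step 1: find a wire with nothing blocking it.
        free = next((w for w, b in remaining.items() if not b), None)
        if free is None:
            return None  # every wire is blocked by another: deadlock
        # Step 2: retract it, unblocking the wires it was covering.
        order.append(free)
        del remaining[free]
        for blocked_by in remaining.values():
            blocked_by.discard(free)
    return order

# Three wires: A blocks B, B blocks C, so the only valid order is A, B, C.
print(solve({"A": set(), "B": {"A"}, "C": {"B"}}))  # ['A', 'B', 'C']
```

Anyone familiar with graph theory will recognise this as a topological sort, which is part of why the puzzles yield so quickly once you spot the pattern.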
Hook is a great little puzzler with a unique mechanic. The puzzles, whilst not especially challenging, are rewarding enough that I felt compelled to blast through them all in one sitting. Its shortness is something of a detraction, especially considering that the addition of a level editor and a way to share user created levels would ensure a near endless supply of content. Still, for the asking price I don’t think anyone will really mind the lack of content, as $1 for 1 hour of entertainment is pretty good by anyone’s standards.
Hook is available on iOS, Android, Windows Phone and PC right now for $0.99 on all platforms. The game was played on the PC with a total of 1 hour of playtime.